Finite-time Fault-Tolerant Control for a Stewart Platform using Sliding Mode Control with Improved Reaching Law

In this paper, a fault-tolerant control (FTC) is proposed for a nonlinear system, namely a Stewart platform (SP). To avoid the singularity issue of a traditional fast terminal sliding mode control (FTSMC) and to obtain fast finite-time convergence, a nonsingular fast terminal sliding mode control (NFTSMC) is used. In addition, an extended state observer (ESO) is incorporated into the control scheme to estimate uncertainties, disturbances, and faults. To increase the convergence speed and alleviate the chattering phenomenon, a novel reaching law is proposed which gives the system a quick reaching speed. Finally, a novel FTC that ensures robustness to disturbances and faults is developed based on the NFTSMC, the ESO, and the proposed reaching law. Consequently, the proposed FTC has outstanding features such as high tracking performance, reduced effects of disturbances and faults, fast convergence in finite time, and less chattering. The simulation and experiment results demonstrate the efficiency of the proposed FTC compared to other control schemes.

I. INTRODUCTION Robots play an increasingly important role in human life these days. They are used to perform complicated tasks in many fields such as industrial manufacturing, medicine, civil engineering, and aerospace. Nevertheless, in practice we face some inevitable problems during the operation of robots, such as uncertainty, disturbances, unmodeled dynamics, and friction, which may lead to serious destabilization of the system. This has posed great obstacles and challenges in designing controllers for robot manipulators. Therefore, the requirement for precise and robust control has attracted a massive number of researchers over the past decades. Various solutions for improving robot performance have been developed, such as adaptive control [1], [2], neural network control [3], [4], and sliding mode control (SMC) [5]-[7], for nonlinear systems in general and for the Stewart platform (SP) in particular. An SP is a parallel manipulator that has six prismatic actuators connecting the fixed base and the moving platform. Thanks to its outstanding benefits of high precision, good rigidity, and higher payloads compared with serial robots, it is extensively applied in industry, telescopes, flight and vehicle simulators, entertainment, and medical instruments [8]-[10]. Nonetheless, due to the inherent complexity of the kinetic analysis of its closed-loop structure, the application of an SP is often challenging. Hence, various kinematic and dynamic investigations have been reported in the literature [11]-[14], and several control technologies for the SP have been studied over the years [2], [4], [7]. Among them, SMC possesses the fascinating characteristics of robustness to disturbances and uncertainties and low sensitivity to noise. Nevertheless, conventional SMC cannot ensure that the states of the system approach the equilibrium point in finite time. Therefore, to ensure that the system state converges quickly in finite time, nonsingular fast terminal sliding mode control (NFTSMC) was developed and has received much attention from many researchers [5], [6]. It not only preserves the robustness of traditional SMC, but also achieves fast convergence in finite time and avoids the singularity issue of fast terminal sliding mode control (FTSMC).
Although NFTSMC has many advantages, in practice faults may occur in the system, and NFTSMC alone cannot ensure its stability. Therefore, this problem has been investigated extensively over the years, and several fault-tolerant technologies have been proposed to increase the safety of robotic systems. In general, there are two major types of fault-tolerant control (FTC): passive FTC (PFTC) [15], [16] and active FTC (AFTC) [17], [18]. A PFTC is designed without a fault diagnosis module for both normal and faulty operation, and depends on the robustness of the controller to address lumped disturbances, uncertainty, and faults. The most notable feature of PFTC is its quick response to the occurrence of faults, because it does not wait for fault-estimation feedback; however, its ability to compensate for high-magnitude faults is restricted. As a result, there are some limitations to the application of PFTC in actual systems. In contrast, the key feature of AFTC is its use of an estimation module to compensate for unpredictable faults in mechanical components, sensors, and actuators, so as to preserve the stability of the system within performance requirements. The robust response of AFTC to faults primarily depends on the efficiency of the estimation module. Hence, a series of active fault-tolerant strategies have been developed for robotic systems based on various observers, such as the sliding mode observer [19], the fuzzy observer [17], and the extended state observer (ESO) [18]. Compared to the other methods, the ESO is an efficient way to estimate faults and is easy to implement in practice. Nonetheless, it is well known that the conventional ESO has several drawbacks, such as the peaking phenomenon, which can cause serious stability deterioration of the overall system [20], and its trade-off between the speed of estimation and insensitivity to measurement noise [21]. Many researchers have introduced solutions for decreasing the magnitude of peaking and ensuring robustness to measurement noise [22]-[24]. In [24], Ran et al. proposed a new ESO that was effective in lessening the peaking issue and was less sensitive to measurement noise. Thus, given the significant benefits mentioned above, in this study a NFTSMC and an ESO [24] are applied in a FTC scheme to considerably improve its performance regardless of the presence of faults in the SP. Although the accuracy of the system can be improved by the FTC schemes described above, researchers have also developed various methods to speed up the reaching rate and diminish chattering, which is a major issue in SMC. The chattering problem not only destabilizes the system but also seriously limits its practical applications. Hence, it is of great interest to resolve this issue, and strategies such as the boundary layer method [25], [26], high-order SMC [27], [28], and the reaching law SMC method [29]-[33] have been developed. Of these, the reaching law SMC method has attractive advantages: it not only effectively decreases chattering, but also improves the rate of the approaching phase. In [29], three continuous-time reaching laws were proposed by Gao et al. First, the constant rate reaching law is a simple method that makes the state slide toward the sliding surface at a constant rate. Its drawback is the trade-off between the speed of the approaching phase and the magnitude of oscillation in the sliding phase.
Next, a modification to the constant reaching law, called the constant plus proportional rate reaching law, can reduce the oscillation to a certain level. The final method is the power rate reaching law, which can decrease chattering. Based on these methods, several further investigations of reaching laws have been carried out over the years. Wang [30] used a double-power reaching law to further enhance the efficiency of the power reaching law and reduce chattering, and an improved double-power reaching law was proposed by Tao [31]. Fallaha [32] introduced the exponential reaching law, which can increase the convergence speed and reduce oscillation. Yang [33] designed a piecewise fast multi-power reaching law based on the fast-power and double-power reaching laws. Generally, the power reaching law has excellent reaching performance and less chattering. Inspired by these works, a new reaching law (NRL) is proposed in this paper to further reduce the reaching time and the chattering problem. The finite-time stability of this new reaching law is demonstrated, as well as its ability to give the system a fast reaching speed. A dynamic coefficient is used to accelerate the convergence rate and minimize the chattering amplitude when the system approaches the sliding surface. As a result, this paper illustrates the performance of the proposed FTC scheme combining NFTSMC, the ESO [24], and the NRL, which has the benefits of easy implementation, singularity avoidance, robustness to uncertainties and faults, a decrease of the peaking issue, high accuracy, chattering alleviation, and rapid convergence in finite time. In the simulation and experiment parts, a comparison between the proposed FTC and the control scheme without the estimation module is shown to demonstrate the usefulness of the ESO [24] in the proposed FTC under the occurrence of faults. In addition, this paper compares the performance of the proposed FTC with control schemes using other reaching laws to prove the effect of the NRL in enhancing the reaching speed. Accordingly, the validity of the proposed FTC using the new ESO [24] and the proposed reaching law is evaluated. The remainder of this study is organized as follows: the NRL is described in Section II; Section III presents the SP dynamics; the traditional ESO [34] and the new ESO [24] are introduced in Section IV; Section V presents the proposed FTC based on NFTSMC, the ESO [24], and the NRL. The results of the control performance in the simulation and experiment are given in Sections VI and VII, respectively. Finally, conclusions are discussed in Section VIII.

II. A NEW REACHING LAW As mentioned above, many valid methods have been investigated to reduce chattering in SMC. Among them, improving the reaching law in SMC can not only eliminate the oscillation but also make the system approach the sliding surface rapidly. Thus, many reaching laws have been proposed, such as the quick-power reaching law (QPRL) and the double-power reaching law (DPRL), which have excellent reaching performance. The QPRL combines the power rate reaching term and a proportional rate term with constant coefficients and can be designed as

ṡ = −k₁|s|^{w₁} sgn(s) − k₂s,   (1)

whereas the DPRL has two power terms and can be described as

ṡ = −k₁|s|^{w₁} sgn(s) − k₂|s|^{w₂} sgn(s),   (2)

where k₁ > 0, k₂ > 0, 0 < w₁ < 1, and w₂ > 1. The first term on the right-hand side of the DPRL (2) plays the main role when |s| < 1, while the second term plays the main role when |s| > 1.
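The finite-time reaching guaranteed by such power terms can be seen from a one-line integration. As a worked illustration (for a single power term in isolation, not the full laws (1) and (2)), consider

\[ \dot{s} = -k\,|s|^{w}\,\operatorname{sgn}(s), \qquad k > 0,\; 0 < w < 1. \]

For s(0) > 0, separating variables gives

\[ t_{\text{reach}} = \int_{0}^{s(0)} \frac{ds}{k\,s^{w}} = \frac{s(0)^{\,1-w}}{k\,(1-w)} < \infty, \]

which is finite precisely because 0 < w < 1. The same computation with w > 1 diverges near s = 0, which is why the DPRL pairs a w₂ > 1 term (fast far from the surface) with a 0 < w₁ < 1 term (finite-time convergence near the surface).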
It is well known that the reaching speed of the QPRL (1) is slower than that of the DPRL (2) when the states of the system are far away from the sliding surface, i.e., when |s| > 1, but the reaching speed of the QPRL (1) is faster than that of the DPRL (2) when the states approach the sliding surface, i.e., when |s| < 1. Taking advantage of the benefits of both the QPRL and the DPRL, the NRL is proposed in (3); it combines a power term, whose first coefficient and second power exponent change dynamically with the magnitude of s, with a switching term in which the hyperbolic tangent function tanh(s) replaces the sign function sgn(s). Here r > 1; k₁, k₂, ε, c, and η are positive constants; and 0 < ε < 1. Using tanh(s) instead of sgn(s) further reduces chattering when the system is close to the sliding surface. Figure 1 compares the values of the sign function sgn(s) and the hyperbolic tangent function tanh(s). It can be seen that the value of tanh(s) changes smoothly, and as s approaches zero the magnitude of tanh(s) decreases dramatically, making the first term of (3) decline. This is very helpful for rejecting the oscillation when the system is near the sliding surface. As an example, the parameters in (1), (2), and (3) are set to k₁ = 3, k₂ = 4, w₁ = 0.8, w₂ = 1.5, r = 1.5, ε = 0.1, η = 0.1, and c = 0.2. We test the convergence speeds of the three reaching laws for two cases in which the initial value of s is given as s(0) = 10 and s(0) = 1. Figure 2 shows the simulation results for the reaching laws. As can be seen, when |s| < 1 the convergence rate of the NRL (3) is faster than that of the DPRL (2); on the other hand, when |s| > 1 the convergence speed of (3) is faster than that of the QPRL (1). Remark: when s approaches zero, (3) is approximately equivalent to the reduced expression (4), which achieves the goal of less chattering. Therefore, the NRL not only has a fast reaching rate at the different stages but also decreases chattering.

CONVERGENCE ANALYSIS For the NRL (3), select the Lyapunov function V₁ = 0.5s². Its derivative along (3) satisfies V̇₁ = sṡ < 0 for s ≠ 0; thus the stability condition is guaranteed. Case 1: assume s(0) > 1; the reaching process can then be divided into two stages. For the first stage, in which s decreases from s(0) to 1, (3) reduces to its dominant term, and the convergence time of this stage is obtained by integrating (3) from s(0) to 1. For the second stage, in which s decreases from 1 toward 0, (3) reduces accordingly. In practice, s may only approach a value near zero, and we assume that this residual value is small enough, e.g., 0.001 or 0.0001. In this case there may be a small steady-state error, but it will hardly influence the convergence precision of the system. We therefore define the convergence value as a constant σ approaching zero, and the convergence time of this stage is obtained by integrating (3) from 1 to σ. Hence, the total time t_{s1} is the sum of the two stage times. Case 2: assuming s(0) < −1, the reaching behavior also has two stages. The analysis in this case is similar to that of Case 1, and the total time t_{s2} is calculated in the same manner. Overall, the sliding variable s reaches a value approaching 0 within a finite time t_s for any initial condition s(0).
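The speed comparison above is easy to reproduce numerically. The following minimal sketch (ours, not the authors' MATLAB/Simulink code) integrates the two baseline laws (1) and (2) with forward Euler using the example gains from this section; the NRL itself is omitted because its exact expression (3) is not recoverable from this copy. The threshold 1e-3 plays the role of the convergence value σ.

```python
import numpy as np

def qprl(s, k1=3.0, k2=4.0, w1=0.8):
    # Quick-power reaching law (1): power term plus proportional term.
    return -k1 * abs(s) ** w1 * np.sign(s) - k2 * s

def dprl(s, k1=3.0, k2=4.0, w1=0.8, w2=1.5):
    # Double-power reaching law (2): two power terms.
    return -k1 * abs(s) ** w1 * np.sign(s) - k2 * abs(s) ** w2 * np.sign(s)

def reach_time(law, s0, dt=1e-4, sigma=1e-3, t_max=5.0):
    """Integrate ds/dt = law(s) with forward Euler until |s| <= sigma."""
    s, t = s0, 0.0
    while abs(s) > sigma and t < t_max:
        s += dt * law(s)
        t += dt
    return t

for s0 in (10.0, 1.0):
    print(f"s(0) = {s0:5.1f}:  QPRL {reach_time(qprl, s0):.4f} s,"
          f"  DPRL {reach_time(dprl, s0):.4f} s")
```

Matching the discussion above, the DPRL's |s|^{w₂} term helps when s(0) is large, while the QPRL's proportional term helps once |s| < 1.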
III. STEWART PLATFORM DYNAMICS Figure 3 shows an SP, mainly constructed of six linear actuators, a fixed base, and a moving platform. The kinematics and dynamics of the SP have been studied in much previous research [11]-[14]. We assume that the position of the center of the moving platform is [px, py, pz] with respect to a coordinate frame {O} placed at the center of the fixed base, and that the orientation of the moving platform is described by a rotation of angle φx about the x-axis of {O} (roll), then about the y-axis of {O} by angle φy (pitch), and about the z-axis of {O} by angle φz (yaw). Generally, the dynamic equation of the SP can be given as the second-order model (17). However, in practice there are certain faults, such as sensor faults, mechanical faults, and actuator faults. In this paper, we consider actuator faults.

IV. THE EXTENDED STATE OBSERVER FOR ESTIMATION OF THE UNCERTAINTY, DISTURBANCE, AND FAULT The dynamic model (17) can be rewritten in state space as (18). Defining the lumped term of uncertainties, disturbances, and faults as the extended state of the system (18), (18) becomes (19). According to [34], a conventional linear ESO can then be designed as (20), where x̂₁, x̂₂, x̂₃ are the observer states; β₁, β₂, β₃ are positive constants chosen so that the polynomial s³ + β₁s² + β₂s + β₃ is Hurwitz; and μ < 1 is a small positive constant. As mentioned above, some disadvantages of the conventional ESO (20) (ESO1) are the peaking issue and high sensitivity to measurement noise. Hence, a different ESO was proposed by Ran et al. [24] to decrease the influence of these downsides on the system; it is described in (21). Ran et al. [24] demonstrated the convergence of the new ESO (21) (ESO2): there exist δ > 0 and T > 0 such that the estimation errors remain bounded by δ for all t ≥ T.

V. DESIGN OF A FAULT-TOLERANT CONTROL In this section, a fault-tolerant control based on NFTSMC, ESO2, and the improved reaching law (3) is developed for the SP. The sliding surface of the NFTSMC is defined as

s = e + λ₁e^{l/h} + λ₂ė^{p/q},   (22)

where e = X − Xd is the tracking error, Xd is the desired trajectory in Cartesian space, l, h, p, and q are positive odd integers with 1 < p/q < 2 and l/h > p/q, and λ₁ and λ₂ are positive constants. Taking the time derivative of (22) yields (23). Substituting (19) into (23) yields (24). Applying ESO2 and the proposed reaching law (3) to the control input, the proposed FTC law (25) is obtained; it consists of an equivalent control term and a switching term. Theorem: consider the SP described in (19) with the nonsingular fast terminal sliding surface defined in (22), the ESO in (21), the proposed reaching law in (3), and the FTC law designed in (25). Then the tracking error e converges to zero within a finite time.
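To make the extended-state idea concrete before the simulation results, here is a toy sketch of a conventional linear high-gain ESO of the form described in Section IV, applied to a scalar double integrator with an unknown lumped disturbance. The plant, the gains β₁, β₂, β₃ (chosen so that s³ + β₁s² + β₂s + β₃ is Hurwitz), and μ are illustrative assumptions, not the paper's SP model or tuning.

```python
import numpy as np

# Toy double integrator: x1' = x2, x2' = u + d(t), with unknown lumped d(t).
# This is an illustrative stand-in for (18), not the SP model of the paper.
def d_lumped(t):
    return 2.0 * np.sin(3.0 * t) + (1.0 if t > 5.0 else 0.0)  # "fault" at 5 s

# Observer gains: s^3 + b1 s^2 + b2 s + b3 = (s + 1)^3 is Hurwitz.
b1, b2, b3 = 3.0, 3.0, 1.0
mu = 0.05                       # small high-gain parameter, mu < 1
dt, T = 1e-4, 10.0

x = np.array([1.0, 0.0])        # true plant state
z = np.array([0.0, 0.0, 0.0])   # estimates of x1, x2, and the extended state
for k in range(int(T / dt)):
    t = k * dt
    u = -2.0 * z[0] - 2.0 * z[1] - z[2]   # feedback on estimates + cancellation
    # plant update (forward Euler)
    x = x + dt * np.array([x[1], u + d_lumped(t)])
    # linear ESO with the structure of (20): only x1 is measured
    e = x[0] - z[0]
    z = z + dt * np.array([z[1] + (b1 / mu) * e,
                           z[2] + u + (b2 / mu ** 2) * e,
                           (b3 / mu ** 3) * e])

print("final state:", x, " disturbance estimate error:", d_lumped(T) - z[2])
```

Shrinking μ speeds up the estimation but amplifies measurement noise and peaking, which is exactly the trade-off of ESO1 that ESO2 [24] is designed to mitigate.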
VI. SIMULATION RESULTS To demonstrate the effectiveness of the proposed FTC, simulation results are presented in this section. First, the mechanical model of the SP was designed in SolidWorks. Next, it was exported to the Simulink environment via the Simscape Multibody Link tool, and the simulation was executed in MATLAB/Simulink. The parameters of the SP a, b, y, u, mp, Ixx, Iyy, and Izz were given in SolidWorks as 54 mm, 198 mm, 54 mm, 126 mm, 145 g, 296,223 g·mm², 296,223 g·mm², and 588,962 g·mm², respectively. First, the performance of the proposed FTC with ESO2 was compared to the FTC with the conventional ESO1 and the NFTSMC without an ESO. The control input of the NFTSMC is given in (36); the control parameters in (36) and (25) were given accordingly [parameter list lost]. Multiple faults were assumed to arise at the first, third, and fifth actuators at 5 s. The torque functions with multiple faults were given in (16), with f₂(t) = f₄(t) = f₆(t) = 0 for the unaffected actuators. Figure 4 shows the tracking trajectory and performance of the NFTSMC without an ESO (NFTSMC), the FTC using the conventional ESO1 (FTC-NFTSMC1), and the proposed FTC (25) using ESO2 (Proposed FTC). When no faults occurred during the first 5 s, the tracking performances of the controllers were almost the same. However, the performances changed significantly after the faults appeared. As shown, FTC-NFTSMC1 and Proposed FTC performed considerably better than NFTSMC because the faults were efficiently estimated and compensated by ESO1 and ESO2. Furthermore, Proposed FTC exhibited less peaking than FTC-NFTSMC1 and NFTSMC. This demonstrates the success of ESO2 in lessening the peaking value compared to the conventional ESO1. Next, to verify the effectiveness of the improved reaching law, the performance of the proposed FTC law (25) using the NRL (3) (Proposed FTC) was compared with that of the FTC laws using the QPRL (FTC-NFTSMC2-QPRL) and the DPRL (FTC-NFTSMC2-DPRL), described in (37) and (38), respectively. Proposed FTC, FTC-NFTSMC2-QPRL, and FTC-NFTSMC2-DPRL all use the same ESO2. The parameters in (37) and (38) were given as λ₁ = 0.1, λ₂ = 0.02, l = 27, h = 19, q = 19, p = 21, k₁ = 5, k₂ = 2,000, w₁ = 0.8, and w₂ = 1.1. The tracking performances of the three controllers are shown in Figure 6. As can be seen, the convergence speed of Proposed FTC was faster than that of FTC-NFTSMC2-QPRL and FTC-NFTSMC2-DPRL. Moreover, all three controllers maintained small tracking errors in the presence of the actuator faults, which demonstrates the efficiency of ESO2 in compensating for the faults regardless of which reaching law was used in the FTC law. In addition, when the faults occurred, Proposed FTC and FTC-NFTSMC2-QPRL performed slightly better than FTC-NFTSMC2-DPRL, because the convergence speed of FTC-NFTSMC2-DPRL is slightly slower than that of the other two when the system states vary around the sliding surface, as mentioned in Section II. Therefore, the proposed FTC scheme achieved not only a fast transient response but also robustness to the lumped uncertainty and faults of the system, together with a decrease in the peaking value. The control signals of all controllers are illustrated in Figures 5 and 7.

VII. EXPERIMENTAL RESULTS This section describes implementations of the proposed FTC, compared with the other controllers, on an actual SP assembled from plastic upper and lower platforms and six MightyZap actuators (12Lf-17F-90; IR Robot Co., Ltd., Korea). This actual SP, shown in Figure 8, was designed with parameters a, b, y, and u set to 54 mm, 198 mm, 54 mm, and 126 mm, respectively. The reference trajectory of the upper platform was given in (35). The parameters in the FTC law (25) were given accordingly [parameter list lost]. Next, we assumed that multiple faults occurred at the first, third, and fifth actuators at 5 s, as described in the Simulation section (VI above), and the torque functions with multiple faults were described in (16), with the parameters set as in the Simulation section. The tracking trajectory and performance of NFTSMC, FTC-NFTSMC1, and Proposed FTC are illustrated in Figure 9. The performances of FTC-NFTSMC1 and Proposed FTC were slightly better than that of NFTSMC within the first 5 s. When the actuator faults appeared after 5 s, the tracking errors of FTC-NFTSMC1 and Proposed FTC were considerably lower than those of NFTSMC, owing to the successful compensation of ESO1 and ESO2 for the disturbances, uncertainties, and faults. In addition, the peaking value of Proposed FTC was slightly lower than that of FTC-NFTSMC1 when the fault occurred.
It should be noted that the actual SP may have been subject to different uncertainties and disturbances than the simulation, so the performance results of the actual SP differed from those of the simulation. Overall, FTC-NFTSMC1 and Proposed FTC had smaller tracking errors than NFTSMC, while Proposed FTC achieved slightly higher accuracy than FTC-NFTSMC1. Figure 10 shows the control signals of the three controllers. FTC-NFTSMC1 and Proposed FTC use ESO1 and ESO2, respectively, to compensate for uncertainties, disturbances, and faults, while NFTSMC has no such compensation. Hence, the controllers produced markedly different input forces at each joint. To evaluate the efficiency of the proposed reaching law, we investigated the performance of FTC-NFTSMC2-QPRL (37), FTC-NFTSMC2-DPRL (38), and our Proposed FTC (25) in controlling the actual SP. The parameters in (37) and (38) were given as λ₁ = 0.1, λ₂ = 0.02, l = 27, h = 19, q = 19, p = 21, k₁ = 5, k₂ = 400, w₁ = 0.8, and w₂ = 1.1. Figure 11 illustrates the trajectory and performance of Proposed FTC compared with FTC-NFTSMC2-QPRL and FTC-NFTSMC2-DPRL. In general, the three controllers had similar performance in the presence of the actuator faults. Also, due to possible limitations of the hardware, the difference in the convergence speeds of the controllers was not as clearly visible as in the simulation. The three controllers use the same ESO2 for estimation and compensation; thus there were no significant differences in the magnitudes of the control signals at each leg, as shown in Figure 12.

VIII. CONCLUSION In this paper, a new fault-tolerant scheme was proposed for a Stewart platform. First, a NFTSMC was used in the FTC to enhance the convergence speed of the state in finite time without the singularity issue. Then, ESO2 was applied in the FTC not only to effectively estimate and compensate for the uncertainty, disturbances, and faults, but also to reduce the peaking issue of the conventional ESO1. To further enhance the speed of the reaching phase and decrease chattering, an improved reaching law (3) was designed, and its quick finite-time convergence was demonstrated. Consequently, the new FTC showing the above benefits was derived by combining the NFTSMC, ESO2, and the novel reaching law (3). To assess the efficiency of the proposed FTC, the same desired trajectory and fault assumption were used for all controllers throughout the simulations and experiments. Next, we showed a comparison between the proposed FTC, the control law using the traditional ESO1, and the control law without an ESO to evaluate the effectiveness of ESO2 in the FTC scheme. Then, the performance of the proposed FTC and of the other FTC schemes using the same ESO2 but different reaching laws was exhibited to demonstrate the enhancement of the convergence rate due to the NRL. Through the simulations and experiments, we confirmed that the proposed FTC is easy to implement and combines the inherent advantages of the NFTSMC, the estimation and compensation ability of ESO2, and the reaching speed improvement of the improved reaching law (3). Thus, the proposed FTC showed remarkable features such as insensitivity to uncertainties, disturbances, and faults, a reduced peaking value, high precision and robustness, rejection of the singularity issue, less oscillation, and a fast convergence speed in finite time.
Endoscopic sleeve gastroplasty requiring conversion to partial gastrectomy with paraesophageal hernia repair

Abstract Endoluminal bariatric surgery has lower costs and perceived risks compared to traditional surgery. Endoluminal procedures are a newer approach to weight loss, but long-term outcomes and complications continue to emerge. This case report describes an endoscopic sleeve gastroplasty that resulted in a paraesophageal hernia repair with removal of the gastroplasty sutures and partial gastrectomy.

INTRODUCTION Endoluminal bariatric surgery is an alternative treatment due to its potentially lower costs and risks. However, due to the paucity of research, it is difficult to compare the outcomes of endoluminal versus laparoscopic approaches to weight loss surgery. Outcomes for endoscopic sleeve gastroplasty (ESG) are still being studied, and complications are continuing to emerge. Our case will discuss an ESG that resulted in the patient requiring a paraesophageal hernia repair with removal of the gastroplasty sutures and partial gastrectomy.

CASE The patient is a 49-year-old female with a BMI of 31 kg/m² who presented with worsening dysphagia, pain, heartburn, and progressively worsening nausea and vomiting since her ESG. She had undergone an ESG one year prior with intragastric plication sutures; however, she did not follow up with her surgeon. Associated symptoms had inadequate relief with smaller meals and proton pump inhibitors. Of note, her symptoms did not include weight loss. She underwent an upper gastrointestinal series that showed a normal-appearing stomach without any evidence of her prior gastroplasty, a moderately sized hiatal hernia, and a slight delay in the passage of contrast through the gastroesophageal junction (Fig. 1). On subsequent endoscopy, she was noted to have multiple loosely placed sutures within the gastric lumen along the greater curvature (Fig. 2), as well as a large paraesophageal hernia. The patient subsequently underwent a robotically assisted laparoscopic paraesophageal hernia repair with mesh, partial gastrectomy with removal of gastric foreign bodies, and flexible endoscopy. Intraoperatively, on the initial laparoscopic approach, only small dimples were noted along the greater gastric curve, without any noticeable plication (Fig. 3). The stomach was twisted up into the hiatal hernia. Multiple permanent sutures with T-fasteners were seen on endoscopy, and attempts to remove them endoscopically were unsuccessful. These sutures were not full thickness, so a gastrotomy was made to remove them. A partial gastrectomy was performed, including the fundus. The diaphragm was repaired with interrupted silk sutures and reinforced with a bioabsorbable mesh. The patient recovered well and was discharged home on postoperative day two, tolerating a diet. She was seen at follow-up with complete resolution of her preoperative symptoms.

DISCUSSION Bariatric surgery requires a multidisciplinary approach with an intensive preoperative and postoperative course.
These patients are expected to adhere to preoperative and postoperative care, including long-term follow-up. Although ESG is considered a 'non-surgical' approach to weight loss, patients should still undergo the standard bariatric multidisciplinary workup. ESG was first reported in 2013; other endoluminal bariatric therapies include the intragastric balloon, endoscopic bypass, and endoscopic gastroplasty. These procedures may have lower costs and fewer risks compared to laparoscopic surgery. However, ESG is a new procedure, and its complications may be under-reported in the literature. In addition, patients may not seek medical care at the same facilities or practices, which introduces flaws into the currently published studies. ESG has been described as having the advantages of maintaining anatomic structure, potential reversibility, and repeatability [1]. Reported total body weight loss with ESG is 14.9-15.2% at 6 months, with low rates of adverse events (2-2.7%) [2, 3]. By comparison, laparoscopic sleeve gastrectomy (LSG) demonstrates total body weight loss of 24% at 6 months. At the 1-year mark, these differences become less apparent per Novikov et al., with comparable weight loss between ESG and LSG only in the BMI < 40 kg/m² group [4]. In terms of the reversibility of ESG, our case showed that this was not achievable: a gastrectomy was required due to the inability to remove the sutures endoscopically. Studies have also shown lower rates of adverse events with ESG. These events include perigastric leaks, perigastric inflammation, hemorrhage, pneumoperitoneum, pneumothorax, and pulmonary embolism. This compares with LSG, where major adverse events occur in as many as 5% of cases [5], with perigastric leak being the most dreaded complication. However, newer advancements in technology and technique have reduced the leak rate to less than 1% [6]. Lopez-Nava et al. also describe the importance of technique and training when performing these advanced endoscopic procedures, reporting intraoperative bleeding in 10% of their cases. They suggest the learning curve to be approximately 5-15 procedures, although based on Watson et al. the learning curve may require 50 procedures or more [7]. The rate of procedure failure for ESG has been reported to be greater than 50%, with loose sutures requiring reoperation [8]. This is related to the failure of full-thickness suturing, as was seen in our patient. Our upper gastrointestinal series failed to show the normal tubular appearance of the stomach expected after an ESG. A paraesophageal hernia is a contraindication to ESG and likely contributed to the failure. It is possible that the restriction created by the hernia made it difficult to ensure full-thickness sutures during the initial ESG. The operative report stated that no hiatal hernia was present before the placement of the sutures; however, no other diagnostic imaging was performed beforehand. This raises the question of whether the hiatal hernia was present before the ESG or was a complication of the procedure. In case of the latter, we want to raise awareness of this potential complication. We recommend that ESG be avoided in patients with a hiatal hernia due to the high likelihood of failure.
Non-commutative nodal curves and derived tame algebras

In this paper, we develop a geometric approach to the study of derived tame finite dimensional associative algebras, based on the theory of non-commutative nodal curves. Let k be an algebraically closed field such that char(k) ≠ 2. For λ ∈ k \ {0, 1}, let Y_λ be the corresponding elliptic curve and ı : Y_λ → Y_λ be the involution given by the rule (x : y : z) ↦ (x : −y : z). Let T_λ be the tubular canonical algebra of type ((2, 2, 2, 2); λ) [46], i.e. the path algebra of the quiver (1) [diagram lost] modulo the relations b₁a₁ − b₂a₂ = b₃a₃ and b₁a₁ − λb₂a₂ = b₄a₄. According to Geigle and Lenzing [25, Example 5.8], there exists an exact equivalence of triangulated categories

D^b(Coh_G(Y_λ)) ≃ D^b(T_λ−mod),   (2)

where G = ⟨ı⟩ ≅ Z₂ and Coh_G(Y_λ) is the category of G-equivariant coherent sheaves on Y_λ. It is well known that D^b(Coh_G(Y_λ)) and D^b(T_λ−mod) have tame representation type; see [3, 30, 39]. At this place one can ask the following natural

Question. Is there any link between D^b(Coh_G(Y_λ)) and D^b(T_λ−mod) when the parameter λ ∈ k takes the "forbidden" value 0?

Example. Let p = ((3, 2)), let q be void (i.e. r = 1 and s = 0) and let ≈ be given by the rule x⁺₁,₁ ≈ x⁻₁,₁ and x⁺₁,₃ ≈ x⁻₁,₂. Then the corresponding gentle algebra Λ(p, ≈) is the path algebra of the quiver (4) [diagram lost] subject to the relations u₁x⁺₁,₁ = 0 = v₁x⁻₁,₁ and u₂x⁺₁,₃ = 0 = v₂x⁻₁,₂.

Example. Let p = ((1, 1), (1, 1)), q = (2) and ≈ be given by the rule x⁺₁,₁ ≈ w₁,₁, x⁻₁,₁ ≈ x⁺₂,₁ and w₁,₂ ≈ w₁,₂. Then the corresponding algebra Λ(p, q, ≈) is the path algebra of the quiver (5) [diagram lost] modulo the relations

z₁,₂z₁,₁ + w₁,₂w₁,₁ + t₁,₂t₁,₁ = 0,  v₃^± w₁,₂ = 0,  u₁x⁻₁,₁ = 0 = v₁x⁺₂,₁,  u₂x⁺₁,₁ = 0 = v₂w₁,₁.

Let (p, q, ≈) be a datum as in the definition of Λ(p, q, ≈), additionally satisfying a certain admissibility condition. It turns out that it defines (uniquely up to Morita equivalence) a tame non-commutative projective nodal curve X = X(p, q, ≈). Conversely, any tame non-commutative projective nodal curve is Morita equivalent to X(p, q, ≈) for an appropriate admissible datum (p, q, ≈); see [23]. This class of non-commutative nodal curves includes as a special case the stacky chains and cycles of projective lines considered by Lekili and Polishchuk [37, 38] in the context of homological mirror symmetry for compact surfaces with non-empty boundary; see Example 4.20 for a detailed discussion. The main result of this paper is the following.

Theorem (see Corollary 5.6). Let (p, q, ≈) be an admissible datum, X be the corresponding tame non-commutative nodal curve and Λ be the corresponding k-algebra. Next, let Y be the Auslander curve of X (which is another tame non-commutative projective nodal curve) and ν : X̃ → X be the hereditary cover of X. Then there exists a commutative diagram of triangulated categories and exact functors [diagram lost] in which T is an equivalence of triangulated categories, E and Ẽ are fully faithful functors, I is the canonical inclusion, P is an appropriate localization functor and ν_* is induced by the forgetful functor Coh(X̃) → Coh(X).

This theorem generalizes earlier results of the authors [14], where X was a commutative tame nodal curve (i.e. a chain or a cycle of projective lines [22, 12]). If q is void and ≈ does not contain reflexive elements, then the algebra Λ(p, ≈) is gentle. In Lemma 7.5, we compute the AG-invariant of Avella-Alaminos and Geiß [4] of Λ(p, ≈).
Recent results of Lekili and Polishchuk [37, 38] allow one to deduce from it a version of homological mirror symmetry for tame non-commutative nodal curves of gentle type. At this place we want to stress that the introduced class of algebras Λ(p, q, ≈) does not exhaust (even up to derived equivalence) all derived tame algebras which are derived equivalent to an appropriate non-commutative tame projective nodal curve. For example, in [9, Theorem 2.1] it was observed that on a chain of projective lines there exists a tilting bundle whose endomorphism algebra is a gentle algebra of infinite global dimension. In this paper, we have found another class of gentle algebras which are derived equivalent to appropriate non-commutative tame projective nodal curves. For any n ∈ N, let Υₙ be the path algebra of the quiver [diagram lost]. Since HH³(Υₙ) ≠ 0, the algebra Υₙ cannot be derived equivalent to any gentle algebra of the form Λ(p, ≈). On the other hand, we prove that D^b(Υₙ−mod) is equivalent to the derived category of coherent sheaves on the so-called Zhelobenko non-commutative cycle of projective lines; see Theorem 6.9.

Acknowledgement. We are thankful to Christoph Geiß for giving us a hint of the construction of the derived equivalence from Proposition 6.5, as well as to Yanki Lekili for explaining to us connections between gentle algebras and various versions of the Fukaya category of a compact surface with non-empty boundary. The first-named author is also indebted to Nicolo Sibilla for communicating to him the statement of Proposition 6.7 as well as for helpful discussions. The work of the first-named author was partially supported by the DFG project Bu-1866/4-1. The results of this paper were mainly obtained during the stay of the second-named author at the Max-Planck-Institut für Mathematik in Bonn.

Some algebraic prerequisites

2.1. Brief review of the theory of minors. Throughout this subsection, let R be a commutative noetherian ring. For any R-algebra C, we denote by C° the opposite R-algebra, by C−mod (respectively, mod−C) the category of finitely generated left (respectively, right) C-modules and by C−Mod (respectively, Mod−C) the category of all left (respectively, right) C-modules. For any C-module X we denote by add(X) the additive closure of X, i.e. the full subcategory of C−mod consisting of all direct summands of all finite direct sums of copies of X. In what follows, B is an R-algebra which is finitely generated as an R-module.

Definition 2.1. Let P be a finitely generated projective left B-module. Then the R-algebra A := End_B(P)° is called a minor of B; see [20, 17]. It is clear that P is a (B-A)-bimodule, and we have an exact functor G := Hom_B(P, −) : B−Mod → A−Mod.

Remark 2.2. In the case where P is a projective generator of the category B−Mod (meaning that for any object X of B−Mod there exists an epimorphism Pⁿ ↠ X for some n ∈ N), the Morita theorem asserts that the functor G is an equivalence of categories. It is well known that G induces an isomorphism of centers Z(B) → Z(A).

It is not difficult to prove the following result.

Lemma 2.3. Consider the dual (right) B-module P^∨ := Hom_B(P, B). The following statements hold.
• The canonical morphism P → P^∨∨ = Hom_B(P^∨, B) is an isomorphism of left B-modules. Moreover, the canonical morphism of R-algebras A → End_B(P^∨)° is an isomorphism too.
• For any object X of B−Mod, the canonical morphism of left A-modules P^∨ ⊗_B X → Hom_B(P, X) is an isomorphism, i.e. we have an isomorphism of functors G ≅ P^∨ ⊗_B −. As a consequence, P^∨ is a flat (actually, even projective) right B-module.
• In particular, the canonical morphism [statement lost].

Using Lemma 2.3, one can deduce the following results.

Theorem 2.4. Consider the functors F := P ⊗_A − and H := Hom_A(P^∨, −) from A−mod to B−mod. Then the following statements hold.
• The functors (F, G, H) form an adjoint triple, i.e. (F, G) and (G, H) form adjoint pairs.
• The functors F and H are fully faithful, whereas G is essentially surjective.
• Let I_P := Im(P ⊗_A P^∨ → B), the image of the evaluation map. Then I_P is a (B-B)-bimodule and ker(G) := { X ∈ B−mod | I_P X = 0 }. In other words, the kernel of the exact functor G can be identified with the essential image of the (fully faithful) restriction functor B̄−mod → B−mod, where B̄ = B/I_P. Moreover, the functor G induces an equivalence of categories between the quotient B−mod/ker(G) and A−mod.
• The same results remain true when we consider G, F and H as functors between the categories B−mod and A−mod of finitely generated modules.
• The essential image of the functor F : A−mod → B−mod is the category P−mod := { X ∈ B−mod | there exists an exact sequence P₁ → P₀ → X → 0 with P₀, P₁ ∈ add(P) }.

It turns out that the relation between B−mod, A−mod and B̄−mod becomes even more transparent when we pass to the setting of derived categories.

Theorem 2.5. The following statements hold.
• (LF, DG, RH) is an adjoint triple of functors.
• The functors LF and RH are fully faithful, whereas DG is essentially surjective.
• The essential image of LF is equal to the left orthogonal category ⊥B̄ of B̄ (viewed as a left B-module). Similarly, the essential image of RH is equal to the right orthogonal category B̄⊥.
• We have a recollement diagram [diagram lost], where D_{B̄}(B−mod) is the full subcategory of D(B−mod) consisting of those complexes whose cohomologies belong to B̄−mod.
• Assume that the (B-B)-bimodule I_P is flat viewed as a right B-module. [Intermediate statements lost.] Finally, suppose that gl.dim B̄ < ∞ and gl.dim A < ∞. Then we have a recollement diagram [diagram lost].

Remark 2.6. In the case P = Be for an idempotent e ∈ B, most of the results from this subsection are due to Cline, Parshall and Scott [19]. The "abelian" theory of minors attached to an arbitrary finitely generated projective B-module P was suggested for the first time in [20]. Detailed proofs of Theorems 2.4 and 2.5 can also be found in [17].

2.2. Generalities on orders. From now on, let R be an excellent reduced equidimensional commutative ring of Krull dimension one. Let K = Quot(R) = K₁ × ⋯ × K_r be the corresponding total ring of fractions, where K₁, …, K_r are fields.

Definition 2.7. An R-algebra A is an R-order if the following conditions hold.
• A is a finitely generated torsion free R-module.
• The K-algebra A_K := K ⊗_R A is semisimple.

Lemma 2.8. Let R be as above, R′ ⊆ R be a finite ring extension and A be an R-algebra. Then A is an R-order if and only if A is an R′-order. Moreover, if K′ := Quot(R′) then we have: A_{K′} ≅ A_K.

Proof. It is clear that A is finitely generated and torsion free over R if and only if it is finitely generated and torsion free over R′. Next, note that the ring extension R′ ⊆ R induces a finite ring extension K′ ⊆ K of the corresponding total rings of fractions. Moreover, the Chinese remainder theorem implies that the multiplication map K′ ⊗_{R′} R → K is an isomorphism. Therefore, for any finitely generated R-module M, the natural map K′ ⊗_{R′} M → K ⊗_R M is an isomorphism. □

Definition 2.9. Let A be a ring.
• A is a one-dimensional order (or just an order) provided its center R = Z(A) is a reduced excellent ring of Krull dimension one and A is an R-order.

Note that for any overorder A′ of A, the map K ⊗_R A′ → A_K is automatically an isomorphism. Hence, A′_K = A_K and A′ is an order over R. Proofs of all these results can for instance be found in [42, 43].

Theorem 2.11.
Let k be an algebraically closed field and let K be either k((w)) or the function field of an algebraic curve over k. Let Υ be a finite dimensional central simple algebra over K. Then Υ is a matrix algebra over K.

Proof. This is a restatement of the fact that the Brauer group of the field K is trivial; see for instance [citation lost].

The following result is well known; see [43].

Lemma 2.12. Let R be a discrete valuation ring, m its maximal ideal, k := R/m the corresponding residue field and K the field of fractions of R. For any sequence of natural numbers p = (p₁, …, p_r), consider the R-algebra H(R, p) given by (11) [display lost], where the size of the i-th diagonal block is (p_i × p_i) for each 1 ≤ i ≤ r and p := |p| = p₁ + ⋯ + p_r. Then H(R, p) is a hereditary R-order. In what follows, H(R, p) will be called the standard hereditary R-order of type p. For any M ∈ H(R, p) and any pair 1 ≤ i, j ≤ r, we denote by M^{(i,j)} the corresponding block of M, which is a matrix of size p_i × p_j. In particular, M^{(i,i)}(0) ∈ Mat_{p_i}(k). In the simplest case when p = p_r := (1, …, 1) (r times), we shall use the notation H_r(R) := H(R, p_r).

Theorem 2.13. Let k be an algebraically closed field, R = k⟦w⟧ and K = k((w)). Then the following results are true.
• Assume that H is a hereditary R-order in the central simple K-algebra Mat_p(K). Then there exists S ∈ Mat_p(K) such that H = Ad_S(H(R, p)) := S · H(R, p) · S^{−1} for some tuple p = (p₁, …, p_r) such that p = |p|. Moreover, such a tuple p is uniquely determined up to a cyclic permutation.
• Let H be a hereditary R-order. Then we have: H decomposes as a product H₁ × ⋯ × H_s, where H_i is some standard R-order for any 1 ≤ i ≤ s.
• For any vector p, the orders H(R, p) and H_r(R) are centrally Morita equivalent.

Proofs of these results can be found in [43].

Nodal orders

Nodal orders are appropriate non-commutative generalizations of the commutative nodal ring D := k⟦u, v⟧/(uv).

Definition and basic properties of nodal orders.

Definition 3.1. An order A is called nodal if its center is a semilocal excellent ring and there exists a hereditary overorder H ⊇ A such that the following conditions are satisfied.
• rad(A) = rad(H) (the common radical will be denoted J).
• For any finitely generated simple left A-module U we have: the length of H ⊗_A U over A is at most 2.

Remark 3.2. In what follows, hereditary orders will be considered as special cases of nodal orders. Next, it is clear that an order A is nodal if and only if its radical completion Â is nodal. Moreover, it is not difficult to show that for a nodal order A, the hereditary overorder H from Definition 3.1 is in fact uniquely determined and admits the following intrinsic description: [formula lost], where J is viewed as a right A-module and A_K is the rational envelope of A. For a nodal order A, the order H will be called the hereditary cover of A. Nodal orders were introduced by the second-named author in [21]. In that work it was shown that the category of finite length modules over a (non-hereditary) nodal order is representation tame and, conversely, that an order of non-wild representation type is automatically nodal. In our previous joint work [11] we proved that even the derived category of finite length representations of a nodal order has tame representation type.

Theorem 3.5. Let A be a nodal order. Then the following statements are true.
• Any overorder of A is nodal.
• Any minor of A is nodal. In particular, if A′ is Morita equivalent to A then A′ is a nodal order too.
• Let G be a finite group acting on A. If |G| is invertible in A then the skew group product A ∗ G is a nodal order.

Proofs of the above statements can be found in, or easily deduced from, the results of [21].
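Display (11), which defines H(R, p), did not survive in this copy. Under the usual convention for standard hereditary orders over a discrete valuation ring — diagonal and above-diagonal blocks with entries in R, below-diagonal blocks with entries in the maximal ideal m (an assumption here, though it is consistent with the block notation M^{(i,j)} of Lemma 2.12) — a small worked case reads

\[ H\bigl(R,(2,1)\bigr)=\begin{pmatrix} R & R & R\\ R & R & R\\ \mathfrak{m} & \mathfrak{m} & R \end{pmatrix}\subset \operatorname{Mat}_3(K), \qquad H_2(R)=\begin{pmatrix} R & R\\ \mathfrak{m} & R \end{pmatrix}. \]

Here the diagonal blocks of H(R,(2,1)) have sizes 2 and 1, so, for instance, M^{(2,1)} is the 1 × 2 block with entries in m.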
3.2. Combinatorics of nodal orders. Let k be an algebraically closed field. It turns out that nodal orders over the discrete valuation ring k⟦x⟧ can be completely classified.

Definition 3.6. Let Ω be a finite set and ≈ a symmetric but not necessarily reflexive relation on Ω such that for any ω ∈ Ω there exists at most one ω′ ∈ Ω (possibly ω′ = ω) such that ω ≈ ω′ (note that ≈ is automatically transitive). We say that ω ∈ Ω is simple if it is related to no element of Ω, reflexive if ω ≈ ω, and tied if ω ≈ ω′ for some ω′ ≠ ω. It is clear that any element of Ω is either simple, or reflexive, or tied with respect to the relation ≈. Given (Ω, ≈) as above, we define two new sets Ω‡ and Ω̃‡ by the following procedures.
• We get Ω‡ from Ω by replacing each reflexive element ω ∈ Ω by a pair of new simple elements ω⁺ and ω⁻. The tied elements of Ω‡ are the same as for Ω.
• The set Ω̃‡ is obtained from Ω‡ by replacing each pair of tied elements ω′, ω″ by a single element {ω′, ω″}.

A weight function wt‡ : Ω‡ → N determines a map (also called weight function) wt : Ω → N given by the rule wt(ω) := wt‡(ω⁺) + wt‡(ω⁻) for any reflexive point ω ∈ Ω, and wt(ω) := wt‡(ω) otherwise. Abusing notation, we shall drop the symbol ‡ in the notation of wt‡ and write wt for all weight functions introduced above.

Let (Ω, ≈) be as in Definition 3.6 and σ : Ω → Ω a permutation. Then we have a decomposition Ω = Ω₁ ⊔ ⋯ ⊔ Ω_t (12), where σ(Ω_i) = Ω_i and the restricted permutation σ|_{Ω_i} is cyclic for any 1 ≤ i ≤ t. In a similar way, we get a decomposition Ω‡ = Ω‡₁ ⊔ ⋯ ⊔ Ω‡_t.

Definition 3.8. Let Ω be a finite set and σ : Ω → Ω a permutation. A marking m of (Ω, σ) is a choice of an element ω_i ∈ Ω_i for any 1 ≤ i ≤ t, where the Ω_i are given by (12). Note that a choice of marking m makes each set Ω_i totally ordered (13): Ω_i = { ω_i, σ(ω_i), …, σ^{l_i−1}(ω_i) }, where l_i := |Ω_i|. Let ≈ be a relation on Ω as in Definition 3.6 and wt : Ω‡ → N a weight function. Then for any 1 ≤ i ≤ t, we get a vector p_i (14) [display lost].

Definition 3.9. Let (Ω, ≈, σ) be a datum as in Definition 3.8, m a marking of (Ω, σ) and wt : Ω‡ → N a weight function. For any 1 ≤ i ≤ t, let H_i := H(R, p_i) be the corresponding standard hereditary order (11). Then we put: H := H₁ × ⋯ × H_t (15). It is clear that H is a hereditary order, whose rational envelope is the semisimple algebra Λ given by (16), where K is the fraction field of R and s_i = |p_i| for 1 ≤ i ≤ t.

Remark 3.10. According to the definition (11) of a standard hereditary order, any matrix belonging to H_i is endowed with a division into vertical and horizontal stripes labelled by the elements of the set Ω_i. Moreover, for any reflexive element ω ∈ Ω_i, the corresponding vertical and horizontal stripes have further subdivisions labelled by the elements ω± ∈ Ω‡_i.

Definition 3.11. Let R be a discrete valuation ring and (Ω, σ, ≈, m, wt) a datum as above. Then we have a ring A = A(R, (Ω, σ, ≈, m, wt)) ⊆ H defined as follows: [description lost].

The proof of the following results is straightforward.

Theorem 3.12. Let R be a discrete valuation ring, (Ω, σ, ≈, m, wt) a datum as in Definition 3.8 and A = A(R, (Ω, σ, ≈, m, wt)) the corresponding ring from Definition 3.11. Then the following statements hold.
• [Beginning of statement lost.] If A is connected and R = k⟦w⟧, then the center of A is isomorphic to k⟦w₁, …, w_t⟧/(w_i w_j : 1 ≤ i ≠ j ≤ t).
• The ring A is an order whose rational envelope is the semisimple algebra Λ given by (16). The order H defined by (15) is an overorder of A.
• Let m̃ be any other marking of (Ω, σ) and Ã = A(R, (Ω, σ, ≈, m̃, wt)) the corresponding order. Then there exists S ∈ Λ such that Ã = Ad_S(A), i.e. the orders A and Ã are conjugate in Λ.
It means that the order A(R, (Ω, σ, ≈, m, wt)) does not depend (up to conjugation) on the choice of marking of (Ω, σ), so in what follows it will be denoted by A(R, (Ω, σ, ≈, wt)).
• A(R, (Ω, σ, ≈, wt)) and A(R, (Ω, σ, ≈, wt•)) are centrally Morita equivalent.

In the examples below we take R = k⟦w⟧.

Example. [Datum and display (19) lost], where D = k⟦x, y⟧/(xy). Alternatively, one can identify A with the arrow completion of the path algebra of a quiver with relations [diagram lost]. The order given by (19) will be called the Zhelobenko order, since it appeared for the first time in a work of Zhelobenko [54] dedicated to the study of admissible finite length representations of the Lie group SL₂(C).

Example 3.16. [Display lost], where D = k⟦x, y⟧/(xy), I = (x, y) and D̃ = k⟦x⟧ × k⟦y⟧. The order A is isomorphic to the arrow completion of a quiver with relations [diagram lost]. Following the terminology of our previous work [14], A will be called the Auslander order of D, or just the Auslander order.

Example 3.17. Let Ω = {1} with 1 ≈ 1 (of course, σ = id in this case). Then we have: [display lost]. The hereditary cover H of A is just the matrix algebra Mat₂(k⟦w⟧).

Example 3.18. Let Ω = {1, 2}, σ = (1 2) (interchanging 1 and 2) and 2 ≈ 2. Then we have: [display (24) lost]. Note that A is isomorphic to the arrow completion of a quiver with relations [diagram lost]. The order (24) appeared for the first time in the 1970 ICM talk of I. Gelfand [27] in the context of the study of admissible finite length representations of the Lie group SL₂(R). In what follows, it will be called the Gelfand order. The hereditary cover of A is [display lost].

Theorem 3.19. Let k be an algebraically closed field, R = k⟦w⟧, K = k((w)) and Λ = Mat_{s₁}(K) × ⋯ × Mat_{s_t}(K) for some s₁, …, s_t ∈ N. If A is a nodal order whose rational envelope is Λ, then there exists a datum (Ω, σ, ≈, wt) and S ∈ Λ such that A = Ad_S(A(Ω, σ, ≈, wt)), where A(Ω, σ, ≈, wt) is the nodal order from Definition 3.11.

Proof. Let A ⊂ Λ be a nodal order whose rational envelope is Λ and let A⋄ be a basic order Morita equivalent to A. Then A⋄ is also nodal; see Theorem 3.5. Let H⋄ be the hereditary cover of A⋄. Then H⋄ ≅ H(R, p₁) × ⋯ × H(R, p_r), where each component H(R, p_i) is a standard hereditary order given by (11). The semisimple quotient Ā⋄ of A⋄ is isomorphic to a product of several copies of the field k. By the definition of nodal orders, the embedding of semisimple algebras ı : Ā⋄ ↪ H̄⋄ has the following property: for any simple Ā⋄-module U, the length of H̄⋄ ⊗_{Ā⋄} U over Ā⋄ is at most 2. From this property one can easily deduce that:
• Each simple component of H̄⋄ is either k or Mat₂(k). In other words, for any 1 ≤ i ≤ r, each entry of the vector p_i is either 1 or 2.
• The embedding ı : Ā⋄ ↪ H̄⋄ splits into the product of components [display lost].
Let 1 ≤ i ≤ r and p_i = (p_{i,1}, …, p_{i,k_i}). Then we put: [construction lost]. Note that the set Ω parameterizes the simple components of the algebra H̄⋄. This yields a description of A⋄ of the form A(Ω, σ, ≈, wt•), where wt• is the trivial weight function. Since the order A is Morita equivalent to A⋄, there exists a projective left A⋄-module P such that A ≅ End_{A⋄}(P)°. Recall that the isomorphism classes of indecomposable projective A⋄-modules are parameterized by the elements of the set Ω̃‡. Let P ≅ ⊕_{γ∈Ω̃‡} P_γ^{⊕m_γ} be a decomposition of P into a direct sum of indecomposable modules. Then we get a weight function wt(γ) := m_γ. □

3.3. Skew group products of k⟦u, v⟧/(uv) with a finite group. Let k be an algebraically closed field of characteristic zero, ζ ∈ k a primitive n-th root of 1, G := ⟨ρ | ρⁿ = e⟩ a cyclic group of order n ∈ N≥2 and k[G] the corresponding group algebra. The following result is well known.

Proposition 3.21.
Consider the action of the cyclic group G on the polynomial algebra k[u] given by the rule ρ • u := ζu. Then the skew product k[u] ∗ G is isomorphic to the path algebra of the cyclic quiver C_n shown in (26) [diagram lost].

Proof. An isomorphism k[u] ∗ G → k[C_n] is given by [rule lost], where e_k is the trivial path corresponding to the vertex k. □

Corollary 3.22. Let R := k⟦u⟧. Then the skew group product R ∗ G is isomorphic to the arrow completion of the path algebra of (26). Note that the latter algebra can in its turn be identified with the algebra of matrices T_n(R) [display lost], where the primitive idempotent e_k corresponding to the vertex k of C_n is sent to the k-th diagonal matrix unit of T_n(R).

Remark 3.23. Let us note that, strictly speaking, R ∗ G depends on the choice of an n-th primitive root of unity ζ. On the other hand, the primitive idempotent ε [formula lost] does not depend on the choice of ζ. Therefore, identifying the skew group product R ∗ G with the completed path algebra of a cyclic quiver, we shall always choose a labeling of the vertices of C_n such that the idempotent ε is identified with the trivial path corresponding to the vertex labeled by n. Note also that the orders T_n(R) and H_n(R) are isomorphic.

Let 0 < c < n be such that gcd(n, c) = 1. Then we have a permutation τ_c given by τ_c(k) := the remainder of k + c modulo n, where k̄ denotes the remainder of k modulo n.

Proposition 3.24. For any 0 < c < n such that gcd(n, c) = 1, consider the action of the cyclic group G = ⟨ρ | ρⁿ = e⟩ on the nodal algebra D = k⟦u, v⟧/(uv) given by the rule [lost]. Then the nodal order A = A_{(n,c)} := D ∗ G has the description (29) [display lost], together with its hereditary cover. For any 1 ≤ k ≤ n, let ε_k and ε̃_k be the idempotents [formulas lost]. We consider the elements ε_k and ε̃_k as elements, respectively, of k⟦u⟧ ∗ G and k⟦v⟧ ∗ G. This is convenient, taking into account that the actions of G on k⟦u⟧ and on k⟦v⟧ are actually different. We have the following commutative diagram of algebras and algebra homomorphisms: [diagram lost]. Viewing ε_k and ε̃_k as elements of these skew products and taking into account the rules for the isomorphisms above, we get the description (29) of the nodal order A_{(n,c)}.

Remark 3.25. Note that the center of the order A_{(n,c)} is equal to k⟦a, b⟧/(ab), where a = uⁿ and b = vⁿ. Observe that A_{(n,c)} has precisely n pairwise non-isomorphic finitely generated simple modules. It is not difficult to see that two such orders A_{(n,c)} and A_{(n′,c′)} are Morita equivalent if and only if n = n′ and c = c′ or d = c′, where cd ≡ 1 mod n. If c ≠ d then any Morita equivalence between A_{(n,c)} and A_{(n,d)} permutes the irreducible branches of their common center k⟦a, b⟧/(ab). In the terms of Theorem 3.12, the order A_{(n,c)} has the following description.
• The relation ≈ is given by the rule: k̄ ≈ τ_c(k) for 1 ≤ k ≤ n.
• The permutation σ : Ω → Ω is given by [formula lost].
Then we have: [display lost]. For any other weight function wt, the orders A_{(n,c)} and A_{(n,c)}(wt) := A(Ω, σ, ≈, wt) are centrally Morita equivalent. It is not difficult to derive the quiver description of the order A_{(n,c)}. Of major interest is the case c = n − 1. Then A_{(n,c)} is isomorphic to the arrow completion of the path algebra of the quiver (30) [diagram lost].
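The isomorphisms of Proposition 3.21 and Corollary 3.22, and hence the matrix description of A_{(n,c)}, rest on the isotypic idempotents of the group algebra of G. Since the displayed rules are lost in this copy, we sketch the standard computation behind them (with the vertex labeling of Remark 3.23):

\[ e_j = \frac{1}{n}\sum_{k=0}^{n-1}\zeta^{-jk}\rho^{k}, \qquad e_ie_j=\delta_{ij}e_j, \qquad e_0+e_1+\dots+e_{n-1}=1. \]

Since ρu = ζuρ in k[u] ∗ G, one checks that e_j u = u e_{j−1} (indices modulo n), so left multiplication by u carries the image of e_{j−1} into the image of e_j; in path-algebra terms, u e_{j−1} plays the role of the arrow of the cyclic quiver C_n from the vertex j − 1 to the vertex j.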
Lemma 3.26. Consider the action of the cyclic group G = ⟨τ | τ² = e⟩ on the nodal algebra D = k⟦u, v⟧/(uv) given by the rule τ(u) = v. Then the nodal order A := D ∗ G has the following description: [display lost].

Proof. Consider the elements e± := (1 ± τ)/2 ∈ A. Then the following statements are true.
• e±² = e±, e±e∓ = 0 and 1 = e₊ + e₋. Moreover, τ · e± = ±e±.
• For any s, t ∈ {+, −} we have: e_s A e_t = e_s D e_t.
Therefore we have the Peirce decomposition [display lost] for any m₁, m₂ ∈ N₀ and s₁, s₂ ∈ {+, −}. Moreover, one can check that [formulas lost], and that the linear map given by [rules lost] is the algebra isomorphism we are looking for. □

Proposition 3.27. For any n ∈ N, let G := ⟨ρ, τ | ρⁿ = e = τ², τρτ = ρ⁻¹⟩ be the dihedral group and ζ ∈ k a primitive n-th root of 1. Consider the action of G on the nodal ring D = k⟦u, v⟧/(uv) given by the rules: [lost]. Then the nodal order A := D ∗ G has the following description: [display lost], where Ω := {1, 2, …, n} and σ is the cyclic permutation sending k to k − 1 (indices modulo n).

Sketch of the proof. The cyclic group K := ⟨ρ⟩ is a normal subgroup of G of index two. Let L = ⟨τ⟩. Then we have a commutative diagram of k-algebras and algebra homomorphisms [diagram lost], where the action of L on H ∗ K = (k⟦u⟧ ∗ K) × (k⟦v⟧ ∗ K) is given by the rule [lost] for any k₁, k₂, l₁, l₂ ∈ N₀. The nodal ring D ∗ K is described in Remark 3.25. In the terms of the quiver presentation (30) we have: [formulas lost], where k̄ = (n − k) for 1 ≤ k ≤ n and all indices are taken modulo n. The remaining part is a lengthy computation analogous to the one made in the course of the proof of Lemma 3.26, which we leave to the interested reader. □

3.4. Auslander order of a nodal order.

Definition 3.28. Let A be a nodal order and H its hereditary cover. Then the Auslander order B of A and the conductor ideal C are defined by [displays lost].

Remark 3.29. It follows from the definition that C = ACH and the canonical morphism [lost] is a bijection (here, we view both H and A as left A-modules).

Proposition 3.30. Let R be a discrete valuation ring and A = A(R, (Ω, σ, ≈, wt)) the nodal order from Definition 3.11. Then we have: [display lost]. In particular, C is a two-sided ideal both in H and A containing the common radical J = rad(A) = rad(H) of A and H.

Corollary 3.31. Let k be an algebraically closed field, R the local ring of an affine curve over k at a smooth point, H a hereditary R-order, A a nodal order whose hereditary cover is H, and C the corresponding conductor ideal. Then C is a two-sided ideal in H.

Proof. We have to show that the canonical map of R-modules C → HCH is surjective. For this, it is sufficient to prove the corresponding statement for the radical completions of A and H. However, the structure of nodal orders over the completion of R, which is isomorphic to k⟦w⟧, is known; see Theorem 3.19. Hence, the statement follows from Proposition 3.30. □

Lemma 3.32. Let A = A(R, (Ω, σ, ≈, wt)) be a nodal order, H its hereditary cover, C the conductor ideal, Ā := A/C and H̄ := H/C. Let Ω• be the subset of Ω whose elements are the reflexive or tied elements of Ω, and let Ω‡• be the subset of Ω‡ defined in a similar way. Then the following diagram is commutative [diagram lost], where the components of the embedding ı are described in the same way as in diagram (18).

Example. [Setup and display lost], i.e. the order from Example 3.16. It is easy to see that B is Morita equivalent to the Gelfand order (24).

Let A be an arbitrary nodal order and B the Auslander order of A. For the idempotents [formulas lost], consider the corresponding projective left B-modules P and Q. The action of B on the projective left B-modules (A C)ᵗ and (H H)ᵗ is given by matrix multiplication, whereas the isomorphisms A ≅ End_B(P)° and H ≅ End_B(Q)° are compatible with the canonical right actions on P and Q, respectively. The nodal order A as well as its hereditary cover H are minors of the Auslander order B in the sense of Definition 2.1. Let [notation lost]. In the terms of Subsection 2.1, we have the following functors.
• Since P is a projective left B-module, we get an exact functor G := Hom_B(P, −) : B−Mod → A−Mod. Of course, it restricts to an exact functor between the corresponding categories of finitely generated modules.
• Similarly, since Q is a projective left B-module, we get an exact functor G̃ := Hom_B(Q, −) from B−Mod to H−Mod, as well as its restriction to the full subcategories of the corresponding finitely generated modules.
Additionally to Theorem 2.4, the following result is true. Proposition 3.36. The functor F is exact, maps projective modules to projective modules and has the following explicit description: if N is a left H-module, then , the element bz is given by the matrix multiplication. Obviously, both S and T have finite length viewed as B-modules. Moreover, S is isomorphic toĀ viewed as an A-module and T is isomorphic toH viewed as an H-module. of S, whereas the essential image of R H is equal to the right orthogonal category S ⊥ . Similarly, the essential image of LF is equal to ⊥ T and the essential image of RH is equal to T ⊥ . • We have a recollement diagram where I(Ā) := S, the functor I * is left adjoint to I and I ! is right adjoint to I. • Similarly, we have another recollement diagram is the full subcategory of the derived category D(B−mod) consisting of those complexes whose cohomologies belong to Add(T ) and J is the canonical inclusion functor. • We have: gl.dimB = 2. Proof. These results are specializations of Theorem 2.5. The statements about both recollement diagrams (37) and (38) follow from the description of the kernels of the functors D G and DG. Namely, consider the two-sided ideal in the algebra B. As one can easily see, I Q is projective viewed as a right B-module. Moreover, B/I Q ∼ = A/C =:Ā is semisimple. Hence, Theorem 2.5 gives the first recollement diagram (37). Analogously, for As a consequence, we have a semiorthogonal decomposition Moreover, we have the following commutative diagram of categories and functors: where Perf(A) is the perfect derived category of A, E is the canonical inclusion functor and P is the derived functor of the restriction functor H−mod −→ A−mod. Proof. The recollement diagram (39) is just the restriction of the recollement diagram (37) on the corresponding full subcategories of compact objects. The isomorphism E ≃ DG • LF follows from the fact that the adjunction unit Id D(A−mod) −→ DG • LF is an isomorphism of functors (already on the level of unbounded derived categories). Next, since the functor F is exact, we have: For any H-module N we have: Hence, G • F is isomorphic to the restriction functor H−mod −→ A−mod, what finishes a proof of the second statement. Proposition 3.40. Let A be a nodal order. Then the corresponding Auslander order B is nodal too. Since J is projecive as H-module, H is hereditary. Then the commutative diagram implies that B is a nodal order and H is its hereditary cover. Since the conductor ideal C contains the radical J, the Auslander order B is an overorder of B. It follows from Theorem 3.5 that the order B is nodal, too. In what follows, we shall need the following result about the finite length B-module S. Assume that R = w and A = A R, (Ω, σ, ≈, wt) . Recall that It is clear that the set Ω ‡ • also parameterizes the isomorphism classes of the simpleĀmodules. For any γ ∈ Ω ‡ • , let S γ be the simple left B-module which corresponds to the (unique, up to an isomorphism) simpleĀ γ -module and P γ be its projective cover. Then we . Our next goal is to describe a minimal projective resolution of S γ . For any ω ∈ Ω, let Q ω be the corresponding indecomposable projective left H-module and Q ω Q ω be the corresponding indecomposable projective left B-module. Lemma 3.41. The following statements hold. • Let ω ∈ Ω be a reflexive element and γ = ω ± be one of the corresponding elements of Ω ‡ • . Then is a minimal projective resolution of the simple B-module S γ . 
In particular, for any δ ∈ Ω we have: • Let ω ′ , ω ′′ ∈ Ω be a pair of tied elements and γ = {ω ′ , ω ′′ } be the corresponding element of Ω ‡ • . Then a minimal projective resolution of the simple B-module S γ has the following form: In particular, for any δ ∈ Ω we have: (43) and (45) follow from the fact that rad(Q ω ) = Q σ(ω) for any ω ∈ Ω. Non-commutative nodal curves: global theory In this section, we are going to explain the construction as well as main properties of non-commutative nodal curves of tame representation type. Next, consider the order Let Z = Z(A) be the center of A. Then we have: The multiplication maps K ⊗ Z A −→ Υ and K ⊗ Z H −→ Υ are isomorphisms. In other words, A and H are Z-orders in the central simple K-algebra Υ. Then the Z-order A can be extended to a sheaf of orders A on the projective curve E in such a way that the stalk of A at the "infinite point" (0 : 1 : 0) of E is a maximal order (see for instance [8]). The ringed space = (E, A) is a typical example of a projective non-commutative nodal curve of tame representation type. −→ E is the normalization map. The functor ν * provides an equivalence between the categories of coherent sheaves on and (È 1 , H). In what follows, we shall consider as the hereditary cover of the non-commutative nodal curve E what can be viewed as an appropriate non-commutative generalization of the normalization of a singular commutative nodal curve. Definition 4.1. Let X be a reduced quasi-projective curve over a field and A be a sheaf of orders on X. Then the ringed space X = (X, A) is called a non-commutative curve. We say that X is projective if the commutative curve X is projective. If for any point x ∈ X the corresponding stalk A x is a nodal order then X is a non-commutative nodal curve. 4.2. Construction of non-commutative nodal curves. Let be an algebraically closed field and ( X, O X ) be a smooth quasi-projective curve over . • Let X l −→ N be a function such that l(x) = 1 for all but finitely many points x ∈ X (such function will be called a length function). • We say that wt is a weight function compatible with the given length function l if m(x ′ ) = m(x ′′ ) for any pair of pointsx ′ ,x ′′ ∈ X belonging to the same irreducible component of X. • Forx ∈ X, let Ox be the stalk of the structure sheaf O X at the pointx. Let be the standard hereditary order defined by (11). is commutative. In other words, the Morita type of a non-commutative hereditary curve does not depend on the choice of a weight function wt and is determined by the underlying commutative curve X and length function l. Comment to the proof. This result is due to Spieß [51], see also [16,Section 4.3]. Remark 4.6. Let X = È 1 , X l −→ N be a length function, Π wt −→ N be a weight function compatible with l and X be the corresponding hereditary curve. Then X can be identified with an appropriate weighted projective line of Geigle and Lenzing [25] in the sense that the categories of (quasi-)coherent sheaves on both objects are equivalent (see, for instance, the paper [44]). • È 1 (n + , n − ) is the weighted projective line corresponding to the length function given by the rule: • È 1 (n + , n − , n) is the weighted projective line corresponding to the length function given by the rule: where we additionally assume that n ± ≥ 2. Definition 4.7. Let X be a smooth quasi-projective curve over and X l −→ N be a length function. 
Let ≈ be a relation on the set Π defined by (47) such that • For any ω ∈ Π there exists at most one ω ′ ∈ Π such that ω ≈ ω ′ (such elements ω, ω ′ will be called special). • There are only finitely many special elements in Π. Non-special elements of Π will be called simple. The set of special elements of Π will be denoted by Π • . An element ω ∈ Π • is called reflexive if ω ≈ ω and tied if ω ≈ ω ′ for some ω = ω ′ . Similarly to Definition 3.6 we define the set Π ‡ by replacing each reflexive element ω ∈ Π by two new simple elements ω + and ω − . The pairs of tied elements of Π ‡ are the same as for Π. Let Π ‡ wt ‡ −→ N be a function such that wt ‡ (ω ′ ) = wt ‡ (ω ′′ ) for all ω ′ ≈ ω ′′ in Π ‡ . Then we define the map Π wt −→ N by the following rule: We call such a relation ≈ on the set Π admissible if there exists a function Π ‡ wt ‡ −→ N for which the corresponding function Π wt −→ N is a weight function compatible with the length function l. Abusing the notation, we shall drop the symbol ‡ in the notation of wt ‡ and write wt for all weight functions introduced above. We say that two pointsx ′ =x ′′ ∈ X are tied if there are ω ′ ∈ Ωx′ and ω ′′ ∈ Ωx′′ such that ω ′ ≈ ω ′′ . Let (48) Z := x ∈ X there existsỹ ∈ X \ {x} such thatx andỹ are tied be the set of tied points of X. Taking the transitive closure, we get an equivalence relation ∼ on Z. We put: Z := Z/ ∼ . In what follows, we shall also consider Z as a reduced subscheme of X, Z as a reduced scheme over and the projection map Zν −→ Z as a morphism of schemes. Given an admissible datum ( X, l, ≈), we define a quasi-projective curve X requiring the following diagram of algebraic schemes (49) to be cartesian. In other words, the curve X is obtained from X by gluing transversally the equivalent points. It is clear that X is singular provided Z is non-empty and that X ν −→ X is the normalization map. It always exists, as follows from [50]. We put: H := ν * H . For any x ∈ X (respectively,x ∈ X) let H x (respectively, Hx) be the radical completion of H x (respectively, Hx). Note that in the notation of (11) we have: where Ox is the completion of the local ring Ox. It is clear that Hx is also an order over the local ring O x , which is the completion of the local ring of the structure sheaf of X at the point x. Next, we put: Ω x := Ωx 1 ⊔ · · · ⊔ Ωx r Then we have a permutation Ω x σx −→ Ω x given by the rule σ x (x, i) := x, i + 1 mod l(x) for anyx ∈ {x 1 , . . . ,x r }. In the terms of Definition 3.11 we put: Then A x is a nodal order and H x is its hereditary cover. Moreover, the center of A x contains the local ring O x . Definition 4.8. We define the sheaf of orders A on the curve X to be the subsheaf of H satisfying the following conditions on the stalks: We call the ringed space X = (X, A) the non-commutative nodal curve attached to the datum ( X, l, ≈, wt). The ringed space X = (X, H) will be called the hereditary cover of X. Note that for X ′ := ( X, H) we have a natural morphism of ringed spaces X ′ ν −→ X, which induces an equivalence of categories Coh( X ′ ) −→ Coh( X). Theorem 4.9. Let ( X, l, ≈) be an admissible datum, Π ‡ wt −→ N be any compatible weight function and X be the corresponding non-commutative nodal curve. Then the following results hold. • Let Π ‡ wt ′ −→ N be any other compatible weight function and X ′ be the corresponding non-commutative nodal curve. Then the categories of quasi-coherent sheaves QCoh(X) and QCoh(X ′ ) are equivalent. 
That is why we often do not mention the weight wt and say that X is attached to the admissible datum ( X, l, ≈). • Let ≈ ′ be another equivalence relation on Π and Π wt ′ −→ N be a weight function compatible with ≈ ′ . Suppose that for anyx ∈ X there exists a cyclic permutation is commutative. Then the categories QCoh(X) and QCoh(X ′ ) are equivalent. Comment to the proof. This result is a consequence of Theorem 4.3 (proven in [16]) and Theorem 3.12. Example 4.10. Let ( X, l, ≈) be such that for anyx ∈ X with l(x) ≥ 2, the set Πx contains a non-tied element. Then the datum ( X, l, ≈) is admissible. Example 4.11. Let X be any curve andx 1 =x 2 ∈ X be two distinct points. Define a length function X l −→ N by the rule: Let ≈ be given by the rule: (x 1 , 1) ≈ (x 2 , 1). Then the datum ( X, l, ≈) is not admissible. 4.3. Non-commutative nodal curves of tame representation type. In this subsection we recall, following the paper [23], the description of those non-commutative projective nodal curves X = (X, A) for which the category VB(X) of vector bundles (i.e. of locally projective coherent A-modules) has tame representation type. Let X = X 1 ⊔ · · · ⊔ X r be a disjoint union of r projective lines. We choose homogeneous coordinates on each component X k and define pointsõ k ,õ ± k , ∈ X k setting:õ k := (1 : 1),õ + k := (0 : 1) and o − k := (1 : 0). Assume that ( X, l, ≈) is an admissible datum defining a non-commutative nodal curve X. For each 1 ≤ k ≤ r, let Σ k ⊂ X k be the corresponding set of special points. Theorem 4.12. Let X be a non-commutative projective curve. Then VB(X) has tame representation type if and only if the following conditions are satisfied. • X is Morita equivalent to a commutative elliptic curve, i.e. X is an elliptic curve, while l and ≈ are trivial. • X is the rational non-commutative nodal curve attached to an admissible datum ( X, l, ≈) such that X = X 1 ⊔ · · · ⊔ X r is a disjoint union of r projective lines, whereas (l, ≈) satisfies the following conditions: -For any 1 ≤ k ≤ r we have: Σ k ≤ 3. Definition 4.13. Consider the pair p, q , where p = (p + 1 , p − 1 ), . . . , (p + t , p − t ) ∈ N 2 t and q = (q 1 , . . . , q s ) ∈ N s for some t, s ∈ N 0 (either of this tuples may be empty). Let X := X 1 ⊔ · · · ⊔ X t ⊔ X t+1 ⊔ · · · ⊔ X t+s be a disjoint union of t + s projective lines. We define the weight function X l −→ N by the following rules • For each 1 ≤ k ≤ t we put: l(õ ± k ) = p ± k . • For each 1 ≤ k ≤ s we put: l(õ ± t+k ) = 2 and l(õ t+k ) = q k . Let ≈ be a relation on the set Πõ+ satisfying the conditions of Definition 4.7. If p, q, ≈ is admissible and wt is a compatible weight, we denote by X p, q, ≈, wt the corresponding non-commutative nodal rational projective curve. Since the weight wt does not imply the derived category, we often omit it and write X p, q, ≈ . One can rephrase Theorem 4.12 in the following way. Theorem 4.14. The category VB(X) of vector bundles on a non-commutative projective curve X is representation tame if and only if X is either a commutative elliptic curve or a non-commutative nodal curve X p, q, ≈ , where ( p, q, ≈) is an admissible datum as in Definition 4.13. Remark 4.15. Let p = (2, 2), (2, 2) , q be void and ≈ be given by the following rule: (õ + k , 1) ≈ (õ − k , 1) for k = 1, 2 and (õ ± 1 , 2) ≈ (õ ± 2 , 2). Then the central curve X of the corresponding non-commutative nodal curve X( p, ≈) is given by the following Cartesian diagram: where E π −→ E ′ is the projection map. 
Then A is a sheaf of nodal orders on the projective curve E ′ . For any x ∈ E ′ \ {s}, the order A x is maximal, whereas A s is the nodal order given by (30). The following result is obvious. Consider the length function È 1 l −→ N given by the rule: We then define the relation ≈ on the set Π setting (õ + , k) ≈ (õ − , n − k) for 1 ≤ k ≤ n, where we replace 0 by n. It is easy to see that the datum (È 1 , l, ≈) is admissible. Using Theorem 4.3 one can conclude that E and the non-commutative nodal curve corresponding to the datum (È 1 , l, ≈) are Morita equivalent. Example 4.20. In this example, we give a description of stacky cycles of projective lines used in the paper of Lekili and Polishchuk [37] in the language of non-commutative nodal curves. Let r ∈ N, n = (n 1 , . . . , n r ) ∈ N r and c = (c 1 , . . . , c r ) ∈ N r be such that gcd(n k , c k ) = 1 for any 1 ≤ k ≤ r. Let E r be a cycle of r projective lines and X π −→ E r be its normalization. Then X = X 1 ⊔ · · · ⊔ X r is a disjoint union of r projective lines. Let o 1 , . . . , o r be the set of singular points of E r , where we choose their labeling in such a way that π −1 (o k ) = õ − k ,õ + k+1 , whereõ − k = (1 : 0) ∈ X k andõ + k+1 = (0 : 1) ∈ X k+1 . The completion of the local ring of E r at each point o k is isomorphic to the commutative nodal ring u, v /(uv). For any 1 ≤ k ≤ n, consider the action of the cyclic group G n k = ρ ρ n k = e on D = u, v /(uv) given by the rule where ζ k is some primitive n k -th root of 1. Heuristically, the category of coherent sheaves Coh(E) on a stacky cycle of projective lines E := E r ( n, c) is an abelian category satisfying the following property: the category Tor(E) of finite length objects of Coh(E) splits into a direct sum of blocks: where Tor x (E) is equivalent to the category of finite length w -modules if x ∈ E r smooth and to the category of finite length D * G n k -modules if x = o k , where the action of G n k is given by the rule (52). Informally speaking, a stacky cycle of projective lines E r ( n, c) can be thought as an appropriate cyclic gluing of weighted projective lines È 1 (n 1 , n 2 ), . . . , È 1 (n r−1 , n r ), È 1 (n r , n 1 ). Now let us proceed with a formal definition of E r ( n, c) viewed as a non-commutative nodal curve. As above, let X be a disjoint union of r projective lines. Consider the length function X l −→ N given by the rule: It is convenient to use the identification given by the rule: (õ − k , j) =j = (õ + k+1 , j) for 1 ≤ j ≤ n k . We have a bijection Let ≈ be a relation on the set Π, given by the rule: (õ − k , j) ≈ õ + k+1 , τ k (j) . Note that the set Π does not contain reflexive elements, hence Π ‡ = Π in this case. Next, we claim that the datum ( X, l, ≈) is admissible. Indeed, let n := lcm(n 1 , . . . , n r ) be the least common multiple of n 1 , . . . , n r . Then we can define a compatible weight function Π wt −→ N by the following rules: Then the stacky cycle of projective lines E r ( n, c) can be defined as the non-commutative nodal curve corresponding to the datum ( X, l, ≈, wt). The central curve of E r ( n, c) is the usual cycle of r projective lines E r , i.e. E r ( n, c) = (E r , A), where A is an appropriate sheaf of nodal orders. Let X ν −→ E r be the normalization morphism. It is clear that the locus S E r ( n, c) of non-regular points of E r ( n, c) is just the set o 1 , . . . 
, o r of the singular points of E r , where Moreover, for any 1 ≤ k ≤ r the order A o k is Morita equivalent to the basic nodal order A (n k ,c k ) ; see Remark 3.25 for the corresponding notation. As a ringed space, E r ( n, c) depends on the choice of the weight function wt. However, Theorem 4.9 assures that the corresponding category of coherent sheaves Coh E r ( n, c) does not depend on this choice. One can define stacky chains of projective lines (which appeared in [37]) in a similar way. Remark 4.21. The following example shows that a heuristic treatment of the notion of a stacky cycle of projective lines has to be performed with a special care. Let n ∈ N and 1 ≤ c, c ′ , d, d < n are such that c, c ′ , d, d ′ are all pairwise distinct, mutually prime with n, cc ′ ≡ 1 mod n and dd ′ ≡ 1 mod n. Consider the corresponding non-commutative nodal curves E := E (n, n), (c, d) and E ′ := E (n, n), (c ′ , d) . Then E = (E, A) and E ′ = (E, A ′ ) are tame non-commutative nodal curves, whose underlying central curve E is a cycle of two projective lines. Next, S(E) = S(E ′ ) = {o 1 , o 2 }, where o 1 , o 2 are the singular points of E. We have: where ∼ denotes central Morita equivalence of the corresponding module categories. Note that A (n,c) and A (n,c ′ ) are isomorphic as rings. Summing up, E and E ′ are two noncommutative nodal curves with the same central curve E and such that for any x ∈ E, the corresponding categories Tor x (E) and Tor x (E ′ ) are equivalent. Nonetheless, we claim that the categories Coh(E) and Coh(E ′ ) are not equivalent. Indeed, any equivalence induces an automorphism of the central curve E Φc −→ E; see [16,Theorem 4.4]. It follows from the assumptions on d, d ′ , c, c ′ that the orders A o 1 and A ′ o 2 are not Morita equivalent. Hence, Φ c (o 1 ) = o 1 and Φ c (o 2 ) = o 2 . In particular, we get (as restrictions of Φ) equivalences of categories • We have: gl.dim Coh(Y) = 2. • We have a recollement diagram Here, the exact functor I is determined by the rule I(Ā) = S, where S is given by the locally projective resolution In particular, we have a semi-orthogonal decomposition • Moreover, we have the following commutative diagram of categories and functors: where Perf(X) is the perfect derived category of coherent sheaves on X, E is the canonical inclusion functor, LF and L F are fully faithful, DG is an appropriate localization functor and ν * is the functor induced by the "normalization map" X ν −→ X. 5. Tilting on rational non-commutative nodal projective curves 5.1. Tilting on projective hereditary curves. We begin with a brief description of the standard tilting bundle on a weighted projective line due to Geigle and Lenzing [25] reexpressed in the language of non-commutative hereditary curves. Let X = È 1 and X l −→ N be any length function. Let us fix any weight function Π wt −→ N compatible with l. As usual, we put p(x) := wt(x, 1), . . . , wt(x, l(x)) for anyx ∈ X. Let m := p(x) for some (hence for any) pointx ∈ X and Hx := H Ox, p(x) ⊆ Mat m (Ox) be the standard hereditary order, defined by the vector p(x). Next, let Q (x,1) , . . . , Q (x,l(x)) be the standard indecomposable projective left Hx-modules, i.e. we have a direct sum decomposition . Here, we have: Note that there is a chain of embeddings of Hx-modules Q ′ (x,1) ⊂ Q (x,l(x)) ⊂ · · · ⊂ Q (x,1) . Let H := H(l, wt) be the sheaf of hereditary orders on X defined by (l, wt) and X = ( X, H) the corresponding non-commutative hereditary curve. 
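For orientation, in the trivial-weight case the tilting bundle constructed below reduces to the classical one on the projective line. The following display is a sketch of this special case, assuming l ≡ 1 and wt ≡ 1 (so that X is Morita equivalent to the commutative P¹); it is the Beilinson tilting bundle rather than a formula from the construction itself.

% Trivial-weight case: X Morita equivalent to P^1, classical Beilinson tilting bundle.
\[
T = \mathcal{O}_{\mathbb{P}^{1}} \oplus \mathcal{O}_{\mathbb{P}^{1}}(1),
\qquad
\operatorname{End}_{\mathbb{P}^{1}}(T) \;\cong\;
\begin{pmatrix}
k & 0 \\
H^{0}\bigl(\mathcal{O}_{\mathbb{P}^{1}}(1)\bigr) & k
\end{pmatrix},
\]
% where dim H^0(O(1)) = 2, i.e. End(T) is the path algebra of the Kronecker quiver
% (two vertices joined by two parallel arrows z, w), and the derived functor
\[
\mathbb{R}\operatorname{Hom}(T, -) \colon
D^{b}\bigl(\operatorname{Coh}\mathbb{P}^{1}\bigr) \xrightarrow{\ \simeq\ }
D^{b}\bigl(\Gamma\text{-}\mathrm{mod}\bigr),
\qquad \Gamma := \operatorname{End}(T)^{\bullet},
\]
% is an equivalence of triangulated categories.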
Recall that H is a subsheaf of the sheaf of maximal orders Mat m (O X ) such that Hx = Hx for anyx ∈ X. First of all, note that we have an exact fully faithful functor which transforms locally free sheaves on X into locally projective H-modules. We put: Next, for anyx ∈ X such that l(x) ≥ 2 and 2 ≤ i ≤ l(x), we have a locally projective H-module L (x,i) uniquely determined by the following properties: • The sheaf L (x,i) is a subsheaf of L. • For anyỹ ∈ X we have: For a convenience of notation, we also define for any pointx ∈ X the locally projective subsheaf L (x,1) of L defined by the following condition on the stalks: It is clear that Note that L (x ′ ,1) ∼ = L (x ′′ ,1) for anyx ′ ,x ′′ ∈ X. Abusing the notation, we shall denote this sheaf by L(−1). Next, let is a tilting object in the derived category D b Coh( X) (called, in what follows, the standard tilting bundle on X) and the corresponding algebra Γ := End X ( T ) • is isomorphic to the Ringel canonical algebra Γ = Γ (x 1 , l 1 ), . . . , (x r , l r ) which is the path algebra of the following quiver 1 (62) In other words, the derived functor is an equivalence of triangulated categories. Remark 5.2. In what follows, we shall call the arrows u ij in the quiver (62) for 1 ≤ i ≤ r, 1 ≤ j ≤ l i essential, whereas the arrows z and w will be called redundant. Note that the relations (62) defining the canonical algebra Γ (x 1 , l 1 ), . . . , (x r , l r ) generate an admissible ideal if and only if the set Φ is empty (in this case, the canonical algebra is the path algebra of the Kronecker quiver). If r ≥ 2 then both redundant arrow can be excluded and if r ≥ 1 then one of the redundant arrows can be excluded. Note also that we can formally add to the set Φ any pointx ∈ X of length one, which correspond to a formal addition of another redundant arrow and does not change the corresponding canonical algebra. We will use this procedure in the study of nodal curves. 5.2. Tilting on non-commutative rational projective nodal curves. We begin with an admissible datum ( X, l, ≈), where X = X 1 ⊔ · · · ⊔ X r is a smooth rational projective curve (each X i ≃ È 1 ). Let wt be any compatible weight function, X = (X, A) be the corresponding non-commutative nodal curve, X = (X, H) = X 1 ⊔ · · · ⊔ X r be its hereditary cover and Y = (X, B) be the corresponding Auslander curve. Proof. The fact that X • generates the derived category D b Coh(Y) follows from the recollement diagram (56) and the facts that T generates D b Coh( X) andĀ generates D b (Ā−mod). Since the functors I and L F are fully faithful, we have: Since the functor L F is left adjoint to DG and DG(S) = 0, we have: Finally, for any i ∈ Z we have: . Since S is torsion and T is locally projective, we have: Hom Y (S, T ) = 0. It follows from the exact sequence (57) that Ext i Y (S, T ) = 0 for i ≥ 2. Therefore, • then the triangulated categories D b Coh(Y) and D b (Λ−mod) are equivalent; see [35]. Remark 5.5. Note that we have: Corollary 5.6. Let ( X, l, ≈) be an admissible datum, where X is a disjoint union of projective lines. Then we have the following commutative diagram of categories and functors: in which T is an exact equivalence of triangulated categories, LF and L F are fully faithful exact functors, E is the canonical inclusion, DG is an appropriate Verdier localization functor and ν * is induced by the forgetful functor Coh( X) −→ Coh(X) (normalization). 5.3. Tame non-commutative nodal curves and tilting. 
We are especially interested in studying those finite dimensional -algebras Λ arising in the diagram (64) for which the derived category D b (Λ−mod) has tame representation type. Since D b (Λ−mod) contains the category of vector bundles VB(X) as a full subcategory, the non-commutative nodal curve X has to be vector bundle tame, i.e. of the form X( p, q, ≈), where ( p, q, ≈) is a datum from Definition 4.13; see Theorem 4.14. In this subsection we are going to elaborate one step further an explicit description of the corresponding algebras Λ( p, q, ≈). Definition 5.7. Let us start with a pair of tuples where r, s ∈ N 0 (either of this tuples is allowed to be empty). For Next, for any 1 ≤ j ≤ s, let Ξ • j := w j,1 , . . . , w j,q j and (66) Γ 2, 2, q j ) = . . w j,1 = 0. Let ≈ be a symmetric relation on the set such that for any ξ ∈ Ξ there exists at most one ξ ′ ∈ Ξ such that ξ ≈ ξ ′ . Then the datum ( p, q, ≈) defines a finite dimensional -algebra Λ = Λ( p, q, ≈) which is obtained from the disjoint union of quivers with relatioins Γ p + i , p − i and Γ 2, 2, q j ) by the following combinatorial procedure. • For any pair of tied elements ̺ ′ ≈ ̺ ′′ of Ξ, we add a new vertex and two arrows ending in it: The new arrows satisfy the following zero relations: ϑ ′ ̺ ′ = 0 = ϑ ′′ ̺ ′′ . • For each reflexive element ̺ ∈ Ξ, we add two new vertices and two arrows ending in each new vertex: The new arrows satisfy the following zero relations: ϑ ± ̺ = 0. Remark 5.8. In the case when s = 0 (i.e. when the tuple q is void) the algebra Λ is skew-gentle [26]. If additionally ξ ≈ ξ for all ξ ∈ Ξ, then the algebra Λ is gentle [2]. We also refer to [15] for a survey of results on the derived categories of gentle and skew-gentle algebras. Theorem 5.9. Let X = X( p, q, ≈) be the non-commutative nodal curve attached to an admissible datum ( p, q, ≈) from Definition 4.13, Y be the Auslander curve of X and Λ = Λ( p, q, ≈) be the finite dimensional algebra from Definition 5.7. Then the following results hold: • The derived categories D b Coh(Y) and D b (Λ−mod) are equivalent. • Moreover, D b Coh(Y) and D b Coh(X) have tame representation type. Proof. According to Theorem 5.4, there exists a tilting complex X • := T ⊕ S[−1] in the derived category D b Coh(Y) such that Next,Ā is a product of several copies of the semisimple algebras and × . Namely, each pair ω ′ , ω ′′ ∈ Π of tied elements gives a factor , whereas each reflexive element ω ∈ Π gives a factor × . Taking into account the description of the space W = Γ X, Ext 1 Y (S, T ) viewed as right Γ-module given by Lemma 3.41, we can conclude that actually Λ = Λ, giving the first statement. Since the derived category D b (Λ−mod) is representation tame (it can be deduced as in [13]), the derived category D b Coh(Y) is representation tame too. Since D b Coh(X) can be obtained as a Verdier localization of D b Coh(Y) (see Theorem 4.26), one can conclude that D b Coh(X) is representation tame as well. 2 6. Tilting exercises with some tame non-commutative nodal curves In this section we are going to study in more details several special cases of the setting of Corollary 5.6. 6.1. Elementary modifications. We are going to introduce two "elementary modifications", which allow to replace the algebra Λ = Λ( p, q, ≈) by a derived-equivalent algebra. Lemma 6.1. Any fragment of Λ of the form (67) can be replaces by the fragment Proof. Let j be the common target of the arrows ϑ ′ and ϑ ′′ , i ′ be the source of ϑ ′ and i ′′ be the source of ϑ ′′ . 
Consider the complex where the underlined term of T * is located in the zero degree. Let Ω be the set of vertices of the quiver of the algebra Λ. Then T := T j ⊕ ⊕ i∈Ω\{j} P i ) is a tilting object of D b (Λ−mod). Then on the level of quivers and relations we get precisely the transformation described in the statement of Lemma. Example 6.2. Let Λ be the path algebra of the following quiver (69) and v j y j = 0 for j ∈ {2, 3}. Making an elementary transformation at both bullets, we get a derived equivalent algebra Γ, which is the path algebra of the following quiver 3 = 0. Lemma 6.3. Any fragment of Λ of the form (68) can be replaced by the fragment Proof. Let j ± be the target of ϑ ± and i be their common source. Consider the complexes Again, let Ω be the set of vertices of the quiver of the algebra Λ. Then is a tilting object in D b (Λ−mod). If Γ := End D b (Λ) (T ) • , then on the level of quivers and relations the passage from Λ to Γ gives the desired elementary transformation. Example 6.4. Let Λ be the path algebra of the following quiver (71) subject to the relations: u i x i = 0 for all 1 ≤ i ≤ 3, v 2 y 2 = 0 and v ± 1 y 1 = 0. Performing the elementary transformations at all bullets, we get a derived equivalent algebra Γ, given as the path algebra of the following quiver (72) subject to the relations: 3 = 0 and y + 12 y + 11 = y − 12 y − 11 . 6.2. Degenerate tubular algebra. Let E = V zy 2 − x 2 (x − z) ⊂ È 2 be a plane nodal cubic and G = τ ∼ = Z 2 , where E τ −→ E is the involution given by the rule (x : y : z) → (x : −y : z). Then the category Coh G (E) of G-equivariant coherent sheaves on E is equivalent to the category of coherent sheaves on the non-commutative nodal curve E = X( p, q ≈) described in Example 4.19. Recall that the vector p is void, q = (1) and (õ, 1) ≈ (õ, 1). Then the corresponding algebra Λ = Λ( p, q, ≈) is the path algebra of the following quiver • • modulo the relations x 2 x 1 + y 2 y 1 + w = 0 and u ± w = 0. Note that the corresponding ideal in the path algebra is not admissible and the arrow w is redundant. Applying the elementary transformation from Lemma 6.3 to the arrow w, we end up with the the path algebra T of the following quiver 0 and b 1 a 1 = b 4 a 4 , i.e. the degenerate tubular algebra from Introduction. Since the derived categories D b (Λ−mod) and D b (T −mod) are equivalent, the commutative diagram of categories and functors (3) is a special case of the setting from Corollary 5.6. Let S be the path algebra of the following quiver • modulo the following set of relations: i.e. any two paths with the same source and target are equal. and is an equivalence of triangulated categories, where B := End A (V ) • . Note that B is isomophic to the path algebra of the following quiver where N is the following representation of the quiver (77): Putting together all results obtained in this subsection, we get the following commutative diagram of triangulated categories and exact functors: where I is the canonical inclusion functor, E is a fully faithful functor, T is an equivalence of categories and P is an appropriate localization functor. It would be quite interesting to give an interpretation of this result in terms of the homological mirror symmetry in the spirit of the approach of [40]. 6.3. A purely commutative application of non-commutative nodal curves. Again, where all horizontal arrows are equivalences of triangulated categories. 
) be the equivalences of triangulated categories, where the algebras Λ and Λ are the algebras corresponding, respectively, to Y and to Y as in Corollary 5.6. Recall that Consider the third gentle algebra We construct now a pair of equivalences of triangulated categories: • The first equivalence T 1 is just the elementary modification from Lemma 6.1, applied to the third vertex. • The second equivalence T 2 is given by the tilting complex The image of the localizing subcategory In an analogous way one can check that the image of the localizing subcategory K under the chain of equivalences of derived categories is again the triangulated category S 3 + , S 3 − , Z + , Z − . It proves the proposition. Recall that for any 1 ≤ k ≤ 2n we have: B is the Auslander order (21). Let S ± k be the simple torsion sheaf on E supported at the singular point o k which corresponds to the vertex ± of the quiver (22). Note that Proposition 6.8. We have a recollement diagram In particular, there exists an equivalence of triangulated categories: Proof. It is a consequence of the corresponding local statement (see Theorem 2.5) combined with the fact (following from (81)) that the functor is an equivalence of triangulated categories. Theorem 6.9. Let Υ = Υ n be the gentle algebra given by (7). Then the derived categories D b Coh(A) and D b (Υ−mod) are equivalent. be the tilting equivalence from Corollary 5.6. Recall from [14, Section 5.2] that Λ = Λ 2n is the path algebra of the following quiver For any 1 ≤ k ≤ n, consider the complexes Then we have: For any 1 ≤ k ≤ n, consider the following objects of D b (Λ−mod): which yields the desired statement. For any n ∈ N, consider the graded gentle algebra Θ = Θ n , given as the path algebra of the following quiver . . . where D b (Θ) denotes the derived category of Θ viewed as a differential graded category with trivial differential. As a consequence, the triangulated categories D b Coh(A) and D b (Θ) are equivalent, too. In other words, for any 1 ≤ k ≤ 2n, the pair of objects S + k , S − k forms a generalized 2-spherical collection. Let T k : [1]), the functor T k is an auto-equivalence of D b Coh(E) . For any 1 ≤ k, l ≤ 2n we have: It follows that the composition T := T 1 • T 3 • · · · • T 2n−1 induces an equivalence of triangulated categories . . . Then we have: As a consequence, the categories D b (Υ−mod) and For any 1 ≤ k ≤ n, consider the following object in D b (Λ−mod): One can show that A result of Bardzell (see [5,Theorem 4.1]) allows to write down a minimal resolution of Θ viewed as a module over its enveloping algebra Θ e := Θ ⊗ Θ • . From the explicit form of this resolution one can conclude that gl.dim Θ = 3. Moreover, one can show that in the category of graded left Θ e -modules the following vanishing is true: A result of Kadeishvili [33] implies that the algebra Θ is intrinsically formal, i.e. that any minimal A ∞ -structure on Θ is equivalent to the trivial one. According to Keller's work [35] Let Λ = Q/I be a gentle algebra [2] (see also [15] for the definition and main properties of this class of algebras). Let Q 0 be the set of vertices of Q and Q 1 be its set of arrows and s, t : Q 1 −→ Q 0 be the maps attaching to each arrow its source and target, respectively. A path in Q is a sequence π = a m . . . a 1 of elements of Q 1 such that t(a i ) = s(a i+1 ) for all 1 ≤ i ≤ m − 1; m = l(π) is the length of π. For any * ∈ Q 0 we have the trivial path e * of length zero with s(e * ) = t(e * ) = * . 
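To fix these conventions in the smallest nontrivial case, consider the following example (ours, not taken from the references); it also illustrates the permitted and forbidden paths recalled next.

% A minimal gentle algebra: the quiver 1 --a--> 2 --b--> 3 with one quadratic relation.
\[
Q \colon\; 1 \xrightarrow{\;a\;} 2 \xrightarrow{\;b\;} 3,
\qquad I = (ba), \qquad \Lambda = kQ/I .
\]
% Since t(a) = s(b) = 2, the composition ba ("first a, then b") is a path of length 2.
% In the terminology below, a and b are maximal permitted paths of Lambda, while ba is
% the unique maximal forbidden path; moreover gl.dim(Lambda) = 2, consistent with the
% bound l(pi) <= gl.dim(Lambda) for forbidden paths pi recalled in the next paragraph.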
Following [4] we say that • π is a permitted path in Λ if a i+1 a i / ∈ I for all 1 ≤ i ≤ l(π) − 1; • π is a forbidden path in Λ if a i+1 a i ∈ I for all 1 ≤ i ≤ l(π) − 1. In this section we assume that gl.dim(Λ) < ∞. Then for any forbidden path π in Λ we have: l(π) ≤ gl.dim(Λ); see [5]. Now we recall the definition of the combinatorial invariant of Λ of Avella-Alaminos and Geiß [4]. Definition 7.1. The set Π of permitted threads in Λ is the set of all • maximal permitted paths in Λ; • trivial paths e * , where * ∈ Q 0 is a vertex such that there exists at most one a ∈ Q 1 such that t(a) = * and at most one b ∈ Q 1 such that s(b) = * , and ba / ∈ I provided both a and b do exist. Similarly, the set Φ of forbidden threads in Λ is the set of all • maximal forbidden paths in Λ; • trivial paths e * , where * ∈ Q 0 is a vertex such that there exists at most one a ∈ Q 1 such that t(a) = * and at most one b ∈ Q 1 such that s(b) = * , and ba ∈ I provided both a and b exist. The assumption that the algebra Λ is gentle implies that there exists a bijection Π ϑ + −→ Φ defined as follows. Consider two permutations σ, τ : Ξ −→ Ξ, given by the formulae and τ x ± i,j := x ± i,j−1 for all 1 ≤ i ≤ r, 1 ≤ j ≤ p ± i , where x ± i,0 := x ± i,p ± i . Finally, let ̺ = σ • τ and ̺ = ̺ 1 • · · · • ̺ c be its cyclic decomposition. Proof. Since gl.dim(Λ) = 2, all forbidden threads in Λ have length at most two. Moreover, we have a bijection Ξ −→ Φ, sending an element x ∈ Ξ to itself if x is untied and to the unique forbidden path of length two containing x if x is tied. It is a straightforward verification that the permutation ̺ defined above gets identified with the permutation ϑ from the definition of the AG-invariant. Let Σ be a compact oriented surface with non-empty boundary. According to the classification of surfaces, Σ is determined (up to a homeomorphism) by its genus g ∈ N 0 and the number b ∈ N of the boundary components. In what follows, we shall denote such surface by Σ g,b . Next, let S ⊂ ∂Σ be a finite subset such that S i := S ∩ ∂ i Σ = ∅ for all 1 ≤ i ≤ b, where ∂ i Σ is the i-th boundary component. Let P Σ be the projectivized tangent bundle of Σ and η ∈ Γ(Σ, P Σ ) be a line field (it follows from the assumption b ≥ 1 that Γ(Σ, P Σ ) = ∅). Here, we follow the terminology of the papers of Lekili and Polishchuk [37,38] (in the original version of [29] this category was called topological Fukaya category). According to [29,Lemma 3.3], for any datum (Σ, S, η) there exists a homologically smooth graded gentle algebra Λ and an equivalence of triangulated categories WFuk(Σ, S, η) ≃ D b dg (Λ), where D b dg (Λ) is the derived category of Λ viewed as a graded dg-algebra with trivial differential. Note that if the degrees of all arrows of Λ are zero then D b dg (Λ) is equivalent to the usual derived category D b (Λ−mod) of the abelian category of finite dimensional representations of Λ; see [35]. In [38], Lekili and Polishchuk proved that conversely, for any homologically smooth graded gentle algebra Λ there exists a datum (Σ, S, η) as above such that WFuk(Σ, S, η) is equivalent to D b dg (Λ). We are mostly interested in the case when the grading of a gentle algebra Λ = Q/I is trivial, i.e. we have: Formula (89) tells that the permutation ϑ given by (85) splits into a product of precisely b cycles: ϑ = ϑ 1 • · · · • ϑ b . Lekili and Polishchuk also prove that the order of the boundary components ∂ 1 Σ, . . . 
, ∂ b Σ can be chosen in such a way that |S i | = m i and w (i) η = m i − n i for all 1 ≤ i ≤ b, where (m i , n i ) = κ(ϑ i ) and w (i) η is the winding number of the line field η along ∂ i Σ; see [38, Theorem 3.2.2]. Since φ Λ is a derived invariant, it implies that the marked surface (Σ, S) is a derived invariant of Λ, too. See also the work of Opper, Plamondon and Schroll [41] for an alternative approach to attaching a marked surface to a gentle algebra of possibly infinite global dimension. Remark 7.6. Let Λ = Λ( p, ≈) = Q/I. It is easy to see that |Q 1 | − |Q 0 | is equal to the number of two-cycles of the permutation σ defined by (88). This gives a closed formula for the Euler characteristic of the surface of Λ. Example 7.7. Let (Σ, S) be the marked surface of the gentle algebra Υ from Example 7.3. Then Σ = Σ 1,2 is a torus with two boundary components. Moreover, |S| = 2 and on each boundary component of Σ lies one marked point. If η is a line field on Σ such that D b (Υ−mod) ≃ WFuk(Σ, S, η) then w (i) η = −2 for i = 1, 2. We have in this case: where is the Zhelobenko nodal curve, whose central curve is a cycle of two projective lines; see Subsection 6.4. (2) η = −3. According to [38, Corollary 3.25], in this case the AG-invariant φ Λ is a full derived invariant of Λ. Equivalently, for any line field η ∈ Γ(Σ, P Σ ) such that w
Type I interferon in organ-targeted autoimmune and inflammatory diseases

A significant role for IFNα in the pathogenesis of systemic lupus erythematosus is well supported, and clinical trials of anti-IFNα monoclonal antibodies are in progress in this disease. In other autoimmune diseases characterized by substantial inflammation and tissue destruction, the role of type I interferons is less clear. Gene expression analyses of peripheral blood cells from patients with rheumatoid arthritis and multiple sclerosis demonstrate an interferon signature similar to but less intense than that seen in patients with lupus. In both of those diseases, presence of the interferon signature has been associated with more significant clinical manifestations. At the same time, evidence supports an anti-inflammatory and beneficial role of IFNβ locally in the joints of patients with rheumatoid arthritis and in murine arthritis models, and many patients with multiple sclerosis show a clinical response to recombinant IFNβ. As can also be proposed for type I diabetes mellitus, type I interferon appears to contribute to the development of autoimmunity and disease progression in multiple autoimmune diseases, while maintaining some capacity to control established disease - particularly at local sites of inflammation. Recent studies in both rheumatoid arthritis and multiple sclerosis suggest that quantification of type I interferon activity or target gene expression might be informative in predicting responses to distinct classes of therapeutic agents.

postulated to be either potential or current therapies for those diseases based on their anti-inflammatory properties or on clinical experience that suggested some efficacy. The present review will describe data demonstrating activation of the type I interferon pathway in these inflammatory diseases that target specific organs, and will attempt to sort out the relative roles of type I interferons, particularly IFNα and IFNβ, as pathogenic mediators versus attractive therapeutic agents in those diseases. Against the background of extensive data from patients with SLE, and more recently from murine lupus models [17], that demonstrate an association of interferon pathway activation with more severe disease and disease activity [18], the common and accepted use of recombinant IFNβ, alone or in combination with ribavirin, in patients with MS presents a conundrum [19]. If type I interferon is broadly pathogenic in systemic autoimmune diseases, why is IFNβ beneficial in patients with MS? A similar question can be asked with regard to RA, where IFNβ has been demonstrated in the synovial membranes of RA patients and in several murine models of inflammatory arthritis but is proposed to be anti-inflammatory and protective rather than pathogenic [20][21][22][23]. Possible explanations for these queries include the following: IFNα and IFNβ have distinct properties that confer distinct functional effects on gene expression and the immune system; the pathophysiology of the classic systemic autoimmune diseases is substantially distinct from the pathophysiology of the autoimmune diseases that are characterized by inflammation focused in specific organs; and the complex roles of both IFNα and IFNβ in host defense and immunoregulation allow for each of the interferons to play pathogenic and protective roles, depending on site of production or action, the disease context in which they act, or other factors (Figure 1).

It is likely that each of these explanations accounts in part for the reality that type I interferon does contribute to autoimmune disease pathogenesis but can also control inflammation in some situations.

Common and distinct properties of IFNα and IFNβ

The type I interferons are encoded in series on human chromosome 9p. There are 13 functional IFNα genes, one IFNβ gene, one IFNκ gene encoding a protein that is preferentially expressed in skin, one IFNε gene that is expressed in placenta and fetal membranes, and one IFNω gene [24]. All of the protein products of the type I interferon family bind to a single heterodimeric receptor composed of IFNAR1 and IFNAR2. Consideration of a potential pathogenic or protective role for individual type I interferons, particularly IFNα and IFNβ, involves understanding the cell types that might preferentially produce these interferons, the distinct binding properties of IFNα and IFNβ for the type I interferon receptor (IFNAR), and whether these interferons engage distinct signaling pathways and activate distinct target genes.

While classic teaching holds that IFNβ is most effectively produced by fibroblasts and that IFNα is primarily produced by plasmacytoid dendritic cells, many cell types can in fact produce both interferons, particularly in the setting of a viral stimulus [25]. Differential production of one or another type I interferon in different contexts is probably in part cell-type related but also determined by the location of those cells [21,23,26,27]. IFNβ is produced by synoviocytes and keratinocytes, and in small amounts by monocyte-derived cells. IFNκ, a type I interferon that has attracted less attention than IFNβ and IFNα, is apparently predominantly made by keratinocytes, based on available data [28]. IFNβ-producing cells are located in tissue linings, and IFNβ can also be produced by stromal cells through a novel pathway that involves activation of lymphotoxin-β receptors [29]. In contrast, plasmacytoid dendritic cells are located in peripheral lymphoid organs and, at least in disease settings, in organs affected by inflammation. In view of the widespread distribution and circulating nature of plasmacytoid dendritic cells, situations in which IFNα is produced expose the host to systemic type I interferon and might contribute to autoimmunity, while situations in which IFNβ is produced might result in more localized rather than systemic concentrations of the cytokine and abrogate inflammation. At least one mechanism by which local type I interferon might reduce inflammation has been suggested to be through inhibitory effects on TNF production [30,31].

The binding properties of each of the IFNα and IFNβ proteins for IFNAR1 can vary, depending on interaction of the cytokine with defined amino acids of the receptor [32]. The availability of signaling components of the Jak-Stat pathway can also impact the functional results of one of the type I interferons binding to its receptor. For example, absence of Tyk2 inhibits IFNα-mediated signaling but does not alter IFNβ-mediated signaling [33]. The downstream gene targets induced by IFNα and IFNβ appear to be highly similar, although some studies have demonstrated that IFNβ is more potent than IFNα in inducing gene expression [34].
Taken together, data comparing properties and functional effects of IFNα and IFNβ would suggest that the most important contributors to differential effects of those two type I interferons relate to the location of production (predominantly local in the case of IFNβ and systemic in the case of IFNα) and to affinity of the interaction of interferon with the receptor and its impact on proximal signaling pathways.

Rheumatoid arthritis

RA patients may harbor distinct gene expression patterns [35,36]. Of interest, a pathogen-response gene expression program characterized by increased expression of type I interferon-inducible genes was identified in a subgroup of RA patients who also expressed high circulating anti-cyclic citrullinated peptide antibody levels, the autoantibodies associated with more destructive RA [35,36]. A recent demonstration of an association between the interferon signature and progression to arthritis in patients with arthralgias and anti-cyclic citrullinated peptide antibodies further supports a probable pathogenic role for type I interferon in RA, perhaps based on the tendency for systemic type I interferon to promote autoantibody formation [37].

In contrast, the potential relevance of type I interferon, and more specifically IFNβ, produced locally in the joint as a protective factor in RA is suggested by in vitro studies of RA synovial membrane and experiments in murine models of inflammatory arthritis. In collagen-induced and adjuvant arthritis models, intraperitoneal or intraarticular injection of IFNβ resulted in reduction of disease activity and inhibition of cartilage and bone destruction through a significant decrease of TNF and IL-6 expression and an enhancement of IL-10 responses at the site of inflammation [20,38,39]. Type I interferon might also positively affect arthritis by inhibiting the differentiation of monocytes into osteoclasts, thereby reducing bone resorption and erosions [40]. Studies of human tissue have indicated that IFNβ is present in RA synovial membranes and reduces synoviocyte proliferation in vitro - observations that have led to the suggestion that IFNβ is an anti-inflammatory mediator with a protective role in RA [21][22][23]. Administration of recombinant IFNβ in the context of a randomized, double-blind, placebo-controlled clinical trial for treatment of patients with active RA, however, showed no treatment effect with regard to clinical or radiographic scores [41]. Since synovial tissue from the patients who received the IFNβ therapy did not show a significant difference in numbers of infiltrating myeloid cells or T cells compared with the placebo group, it is possible that the dose or timing of IFNβ administration did not deliver sufficient cytokine to the joint to demonstrate an anti-inflammatory effect.

Figure 1. IFNα is predominantly a product of the peripheral immune system. In systemic lupus erythematosus (SLE), IFNα is produced at high levels and has systemic effects on multiple immune system pathways, promoting autoimmunity and inflammation. A more modest level of IFNα might also contribute to autoimmunity in type I diabetes mellitus (DM), multiple sclerosis (MS) and rheumatoid arthritis (RA), as demonstrated by data from murine models and an interferon-inducible gene signature in blood. IFNβ is produced in small amounts by myeloid cells but probably has its greatest impact locally where it is produced by fibroblasts and stromal cells. Type I interferon-inducible gene products, such as IL-10 and IL-1 receptor antagonist (IL-1ra), produced locally can blunt inflammation.

As type I interferons, particularly IFNβ, have been associated with anti-inflammatory activities in the setting of RA, and in view of the variable expression of an interferon signature among RA patients [35,36], we postulated that expression of type I interferon might represent a positive predictor of response to TNF-antagonist therapy in RA patients, while low levels of type I interferon might identify RA patients who would be candidates for alternative therapeutic options. To investigate this hypothesis, type I interferon activity was determined in plasma samples from a previously described RA cohort [42] prior to and during the course of TNF-antagonist therapy. We showed that RA patients collectively express increased plasma type I interferon activity relative to levels in healthy controls [43]. The most significant observation, and one that will require confirmation in larger populations, was that higher levels of type I interferon activity prior to therapy with TNF inhibitors are associated with better outcomes as defined by the European League Against Rheumatism (EULAR) RA improvement criteria [43]. In view of the data showing a protective role for IFNβ in murine models of inflammatory arthritis, we looked at which interferon was the major contributor to plasma type I interferon activity in the RA patients. Inhibition experiments using monoclonal anti-IFNα and anti-IFNβ antibodies revealed that both IFNα and IFNβ contribute to type I interferon activity in RA plasma [43]. This observation is in contrast to SLE, where anti-IFNβ antibodies have little effect on plasma type I interferon activity [44]. Moreover, a higher IFNβ/IFNα ratio prior to initiation of TNF inhibitor therapy was found to be associated with a better clinical response, pointing to IFNβ, rather than IFNα, as a key contributor to control of inflammation and predictor for a better response to TNF-antagonist therapy.

IFNβ has pleiotropic immunomodulatory actions - including decreased expression of the proinflammatory cytokines IL-1β and TNFα, and enhancement of the anti-inflammatory cytokines IL-1 receptor antagonist, IL-10, and transforming growth factor beta [45][46][47][48]. IFNβ has also been shown to mediate inhibition of MHC class II expression on activated PBMC [48], inhibition of T-cell activation [49] and decreased expression of adhesion molecules [50]. Since IL-1 receptor antagonist, an anti-inflammatory cytokine, can be induced by IFNβ, we measured IL-1 receptor antagonist levels in RA patient samples. A statistically significant association was detected between baseline IL-1 receptor antagonist levels and therapeutic outcome, pointing to an elevated plasma IL-1 receptor antagonist level as an additional predictor of good response in TNF inhibitor-treated patients [43]. Perhaps consistent with our results, a report from Sekiguchi and colleagues described variability in peripheral blood gene expression of RA patients treated with infliximab. Although not reaching statistical significance, there was a trend toward increased expression of interferon-inducible genes prior to initiation of treatment in those patients who went on to respond to therapy as determined by meeting an American College of Rheumatology 50% improvement response rate at week 22 [51].
Gene expression patterns over time were variable among responders and nonresponders and with time after initiation of therapy, with a typical decrease in interferon-inducible gene expression at the 2-week time point followed by an increase in some patients. A recent report from Van Baarsen and colleagues described data derived from whole-blood, real-time PCR analysis of a panel of interferon-response genes in RA patients treated with infl iximab [52]. Th at group also observed a range of baseline values and changes after initiation of therapy. Rather than comparing patients based on EULAR clinical response criteria, these investigators segregated patients into two groups based on the ratio of their interferon-inducible gene expression scores before and after 1 month of therapy. Th ose patients who showed an increase in type I interferon-inducible gene expression at 1 month tended to have a poor clinical response to treatment as determined at 16 weeks. Analysis of a subset of their patients identifi ed as EULAR responders or nonresponders supported this trend. Th is pattern of an increase in interferon pathway activation in TNF antagonist nonresponders is consistent with our earlier study of Sjögren's syndrome patients, in which we observed a general increase in plasma type I interferon activity at 12 weeks after start of therapy in patients treated with etanercept but not in those who received placebo [16]. No conclusion could be reached regarding the relationship of interferon activity to therapeutic response as the etanercept treatment was not effi cacious in those patients. Our laboratory is currently conducting studies to determine the distinct gene expression profi le induced by plasma from patients who show a clinical response to TNF inhibitors compared with those patients who do not show a good clinical response. Taken together, the available data support a relationship between type I interferon activity or interferon-inducible gene expression and eff ects of TNF blockade, with at least a trend toward higher levels of type I interferon prior to therapy being associated with a clinical response, and suggest that early incremental increase in interferoninducible gene expression compared with baseline levels might predict poor response to therapy. While TNF inhibitors have been highly successful in improv ing clinical outcomes for patients with RA, some patients do not respond. Additional therapeutic approaches have been approved for patients who prove to be TNF inhibitor nonresponders -including treatment with rituximab, the B-cell-depleting monoclonal antibody that targets B-cell CD20. Preliminary data from our collaborators suggest that in contrast to our results showing superior responses to TNF inhibitor therapy in patients with increased plasma type I interferon activity at baseline, those patients who show a superior response to anti-B-cell therapy have low levels of type I interferon at baseline [53]. While it would be clinically useful to have a bio marker that permitted selection of a therapeutic approach that would prove most eff ective based on measurement of type I interferon levels, it is very likely that the nature of RA, the complexity of the genetic contributors to therapeutic response, and the variability in the complement of mediators produced in each patient will not allow a simple predictive test. 
Nonetheless, the distinct relationships of systemic type I interferon levels in patients who respond to TNF inhibitors compared with those who respond to anti-B-cell therapy should stimulate new concepts regarding mechanisms of disease pathogenesis. Multiple sclerosis The moderate efficacy of recombinant IFNβ in patients with MS suggests the obvious conclusion that type I interferon is therapeutic rather than pathogenic in that disease [19]. It should be noted, however, that the clinical development programs which led to the approval of IFNβ did not define its mechanism of action. Nor has it been clear whether IFNβ offers a benefit different from that seen after administration of IFNα. In fact, the differential effects of IFNα and IFNβ are difficult to demonstrate. In general, the gene expression programs that are induced by IFNα versus IFNβ are largely overlapping [34]. While subtle differences in the binding properties of each of the interferons to IFNAR, their common receptor, have been predicted based on analysis of their amino acid sequence and mutation studies, and there are demonstrated differences in engaging downstream signaling components by the two type I interferon subtypes, their functional impact on gene expression is quite comparable [32][33][34]. In light of the frequent administration of therapeutic IFNβ, it is perhaps surprising that gene expression analysis of patients with relapsing remitting multiple sclerosis (RRMS) (untreated with IFNβ) has demonstrated an interferon signature similar to the more classic signature seen in many patients with SLE [31,54,55]. Van Baarsen and colleagues were among the first to discern the typical signature reflecting type I interferon activation in whole blood in their study of 29 patients with RRMS and 25 healthy controls [54]. Along with a signature of immunoglobulin-related transcripts, one of the most prominent groups of transcripts was enriched in interferon-induced genes. The authors performed several analyses of the differentially expressed genes in their dataset in comparison with genes defined as either type I or type II (IFNγ)-inducible based on data in the literature, and concluded that type I interferon-inducible genes were increased in RRMS patients compared with control subjects, whereas type II-induced genes were comparable between the two groups. Van Baarsen and colleagues went further, however, and analyzed the gene program with a view towards predicting whether bacteria (which tend to activate the immune response through NF-κB-activating TLR2 or TLR4 pathways) or viruses (which tend to activate the immune response through TLR3, TLR7 or TLR9 pathways and utilize MyD88) are more likely to be responsible for the gene program observed in the patients. The NF-κB program was not different between patients and controls, but the interferon-induced gene program, similar to that induced by viruses, was differentially expressed. The study also compared the pattern of overexpressed genes in the RRMS patients with those induced in macaques by smallpox infection, and found that more than 50% of the patients clustered with the virus-infected macaques. The differentially expressed genes that characterized this subset of RRMS patients corresponded to those that describe a common response pathway characterizing innate immune responses to microbes [54]. A role for type I interferon in RRMS is also supported by demonstration of IFNα, IFNβ, and MxA protein in brain lesions of patients with MS [56][57][58].
In acute lesions, astrocytes stained positive for IFNβ, macrophages expressed more IFNα, and endothelial cells sometimes expressed both IFNα and IFNβ. Chronic lesions were more likely to be positive for IFNα [56]. MxA, a type I interferon-inducible gene product, is present in astrocytes, in infiltrating T lymphocytes, and in endothelial cells, and the presence of nearby plasmacytoid dendritic cells suggests that the interferon is produced locally [57,58]. MxA protein in peripheral blood of RRMS patients and elevated serum levels of type I interferon are also detected [55]. Since the assays used to detect type I interferon activity in MS sera are distinct from those that have been used by others to quantify that activity in SLE patients, the relative levels cannot be compared. Based on the requirement for IFNγ priming to detect MxA protein in IFNAR-positive WISH epithelial cells cultured with MS sera, however, it seems probable that the levels are likely to be lower in most MS patients than in SLE patients with detectable interferon activity. One interpretation of the data demonstrating local type I interferon and its induced protein products in MS brain is that the interferon is providing an immunosuppressive effect [56]. The paradigm of IFNα promoting systemic autoimmunity versus IFNβ reducing local inflammatory disease as an approach to understanding the role of type I interferons might apply to patients with MS treated with IFNβ. Consistent with the hypothesis that type I interferon inhibits TNF production are data from a study of RRMS patients treated for 18 to 24 months with IFNβ compared with patients not treated with IFNβ [31]. IL-12, TNF, and IFNγ levels were elevated in the plasma or culture supernatants from MS patients compared with controls, but TNF and IFNγ levels were significantly lower in patients treated with IFNβ compared with those not treated. Of interest, TNF levels in whole blood cultures stimulated with lipopolysaccharide, and IFNγ levels in supernatants of cultures stimulated with myelin basic protein, were not different from levels in healthy controls in patients who had been treated with IFNβ, but did increase further in RRMS patients who had not been treated. At least in the case of the TNF data, the results would support an inhibitory effect of IFNβ downstream of TLR4 that reduces target gene expression. A comprehensive analysis of IFNβ responders and nonresponders was recently published [59]. The study analyzed 47 patients with RRMS (29 responders and 18 nonresponders, with responders defined based on no increase in the Expanded Disability Status Scale and no relapses during 2 years of treatment). Comparison of baseline gene expression profiles in PBMC identified differentially expressed genes in the two groups. Of great interest, type I interferon-inducible genes were generally overexpressed in the nonresponder patients and represented the pathway most significantly associated with nonresponse to IFNβ. When assessed after 3 months of therapy, most IFNβ clinical responders showed a robust cellular response with increased expression of interferon-inducible genes, while the nonresponder group showed modest or no increases in levels of expression of those genes. In fact, a prediction algorithm identifying the eight genes that best distinguished IFNβ responders from nonresponders included five typical type I interferon-inducible genes (IFIT1, IFIT2, IFIT3, IFI44, and OASL).
The conclusions from the study of this initial cohort were validated in a second cohort including 15 responders and 15 nonresponders [59]. Consistent with the increased level of interferon-inducible gene transcripts in the nonresponder group, baseline phosphorylated-STAT1 levels were higher in nonresponder monocytes than in responder monocytes. In addition, type I interferon bioactivity was higher in the nonresponders than in responders or healthy donors. The authors of this highly informative study performed in vitro stimulation experiments to compare signaling downstream of IFNAR as well as in response to TLR ligands, and found roughly comparable responses in the two patient groups, with the exception of production of IFNα in response to lipopolysaccharide, which was significantly lower in responders than in nonresponders or healthy donors, as was expression of IFNAR1. The interpretation of these results suggests a complex role for the type I interferon system in MS: consistent with the Van Baarsen and colleagues study, a subset of RRMS patients showed a type I interferon signature in blood in the absence of treatment, with Comabella and colleagues showing increased bioactive type I interferon in the nonresponder group, an observation confirmed in a recent report [59,60]. The Comabella and colleagues study suggests that the high interferon group, those cases that do not respond to IFNβ, has an interferon pathway that is constitutively activated but is not further activated by administration of recombinant IFNβ. As the nonresponder group obviously has poorer outcomes than the IFNβ responders, one is led to the speculation that increased production of type I interferon in MS patients contributes to disease and refractoriness to therapy. Similar to mechanisms suggested relevant to SLE, myeloid dendritic cells in the nonresponder RRMS patients studied by Comabella and colleagues showed increased expression of the costimulatory molecule CD86, suggesting that those cells might be capable of effective activation of self-reactive T cells. One interpretation of the different profiles in the IFNβ responders and nonresponders is that when presented with an innate immune stimulus (such as lipopolysaccharide), the responder monocytes engage cellular mechanisms that reduce the capacity of the cells to produce type I interferon, while the cells from nonresponder patients do not ramp down that pathway. Impaired production of inhibitors of the Jak-STAT pathways activated by interferon binding to IFNAR was not demonstrated by the authors, as SOCS1, SOCS2 and PIAS1 expression was comparable between responders and nonresponders. Taken together, the data draw attention to the regulatory mechanisms that modulate innate immune responses downstream of TLRs, with TLR4 the relevant pathway in the RRMS patients. Consideration of the demonstrated increased type I interferon bioactivity, increased expression of interferon-inducible genes, and stimulatory dendritic cell phenotype in IFNβ-treated patients who do not respond to that treatment raises the possibility that, similar to the situation in SLE, type I interferon might be a pathogenic mediator in that subset of RRMS patients and might be an appropriate therapeutic target. Additional studies that characterize this interesting nonresponder group more completely in terms of immunologic and serologic parameters will be of great interest.
Although autoantibodies are not presumed to play as significant a pathogenic role in MS as T cells, it will be interesting to know whether the interferon-high nonresponder group demonstrates higher levels of relevant autoantibodies than the interferon-low responder group, as is the case in interferon-high SLE patients [18]. The induction of BAFF by IFNβ has been demonstrated in MS as in other diseases and could be a mechanism that contributes to increased humoral immunity. It will also be productive to compare T-cell responses to relevant self-antigens, such as myelin basic protein, in the IFN-high group, the prediction being that self-reactive T cells will be expanded or more readily activated by antigen-presenting cells in those patients. The somewhat counterintuitive data presented by Comabella and colleagues leave open the question of how IFNβ produces a beneficial effect in those patients who do respond. One should note that there is general agreement that recombinant IFNβ produces only modest responses in some patients. One prediction that could be tested using samples from the published study cohorts is that patients who go on to respond to IFNβ therapy are those with more robust TNF production. While the mechanisms that account for inhibition of TNF by type I interferon are not fully elucidated, the cytokine data do show a reduction in TNF in patients who complete 18 to 24 months of IFNβ therapy, many of whom are presumably clinical responders [31]. Augmentation of IL-10 by IFNβ through an IFNγ-dependent pathway might also contribute to amelioration of disease activity [60]. There seem to be three categories of defect associated with the IFNβ nonresponder RRMS patients: production of interferon is high; in the setting of the in vivo stimuli that characterize MS, IFNAR1 expression and signaling through TLR4 are not reduced in the nonresponders as they are in the responders; and the capacity to further activate transcription of type I interferon-inducible genes is abrogated. The latter alteration might be due to a system in overdrive in which all available transcription factors are engaged; in effect, the patient's immune system is desensitized to further activation by IFNβ. It should be noted that expression of gene transcripts typically associated with inflammatory states, such as CXCL10 and PBEF1, reaches substantially higher levels in the responders after 3 months of IFNβ therapy [59]. This concurrence of improved clinical activity and increased expression of proinflammatory mediators, at least at the transcript level, indicates that increased proinflammatory gene expression does not necessarily translate into increased inflammation. Perhaps the extremely high expression of IL1RN (IL-1 receptor antagonist) transcripts in the treated responders provides balance that counters the proinflammatory mediators. Type I diabetes mellitus If RA is an organ-focused systemic autoimmune and inflammatory disease in which local type I interferon is primarily anti-inflammatory, DM is an organ-targeted autoimmune disease in which type I interferon's major role, at least in murine models, is pathogenic. Stewart and colleagues were the first to demonstrate the capacity of IFNα to promote diabetes in a mouse model [61]. They showed increased expression of MHC class II and costimulatory molecules in the pancreas and linked the induction of activated antigen-presenting cells to development of self-reactive T cells.
Other investigators have confirmed the disease-amplifying role of type I interferon in the nonobese diabetic murine diabetes model [62][63][64]. While direct data regarding type I interferon expression at the site of disease are limited in patients with DM, diabetes has been induced in patients who have received therapeutic IFNα for hepatitis C, similar to the reports of development of lupus, inflammatory arthritis or MS [65]. In view of the abundant data from murine models of diabetes demonstrating a probable pathogenic role for type I interferon, along with the induction of diabetes in some patients receiving IFNα, Stewart has suggested that inhibition of IFNα with a specific monoclonal antibody might be beneficial [66]. An opposing view has been proposed by Brod, who has put forward the interesting concept that the three diseases reviewed, RA, MS and DM, represent IFNα deficiency states, perhaps based on an inadequate response to an undefined viral infection [67]. In that view, the high-level expression of type I interferon and interferon-inducible genes would reflect an active but insufficient effort of the innate immune system to control a more primary inflammatory process. With this idea in mind, Brod has conducted clinical trials in which IFNα is given in oral form to patients, with the hypothesis that the IFNα will generate immunosuppressive alterations in immune function. Brod has demonstrated in a murine model that oral IFNα administration results in increased interferon-inducible gene expression in T lymphocytes [67]. In a study of patients with recent-onset DM, a trend toward preservation of pancreatic β-cell function was observed in those who received 5,000 units of recombinant IFNα by the oral route daily, but not in those who received a higher dose, compared with those who received placebo [68]. No effect of treatment was seen in terms of hemoglobin A1c levels. Additional placebo-controlled trials will be required to determine whether oral administration of low-dose IFNα has a therapeutic effect in autoimmune diseases. Conclusions In contrast to SLE, where a primary pathogenic role for IFNα in autoimmunity and disease pathogenesis is supported by data from studies of genetic polymorphisms that are associated with increased type I interferon, an interferon signature in PBMC, murine lupus studies in which IFNα accelerates disease, and preliminary data from human trials indicating a positive therapeutic response in some patients receiving anti-IFNα monoclonal antibody, the role of IFNα is more complex in the three diseases reviewed. Some of the disease-associated gene variants that have been associated with increased IFNα production in patients with SLE, such as IRF5, Tyk2 or PTPN22, have also shown an association with RA, MS or DM, but the associations are not as well documented in those diseases [69][70][71][72][73]. Data in the literature support a possible pathogenic role for type I interferon in RA, MS and DM, based on demonstration of an interferon signature in blood in RA and MS and based on data from murine models in the case of DM. At the same time, type I interferon appears to play an anti-inflammatory protective role in the joint tissue of patients with RA and in several murine models of inflammatory arthritis. Similarly, some patients with MS demonstrate a beneficial therapeutic effect of IFNβ.
Of note, those who show a positive clinical response tend to be those who do not demonstrate an interferon signature prior to therapy and whose PBMC are responsive to type I interferon in vivo. In the case of both RA and MS, while systemic type I interferon might play a contributing role in induction of autoimmunity, its anti-inflammatory role might be more significant. Studies in DM are less well developed, and whether blockade of type I interferon to inhibit expansion of the autoimmune process, or administration of type I interferon to reduce destruction of β cells by an inflammatory process or to inhibit replication of a putative virus, would be more beneficial will require further investigation. Competing interests MKC has received a research grant from Novo Nordisk and has had consulting relationships with Biogen-Idec, Bristol Myers Squibb, EMD Merck Serono, Genentech/Roche, Idera, and MedImmune.
Control Design for Markov Chains under Safety Constraints: A Convex Approach This paper focuses on the design of time-invariant memoryless control policies for fully observed controlled Markov chains, with a finite state space. Safety constraints are imposed through a pre-selected set of forbidden states. A state is qualified as safe if it is not a forbidden state and the probability of it transitioning to a forbidden state is zero. The main objective is to obtain control policies whose closed loop generates the maximal set of safe recurrent states, which may include multiple recurrent classes. A design method is proposed that relies on a finitely parametrized convex program inspired by entropy maximization principles. A numerical example is provided and the adoption of additional constraints is discussed. I. INTRODUCTION The formalism of controlled Markov chains is widely used to describe the behavior of systems whose state transitions probabilistically among different configurations over time. Control variables act by dictating the state transition probabilities, subject to constraints that are specified by the model. Existing work has addressed the design of controllers that optimize a wide variety of costs that depend linearly on the parameters that characterize the probabilistic behavior of the system. The two most commonly used tools are linear and dynamic programming. For an extensive survey, see [2] and the references therein. We focus on the design of time-invariant memoryless policies for fully observable controlled Markov chains with finite state and control spaces, represented as $X$ and $U$, respectively. Given a pre-selected set $F$ of forbidden states of $X$, a state is qualified as F-safe if it is not in $F$ and the probability of it transitioning to an element of $F$ is zero. Here, forbidden states may represent unwanted configurations. We address a control design problem subject to safety constraints that consists of finding a control policy that leads to the maximal set of F-safe recurrent states $X^R_F$. This problem is relevant when persistent state visitation is desirable for the largest number of states without violating the safety constraint, such as in the context of persistent surveillance. We show in Section III that the maximal set of F-safe recurrent states $X^R_F$ is well defined and achievable by suitable control policies. As we discuss in Remark 2.2, $X^R_F$ may contain multiple recurrent classes, but does not intersect the set of forbidden states $F$. A. Comparison with existing work Safety-constrained controlled Markov chains have been studied in a series of papers by Arapostathis et al., where the state probability distribution is restricted to be bounded above and below by safety vectors at all times. In [4], [3] and [14], the authors propose algorithms to find the set of distributions whose evolution under a given control policy respects the safety constraint. In [18], an augmented Markov chain is used to find the maximal set of probability distributions whose evolution respects the safety constraint over all admissible non-stationary control policies. Here we are not concerned with the maximization of a given performance objective, but rather with systematically characterizing the maximal set of F-safe recurrent states and its corresponding control policies. The main contribution of this paper is to solve this problem via finitely parametrized convex programs.
Our approach is rooted in entropy maximization principles, and the proposed solution can be easily implemented using standard convex optimization tools, such as the ones described in [11]. B. Paper organization The remainder of this paper is organized as follows. Section II provides notation, basic definitions and the problem statement. The convex program that generates the maximal set of F-safe recurrent states is presented in Section III along with a numerical example. Further considerations are given in Section IV, while conclusions are discussed in Section V. II. PRELIMINARIES AND PROBLEM STATEMENT The following notation is used throughout the paper. The recursion of the controlled Markov chain is given by the (conditional) probability mass function of $X_{k+1}$ given the previous state $X_k$ and control action $U_k$, and is denoted as $Q(x^+ | x, u) = \Pr(X_{k+1} = x^+ \mid X_k = x, U_k = u)$ for $x^+, x$ in $X$ and $u$ in $U$. We denote any memoryless time-invariant control policy by a map $K : U \times X \to [0,1]$ satisfying $\sum_{u \in U} K(u, x) = 1$ for all $x$ in $X$, where $K(u, x)$ is the probability that action $u$ is selected at state $x$. The set of all such policies is denoted as $\mathcal{K}$. Assumption: Throughout the paper, we assume that the controlled Markov chain $Q$ is given. Hence, all quantities and sets that depend on the closed loop behavior will be indexed only by the underlying control policy $K$. Given a control policy $K$, the conditional state transition probability of the closed loop is represented as $P_K(x^+ | x) = \sum_{u \in U} Q(x^+ | x, u) K(u, x)$. We define the set of recurrent states $X^R_K$ and the set of F-safe recurrent states $X^R_{K,F}$ under a control policy $K$ to be $X^R_K = \{x \in X : x \text{ is recurrent under } P_K\}$ and $X^R_{K,F} = \{x \in X^R_K : x \notin F \text{ and } P_K(x^+ | x) = 0 \text{ for all } x^+ \in F\}$. The maximal set of F-safe recurrent states is defined as $X^R_F = \bigcup_{K \in \mathcal{K}} X^R_{K,F}$. The problem we address in this paper is defined below. Problem 2.1: Given the controlled Markov chain $Q$ and the forbidden set $F$, determine $X^R_F$ together with a control policy $K$ in $\mathcal{K}$ for which $X^R_{K,F} = X^R_F$. The following is a list of important observations on Problem 2.1: • The set $X^R_F$ may contain more than one recurrent class and it will exclude any recurrent class that intersects $F$. • There is no $K$ such that the states in $X \setminus X^R_F$ can be F-safe and recurrent. • If the closed loop Markov chain is initialized in $X^R_F$ then the probability that it will ever visit a state in $F$ is zero. III. MAXIMAL F-SAFE SET OF RECURRENT STATES We propose a convex program to solve Problem 2.1. Consider now the following convex optimization program: maximize $H(f_{XU})$ over $f_{XU}$ in $\mathcal{P}_{XU}$ (2), subject to: $\sum_{u^+ \in U} f_{XU}(x^+, u^+) = \sum_{x \in X} \sum_{u \in U} Q(x^+ | x, u) f_{XU}(x, u)$ for all $x^+$ in $X$ (3), and $\sum_{x \in X} \sum_{u \in U} Q(x^+ | x, u) f_{XU}(x, u) = 0$ for all $x^+$ in $F$ (4), where $H : \mathcal{P}_{XU} \to \mathbb{R}_{\geq 0}$ is the entropy of $f_{XU}$, and is given by $H(f_{XU}) = -\sum_{x \in X} \sum_{u \in U} f_{XU}(x, u) \ln f_{XU}(x, u)$, where we adopt the standard convention that $0 \ln(0) = 0$. The following Theorem provides a solution to Problem 2.1. Theorem 3.1: Let $F$ be given, and assume that (2)-(4) is feasible and that $f^*_{XU}$ is the optimal solution. In addition, adopt the marginal pmf $f^*_X(x) = \sum_{u \in U} f^*_{XU}(x, u)$, consider that $G : U \times X \to [0, 1]$ is any function satisfying $\sum_{u \in U} G(u, x) = 1$ for all $x$ in $X$, and define the policy $K^*_R(u, x) = f^*_{XU}(x, u)/f^*_X(x)$ if $f^*_X(x) > 0$, and $K^*_R(u, x) = G(u, x)$ otherwise (5). The following holds: (a) $X^R_{K,F} \subseteq S_{f^*_X}$ for every $K$ in $\mathcal{K}$; (b) $S_{f^*_X} = X^R_F$; and (c) $X^R_{K^*_R,F} = X^R_F$, where we use $S_{f^*_X} = \{x \in X : f^*_X(x) > 0\}$. The proof of Theorem 3.1 is given at the end of this section. In the numerical example, which has eight states, two control actions and a forbidden set $F$, $G \in [0, 1]^{2 \times 8}$ is any matrix whose columns sum up to 1. It is important to highlight some interesting points that would otherwise not be clear if we were to consider a large system. Note that state 6 is not in $X^R_F$ because, regardless of which control action is chosen, the probability of transitioning to $F$ is always positive. State 5 cannot be made recurrent even though it is a safe state. Furthermore, when the chain visits states 1 and 8, one of the two available control actions cannot be chosen, since that choice leads to a positive probability of reaching $F$. In this scenario there are two safe recurrent classes: {1,2,3} and {7,8}. Note that control action 1 cannot be chosen when the chain visits state 3 because that choice makes states 1, 2, and 3 transient. Remark 3.4: Since $f^*_{XU}$ maximizes the joint entropy, its support contains the support of every pmf that is feasible for (2)-(4); consequently, $K^*_R(u, x) > 0$ for every pair $(x, u)$ that is used with positive probability by some F-safe policy, where $K^*_R$ is an optimal solution given by (5).
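To make the program (2)-(4) and the policy recovery in (5) concrete, the following is a minimal numerical sketch in Python with cvxpy. The paper only points to standard convex optimization tools such as those in [11], so the library choice is an editorial assumption, and the 4-state, 2-action kernel and forbidden set below are illustrative stand-ins rather than the paper's 8-state example.

```python
import numpy as np
import cvxpy as cp

# Illustrative controlled Markov chain (NOT the paper's 8-state example):
# 4 states, 2 actions, forbidden set F = {3}.
# Q[x, u, x_next] = probability of moving to x_next from state x under action u.
n, m = 4, 2
Q = np.zeros((n, m, n))
Q[0, 0] = [0.0, 1.0, 0.0, 0.0]            # from 0, action 0 -> state 1
Q[0, 1] = [0.0, 0.0, 1.0, 0.0]            # from 0, action 1 -> state 2
Q[1, 0] = [1.0, 0.0, 0.0, 0.0]
Q[1, 1] = [0.5, 0.5, 0.0, 0.0]
Q[2, 0] = [0.0, 0.0, 0.0, 1.0]            # unsafe action: leads into F
Q[2, 1] = [0.5, 0.0, 0.5, 0.0]
Q[3, 0] = Q[3, 1] = [0.0, 0.0, 0.0, 1.0]  # forbidden state, absorbing
F = [3]

f = cp.Variable((n, m), nonneg=True)      # joint pmf f_XU(x, u)
inflow = [cp.sum(cp.multiply(Q[:, :, xp], f)) for xp in range(n)]
constraints = [cp.sum(f) == 1]
constraints += [cp.sum(f[xp, :]) == inflow[xp] for xp in range(n)]  # invariance, (3)
constraints += [inflow[xp] == 0 for xp in F]                        # F-safety, (4)

prob = cp.Problem(cp.Maximize(cp.sum(cp.entr(f))), constraints)     # entropy, (2)
prob.solve()

f_star = np.maximum(f.value, 0.0)
f_X = f_star.sum(axis=1)
support = np.where(f_X > 1e-8)[0]
print("maximal F-safe recurrent set S_{f*_X}:", support)            # -> [0 1 2]

# Policy recovery as in (5): K*(u, x) = f*(x, u) / f*_X(x) on the support;
# any stochastic column (here uniform) elsewhere.
K = np.full((m, n), 1.0 / m)
for x in support:
    K[:, x] = f_star[x, :] / f_X[x]
print("optimal memoryless policy K*(u, x):\n", np.round(K, 3))
```

The solver must support the exponential cone (the default cvxpy distribution ships Clarabel and SCS, which do). In this toy instance the recovered policy randomizes at states 0 and 1 over every action that keeps the chain F-safe, consistent with Remark 3.4, while the unsafe action at state 2 receives zero probability.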
A. Proof of Theorem 3.1 To facilitate the proof we first establish the following Lemma. Lemma 3.5: Let $Y$ be a finite set and $W$ be a convex subset of $\mathcal{P}_Y$, the set of all pmfs with support in $Y$. Consider the following problem: $f^* \in \arg\max_{f \in W} H(f)$, where $H(f)$ is the entropy of $f$ in $\mathcal{P}_Y$ and is given by $H(f) = -\sum_{y \in Y} f(y) \ln f(y)$, where we adopt the convention that $0 \ln(0) = 0$. The following holds: $S_f \subseteq S_{f^*}$ for every $f$ in $W$. Proof: Select an arbitrary $f$ in $W$ and define $f_\lambda = \lambda f^* + (1-\lambda) f$; since $W$ is convex, we conclude that $f_\lambda$ is in $W$ for all $\lambda$ in $[0, 1]$. Since $f^*$ has maximal entropy, it must be that $H(f_\lambda) \leq H(f^*)$ holds for all $\lambda$ in $[0, 1]$ (6). Proof by contradiction: Suppose that $S_f \not\subseteq S_{f^*}$, and hence that there exists a $y'$ in $Y$ such that $f(y') > 0$ and $f^*(y') = 0$. We have that $\frac{d}{d\lambda}\big(f_\lambda(y') \ln f_\lambda(y')\big) = -f(y')\big(\ln f_\lambda(y') + 1\big)$ goes to $\infty$ as $\lambda$ approaches 1, since $\lim_{\lambda \to 1} f_\lambda(y') = 0$. Hence the entropy is strictly decreasing in $\lambda$ near $\lambda = 1$, which implies that there exists a $\bar{\lambda}$ in $[0, 1)$ such that $H(f_{\bar{\lambda}}) > H(f_1) = H(f^*)$, which contradicts (6). See [8] for an alternative proof that relies on the concept of relative entropy. Proof of Theorem 3.1: (a) (Proof that $X^R_{K,F} \subseteq S_{f^*_X}$ holds for all $K$ in $\mathcal{K}$.) Select an arbitrary control policy $K$ in $\mathcal{K}$. There are two possible cases: i) When $X^R_{K,F}$ is the empty set, the statement follows trivially. ii) If $X^R_{K,F}$ is non-empty, then the closed loop must have an invariant pmf $f^K_{XU}$ that satisfies the following: $f^K_{XU}(x^+, u^+) = K(u^+, x^+) \sum_{x \in X} \sum_{u \in U} Q(x^+ | x, u) f^K_{XU}(x, u)$ for all $x^+$ in $X$ and $u^+$ in $U$ (7), $S_{f^K_X} = X^R_{K,F}$ (8), and $\sum_{x \in X} \sum_{u \in U} Q(x^+ | x, u) f^K_{XU}(x, u) = 0$ for all $x^+$ in $F$ (9). Equation (7) follows from the fact that $f^K_{XU}$ is an invariant pmf of the closed loop, while (8)-(9) follow from the definition of $X^R_{K,F}$. Our strategy to conclude the proof (that $X^R_{K,F} \subseteq S_{f^*_X}$ holds) is to show that $f^K_{XU}$ is a feasible solution of the convex program (2)-(4), after which we can use Lemma 3.5. In order to show the aforementioned feasibility, note that summing both sides of equation (7) over the set of control actions yields (3), where we use the fact that $\sum_{u^+ \in U} K(u^+, x^+) = 1$. Moreover, the constraint (4) and the F-safety equality in (9) are identical. Therefore, $f^K_{XU}$ is a feasible pmf for (2)-(4). By Lemma 3.5, it follows that $S_{f^K_X} \subseteq S_{f^*_X}$ and, consequently, that $X^R_{K,F} \subseteq S_{f^*_X}$. (b) (Proof that $S_{f^*_X} = X^R_F$ holds.) The inclusion $X^R_F \subseteq S_{f^*_X}$ follows from (a) by taking the union over all $K$ in $\mathcal{K}$. To prove that $S_{f^*_X} \subseteq X^R_F$, select an optimal policy $K^*_R$ as in (5), and note that the corresponding closed loop pmf $f^*_{XU}$ is an invariant distribution, leading to: $f^*_{XU}(x^+, u^+) = K^*_R(u^+, x^+) \sum_{x \in X} \sum_{u \in U} Q(x^+ | x, u) f^*_{XU}(x, u)$. Consider any element $\bar{x}$ in $S_{f^*_X}$. Since $\sum_{u \in U} f^*_{XU}(\bar{x}, u) > 0$ holds and from the fact that $f^*_{XU}$ is an invariant distribution of the closed loop, we conclude that $f^*_X$ is an invariant pmf of the closed-loop transition probability $P_{K^*_R}$ with $f^*_X(\bar{x}) > 0$; since every state in the support of an invariant pmf of a finite Markov chain is recurrent, this means that $\bar{x}$ belongs to $X^R_{K^*_R}$. From (4), it is clear that $\bar{x}$ is an F-safe state and, thus, belongs to $X^R_{K^*_R,F}$. Hence, by definition, $\bar{x}$ belongs to $X^R_F$. Since the choice of $\bar{x}$ in $S_{f^*_X}$ was arbitrary, we conclude that $S_{f^*_X} \subseteq X^R_F$. (c) (Proof that $X^R_{K^*_R,F} = X^R_F$ holds.) Follows from the proof of (b). IV. FURTHER CONSIDERATIONS Computational complexity reduction. Consider the following convex program: maximize $H(f_X)$, with $f_X(x) = \sum_{u \in U} f_{XU}(x, u)$, subject to (3)-(4), where the objective function has been modified to be the entropy of the marginal pmf with respect to the state (rather than the joint entropy as in (2)). A simple modification of Theorem 3.1 leads to the conclusion that $S_{\tilde{f}_X} = S_{f^*_X}$. Therefore, the modified program also provides a solution to Problem 2.1, with the advantage that it requires fewer calls to the entropy function, thus reducing computational complexity. However, the optimal control policy $\tilde{K}_R$ (obtained in an analogous manner as in (5)) may differ from $K^*_R$. The most significant difference is that Remark 3.4 does not apply to $\tilde{K}_R$. Additional constraints. Further convex constraints on $f_{XU}$ may be incorporated in the optimization problem (2)-(4) without affecting its tractability.
For instance, consider constraints of the following type: $\sum_{x \in X} \sum_{u \in U} h(x, u) f_{XU}(x, u) \leq \beta$ (10), where $h$ is an arbitrary function and $\beta$ an arbitrary real number. Let $f^*_{XU}$ be an optimal solution to (2)-(4) and (10). If the Markov chain is initialized with $f^*_{XU}$ (an invariant distribution), the following holds for each $k$: $E[h(X_k, U_k)] = \sum_{x \in X} \sum_{u \in U} h(x, u) f^*_{XU}(x, u) \leq \beta$. Moreover, when $X^R_F$ contains only one aperiodic recurrent class, the following holds for any initial distribution: $\lim_{k \to \infty} E[h(X_k, U_k)] \leq \beta$. V. CONCLUSION This paper addresses the design of full-state feedback policies for controlled Markov chains defined on finite alphabets. The main problem is to design policies that lead to the largest set of recurrent states for which the probability of transitioning to a pre-selected set of forbidden states is zero. The paper describes a finitely parametrized convex program that solves the problem via entropy maximization principles.
Dual ligand-targeted Pluronic P123 polymeric micelles enhance the therapeutic effect of breast cancer with bone metastases Bone metastasis secondary to breast cancer negatively impacts patient quality of life and survival. The treatment of bone metastases is challenging since many anticancer drugs are not effectively delivered to the bone to exert a therapeutic effect. To improve the treatment efficacy, we developed Pluronic P123 (P123)-based polymeric micelles dually decorated with alendronate (ALN) and cancer-specific phage protein DMPGTVLP (DP-8) for targeted drug delivery to breast cancer bone metastases. Doxorubicin (DOX) was selected as the anticancer drug and was encapsulated into the hydrophobic core of the micelles with a high drug loading capacity (3.44%). The DOX-loaded polymeric micelles were spherical, 123 nm in diameter on average, and exhibited a narrow size distribution. The in vitro experiments demonstrated that a pH decrease from 7.4 to 5.0 markedly accelerated DOX release. The micelles were well internalized by cultured breast cancer cells and the cell death rate of micelle-treated breast cancer cells was increased compared to that of free DOX-treated cells. Rapid binding of the micelles to hydroxyapatite (HA) microparticles indicated their high affinity for bone. P123-ALN/DP-8@DOX inhibited tumor growth and reduced bone resorption in a 3D cancer bone metastasis model. In vivo experiments using a breast cancer bone metastasis nude model demonstrated increased accumulation of the micelles in the tumor region and considerable antitumor activity with no organ-specific histological damage and minimal systemic toxicity. In conclusion, our study provided strong evidence that these pH-sensitive dual ligand-targeted polymeric micelles may be a successful treatment strategy for breast cancer bone metastasis. 
Introduction Breast cancer is the most common malignant tumor among women worldwide. There were 2.3 million women diagnosed with breast cancer and 685,000 deaths globally in 2020 [1]. Patients with breast cancer frequently suffer from metastatic relapse months to decades after the initial diagnosis and treatment, which dramatically decreases their life expectancy. The organ in which breast cancer metastasis most commonly occurs is bone [2], with bone metastasis secondary to breast cancer occurring in approximately 70% of patients with advanced disease [3]. Bone metastases are generally osteolytic, as cancer cells obstruct the normal bone remodeling process and enhance osteoclast-mediated bone resorption [4]. As a result, patients with bone metastases often suffer from pain and are at risk for skeletal complications such as fractures, hypercalcemia, and spinal cord compression, all of which substantially decrease their quality of life [5,6]. Despite advances in cancer treatment, therapeutic options for bone metastases remain inadequate and generally palliative [7,8]. Current treatments aim to inhibit tumor growth and bone resorption. Inhibiting tumor growth remains a challenge, as many anticancer agents, such as chemotherapy drugs and radiopharmaceuticals, do not reach the bone in efficacious concentrations due to the hardness, poor permeability, and physiological and biochemical processes of bone [9]. Increasing the dose of these drugs to achieve therapeutic effects is generally not possible because it leads to severe systemic side effects due to their lack of tissue specificity [10,11]. Slowing bone resorption, in contrast, is easier to achieve by inhibiting the activity of osteoclasts, although this approach has smaller effects on tumor growth. Therefore, a bone-targeted drug delivery system to simultaneously inhibit tumor growth and bone resorption would be a major advance in the treatment of bone metastases. Polymeric micelles as drug carriers have been demonstrated to be promising in cancer chemotherapy and are already under clinical evaluation [12][13][14]. In our previous work [15], we synthesized polymeric micelles using Pluronic P123 (P123; a polyoxyethylene-polyoxypropylene-polyoxyethylene triblock copolymer). We chose P123 due to its commercial availability, biocompatibility, and safety profile [16]. The micelles can be easily prepared through the self-assembly of amphiphilic copolymers, which results in the formation of a core-shell structure. The hydrophilic shell makes the micelles water soluble, allowing for intravenous delivery, while the hydrophobic core carries the drug payload for therapy. The micelles provide various advantages, including drug solubilization, controlled drug release, escape from reticuloendothelial system uptake, and passive tumor targeting by enhanced permeability and retention effects [17,18]. In addition, Pluronic molecules can inhibit the drug-efflux protein P-glycoprotein, which hinders the distribution of many drugs to multidrug-resistant tumors [19]. Moreover, active targeting can be achieved by conjugating target-specific moieties (ligands) to the terminal hydroxyl groups of P123 on the surface of the micelles.
Bisphosphonates are a group of drugs that prevent the loss of bone density by inducing apoptosis in osteoclasts and can be used as bone-targeting ligands. It has been shown that drugs or drug carriers conjugated with alendronate (ALN), a second-generation bisphosphonate, accumulate to a greater extent in bone than in healthy tissues [20,21]. Additionally, the accumulation of ALN is 10- to 20-fold higher in the bone tumor environment than in healthy bone due to the extensive bone remodeling by cancer cells [22][23][24]. Therefore, in this work, to improve the selectivity of P123 micelles for breast cancer bone metastases and simultaneously inhibit bone resorption, ALN was used to modify the P123 micelles. To further target breast cancer cells while also decreasing toxicity to normal cells, DMPGTVLP fused to the N-terminus of the p8 phage protein (DP-8) was added as a second ligand. This cancer-specific phage protein was isolated from the 8-mer (f8/8) phage landscape library by screening against MCF-7 breast cancer cells [25]. DP-8 is able to bind to nucleolin [25,26], a multifunctional protein involved in several cellular processes that has been shown to be overexpressed on the surface of breast cancer cells and several other types of cancer cells [27,28]. One of the diverse roles of nucleolin is to serve as a shuttling protein between the cytoplasm and nucleus, which is one of the mechanisms underlying the extracellular regulation of nuclear events [26]. It was shown that DP-8 enhanced the accumulation and antitumor activity of nanomedicines in mouse models of breast cancer [29,30]. In this study, to enhance the therapeutic effect of doxorubicin (DOX) on breast cancer bone metastases, P123 micelles conjugated with ALN and DP-8 (P123-ALN/DP-8) were prepared by the thin-film hydration method, in which ALN, DP-8, and Pluronic were linked by amide bonds. ALN and DP-8 were thus selected as targeting ligands to deliver the chemotherapeutic drug DOX to bone metastasis sites. The antitumor and anti-bone resorption effects of the bone-targeting micelles were further investigated in vitro and in vivo. Cell culture Human triple-negative breast cancer (MDA-MB-231) cells were purchased from Nanjing KGI Biotechnology Development Company (Jiangsu, China). Animals Female athymic BALB/c-nu/nu mice (aged 4-6 weeks, 18 ± 2 g) and female Sprague-Dawley rats (aged 5-7 days, 2 ± 1 g) were purchased from the Suzhou Cavins Park Model Animal Research Company (Jiangsu, China). All animal procedures were performed in accordance with protocols approved by the Animal Care and Use Committee, Jiaxing University (approval number: JUMC2022-152).
Synthesis of P123-ALN/DP-8 First, the Pluronic P123 copolymer was activated using NHS. Briefly, P123 copolymer (3.468 g, 0.6 mmol) and triphosgene (0.3560 g, 1.2 mmol) were dissolved in 30 mL of a 2:1 solution of anhydrous toluene and anhydrous dichloromethane at 25°C and stirred at 500 rpm overnight. After evaporation, the residue was dissolved in 20 mL of a 2:1 solution of anhydrous toluene and anhydrous dichloromethane. Next, NHS (0.1370 g, 1.2 mmol) and anhydrous triethylamine (0.2 mL) diluted with anhydrous dichloromethane (1 mL) were added dropwise, and the mixture was stirred at 200 rpm for 4 h. After the reaction was completed, the solution was filtered. The residue was dissolved in 100 mL of ethyl acetate at 50°C and then filtered again. Ethyl acetate was removed from the solution by rotary evaporation. The solidified reaction product (named P123-NHS) was cooled and stored under dry conditions at −20°C. Second, ALN and DP-8 were conjugated to the P123 copolymer. Briefly, ALN (0.3900 g, 1.2 mmol) or DP-8 (0.85 g, 1.2 mmol) was slowly added to P123-NHS (0.85 g, 1.2 mmol) dissolved in 10 mL of PBS. The mixture was stirred at 200 rpm for 24 h under a nitrogen atmosphere. Then, the product was dialyzed in deionized water for 72 h, with the aqueous phase changed every 24 h. Last, the product was lyophilized and stored at −20°C. Preparation of micelles The thin-film hydration method was applied to prepare P123-ALN/DP-8@DOX [15]. In brief, 5.48 mg of DOX·HCl was added to 7.28 mL of methanol, three times the molar amount of triethylamine was added, and the mixture was stirred magnetically overnight at room temperature to achieve full dissolution. Next, 50 mg of P123, 40 mg of P123-ALN and 10 mg of P123-DP-8 polymer were added, and the solvent was removed by rotary evaporation. After that, 8.58 mL of sterilized water for injection was added for hydration, and the mixture was stirred magnetically for 30 min. The solution was further filtered through a 0.22-μm microporous membrane filter and freeze-dried overnight to prepare P123-ALN/DP-8@DOX, which was stored at −20°C until further use. P123@DOX micelles were prepared and stored in the same manner. Characterization of P123-ALN/DP-8@DOX micelles The morphology of the micelles was observed with a transmission electron microscope (JEOL, Tokyo, Japan). The micelles were dissolved in an appropriate amount of distilled water, and a drop of the solution was placed on a copper mesh and negatively stained with phosphotungstic acid before observation. A particle size and zeta potential analyzer (Mastersizer 3000, Malvern Instruments, Worcestershire, UK) was used to measure the particle size and zeta potential.
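As a quick sanity check on the feed ratio in this protocol, one can compute the theoretical maximum drug loading implied by the amounts above. The short Python sketch below is an editorial illustration, not a calculation from the study; the molecular weights are the standard values for doxorubicin (543.5 g/mol) and its hydrochloride salt (580.0 g/mol), and all masses come from the preparation step just described.

```python
# Theoretical maximum DOX loading from the feed (editorial back-of-the-envelope).
feed_dox_hcl_mg = 5.48                         # DOX*HCl feed from the protocol
polymer_mg = 50.0 + 40.0 + 10.0                # P123 + P123-ALN + P123-DP-8
dox_base_mg = feed_dox_hcl_mg * 543.5 / 580.0  # convert salt mass to free base

dl_max = dox_base_mg / (dox_base_mg + polymer_mg) * 100.0
print(f"theoretical max DL = {dl_max:.2f}%")   # ~ 4.9%
```

The measured DL of 3.44% reported later for these micelles sits below this ceiling, as expected once the roughly 77% encapsulation efficiency is taken into account.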
Determination of drug entrapment efficiency (EE) and DOX loading (DL) capacity The concentration of entrapped DOX was determined by high-performance liquid chromatography (HPLC) using a Waters 2487 system (Waters Corporation, Milford, MA, USA) equipped with a C18 column (4.6 mm × 250 mm); the mobile phase consisted of a buffer (1.44 g of sodium dodecyl sulfate and 0.68 mL of phosphoric acid dissolved in 500 mL of water), acetonitrile and methanol (40:54:6, v:v:v), delivered at a flow rate of 1.0 mL/min. The injection volume was 20 μL. The detection wavelength was 254 nm, and the column temperature was 25°C. The concentration of DOX was determined based on the peak area. The encapsulation rate and drug load were calculated according to the following formulas: EE% = (measured amount of DOX)/(amount of DOX added) × 100%; DL% = (weight of DOX in the micelle)/(weight of the micelle) × 100%. In vitro DOX release The release of DOX from the micelles was analyzed by the dialysis method at different pH values (pH = 5.0, 6.8, and 7.4) with 1% Tween-80. Briefly, 2 mL of P123-ALN/DP-8@DOX micelles was separately dispersed in 20 mL of PBS buffer and loaded into a dialysis bag (molecular weight cutoff of 3.0 kDa). The release system was kept at 37°C under continuous stirring at 100 rpm, and 2 mL of release medium was collected, with an equal volume of fresh PBS added, at the predetermined time points (1, 2, 4, 8, 12, 24, 48, 72, 96 and 120 h). The cumulative released DOX in PBS was analyzed by HPLC. The measurement was conducted in triplicate. HA binding assay The affinity of the micelles for bone was evaluated using HA particles. HA microparticles (100 mg) were mixed with 10 mL of free DOX, P123@DOX, or P123-ALN/DP-8@DOX (final DOX concentration of 400 μg/mL) dissolved in PBS (pH = 7.4) and vortexed. The HA microparticles were allowed to settle, and 2 mL of supernatant was withdrawn for analysis at predetermined time intervals (15, 30, 60, and 90 min). The absorbance of the suspensions at 589 nm before (A_before) and after (A_after) mixing with HA microparticles was recorded using a fluorescence spectrophotometer (Hitachi, Tokyo, Japan). The following equation was used to determine the adsorption rate: Adsorption rate% = (A_before − A_after)/A_before × 100%. In vitro uptake analysis (confocal microscopy and flow cytometry) The internalization and intracellular localization of P123-ALN/DP-8@DOX were evaluated by confocal microscopy. MDA-MB-231 cells (cell concentration of 1 × 10⁴/mL, volume of 500 μL) were seeded in a small confocal dish and placed at 37°C in a 5% CO₂ incubator for 24 h. Then, free DOX, P123@DOX, and P123-ALN/DP-8@DOX solutions were added at a final DOX concentration of 10 µg/mL in each well. After 0.5 and 2 h of incubation, the cells were fixed with 0.5 mL of 4% paraformaldehyde and stained with 200 µL of DAPI (100 ng/mL in Milli-Q water) for 15 min. Finally, the cells were observed by confocal microscopy using the Leica TCS SP2 system (Leica, Heidelberg, Germany). DAPI and DOX were excited at 405 and 620 nm, respectively.
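The EE%, DL%, and adsorption-rate quantities defined above are simple ratios; the following Python helpers merely encode those formulas for batch processing of HPLC and absorbance readings (the language choice and the numbers in the example calls are editorial, not measured values from this study).

```python
def encapsulation_efficiency(measured_dox_mg: float, added_dox_mg: float) -> float:
    """EE% = measured amount of DOX / amount of DOX added x 100%."""
    return measured_dox_mg / added_dox_mg * 100.0

def drug_loading(dox_in_micelle_mg: float, micelle_weight_mg: float) -> float:
    """DL% = weight of DOX in the micelle / weight of the micelle x 100%."""
    return dox_in_micelle_mg / micelle_weight_mg * 100.0

def adsorption_rate(a_before: float, a_after: float) -> float:
    """Adsorption rate% = (A_before - A_after) / A_before x 100%."""
    return (a_before - a_after) / a_before * 100.0

# Hypothetical readings, for illustration only.
print(f"EE  = {encapsulation_efficiency(4.2, 5.48):.1f}%")
print(f"ADS = {adsorption_rate(1.00, 0.48):.1f}%")
```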
Cellular uptake was quantified using flow cytometry to confirm the internalization of the treatments. MDA-MB-231 cells (cell concentration of 1 × 10⁴/mL, volume of 2 mL) were inoculated in a six-well plate and placed at 37°C in a 5% CO₂ incubator for 24 h. Then, free DOX, P123@DOX, and P123-ALN/DP-8@DOX solutions were added at a final DOX concentration of 10 µg/mL in each well. After 0.5 and 2 h of incubation, the cells were washed with PBS, digested with 0.25% trypsin and centrifuged at 800 rpm for 5 min. The supernatant was discarded, and 500 μL of PBS-suspended cells was subjected to flow cytometry (Sony Biotechnology, Tokyo, Japan). Data were processed and plotted using GraphPad Prism (GraphPad Software, La Jolla, CA, USA). In vitro cytotoxicity The MTT assay is widely used for the study of cell proliferation and cytotoxicity. Briefly, MDA-MB-231 cells (cell concentration of 1 × 10⁴/mL, volume of 100 μL) were seeded in 96-well plates and placed at 37°C in a 5% CO₂ incubator for 24 h. After washing once with PBS, free DOX, P123@DOX, and P123-ALN/DP-8@DOX solutions were added at final DOX concentrations ranging from 0 to 40 µg/mL in each well and incubated for 48 h under the same conditions. Then, 20 μL of MTT solution (5 mg/mL) was added to each well and incubated for another 4 h. Afterward, the medium was replaced with 300 μL of DMSO. The optical density (OD) of each well was measured at 490 nm using a BioTek Synergy microplate reader (BioTek, Winooski, Vermont, USA). The cell viability was determined with the following equation: cell viability (%) = (OD_sample − OD_blank)/(OD_control − OD_blank) × 100%. The 50% inhibitory concentration (IC₅₀) was calculated from the cell viability at the corresponding concentrations using GraphPad Prism. Analysis of the cell apoptosis mode and Western blotting The Annexin V-FITC/PI Apoptosis Detection Kit is a universal reagent kit for detecting apoptosis. Briefly, MDA-MB-231 cells (cell concentration of 1 × 10⁴/mL, volume of 2 mL) were seeded in a six-well plate and placed at 37°C in a 5% CO₂ incubator for 24 h. Then, free DOX, P123@DOX, and P123-ALN/DP-8@DOX solutions were added at a final DOX concentration of 10 µg/mL in each well. After culturing for 48 h, the cells were trypsinized, collected in 1.5-mL sterile centrifuge tubes and incubated with 5 µL of annexin V-FITC (100 ng/mL) in the dark for 10 min. Then, 5 µL of propidium iodide (PI) was added to each group before the cells were analyzed by flow cytometry (Sony Biotechnology, Tokyo, Japan). Untreated cells were used as a control. The tests were conducted in triplicate, and the apoptosis rate was determined using FlowJo software (BD Biosciences, San Jose, CA, USA).
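The viability formula and the IC₅₀ readout above can be reproduced computationally. The sketch below fits a four-parameter logistic dose-response curve, the same model family GraphPad uses for IC₅₀ estimation, with scipy; the viability data in it are made up for illustration and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def viability(od_sample: float, od_blank: float, od_control: float) -> float:
    """Cell viability% = (OD_sample - OD_blank) / (OD_control - OD_blank) x 100%."""
    return (od_sample - od_blank) / (od_control - od_blank) * 100.0

def four_pl(c, top, bottom, ic50, hill):
    """Four-parameter logistic: viability falls from `top` to `bottom` around ic50."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

# Hypothetical dose-response data (concentration in ug/mL, viability in %).
conc = np.array([0.08, 0.4, 2.0, 10.0, 40.0])
viab = np.array([98.0, 85.0, 55.0, 30.0, 20.0])

params, _ = curve_fit(four_pl, conc, viab, p0=[100.0, 10.0, 1.0, 1.0], maxfev=10000)
print(f"fitted IC50 ~ {params[2]:.2f} ug/mL")
```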
For Western blotting, MDA-MB-231 cells were seeded and treated as described in the apoptosis assay above. The cells were collected and lysed for 30 min on ice with RIPA buffer containing a cocktail of protease and phosphatase inhibitors. After centrifugation at 15,000 rpm for 20 min at 4°C, the supernatant was collected. The protein concentrations of the supernatant were measured using a Nanodrop spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). The proteins were separated by SDS-PAGE (5% acrylamide, 10% SDS, 6 μg/μL protein concentration, 10 µL/well loading amount, running voltage of 80 V for 30 min, then 120 V for 90 min), electrotransferred to polyvinylidene fluoride (PVDF) membranes (Thermo Fisher Scientific; 220 mA, 120 min, 4°C), and blocked with QuickBlock Blocking Buffer (Beyotime, Shanghai, China) for 10 min at room temperature. The blocked PVDF membranes containing the proteins were incubated with primary antibodies against caspase-3 (Abcam, Cambridge, MA, USA) and β-actin (Abcam, Cambridge, MA, USA) overnight at 4°C. Then, the membranes were washed and incubated with horseradish peroxidase-linked secondary antibodies (Abcam, Cambridge, MA, USA) for 1 h at room temperature. The target protein bands were visualized by enhanced chemiluminescence (Bio-Rad ChemiDoc XRS imaging system, Hercules, CA, USA). β-actin was used as a loading control. In vitro bone binding analysis Parietal bones from rats were used to further illustrate the specific targeting of bone tissue by P123-ALN/DP-8@DOX micelles. Five- to seven-day-old Sprague-Dawley rats were sacrificed, and their parietal bones, including the surrounding excess tissue, were dissected under sterile conditions. The bones were washed with normal saline until they were clean. Then, the bones were washed twice with PBS buffer. One piece of bone was placed into each of the wells of a six-well plate with DMEM containing 10% FBS and 5 mg/mL BSA. The medium was replaced with fresh medium containing free DOX, P123@DOX, or P123-ALN/DP-8@DOX at a final DOX concentration of 10 µg/mL for 48 h. Next, the bones were removed and split in two. Half of the bones were fixed with 2.5% glutaraldehyde for 2 h, dehydrated in a series of tert-butyl alcohol solutions (50%, 70%, 95%, and 100%) for 15 min each, and gold-coated. Field emission scanning electron microscopy (JEOL, Tokyo, Japan) was used to observe free DOX, P123@DOX, and P123-ALN/DP-8@DOX on the bone tissue. Antitumor, antiosteoclastogenic, and anti-bone resorption activities in an in vitro 3D bone metastasis model of breast cancer To further evaluate the effects of P123-ALN/DP-8@DOX on bone metastases, an in vitro 3D model was constructed using the parietal bones of 5- to 7-day-old Sprague-Dawley rats to simulate the in vivo bone metastasis microenvironment. In brief, parietal bones were obtained as described in the section on bone targeting. One piece of bone was placed into a six-well plate and incubated with complete DMEM for 24 h. Then, MDA-MB-231 cells were seeded at a density of 1 × 10⁵ cells/well and coincubated with the bones under standard culture conditions. At predetermined intervals of 0, 2, 4, and 6 days, the bones were removed, fixed with 2.5% glutaraldehyde for 2 h, dehydrated in a series of tert-butyl alcohol solutions of 50%, 70%, 95%, and 100% for 15 min each, and gold-plated. Field emission scanning electron microscopy (JEOL, Tokyo, Japan) was used to observe the tumor cells on the bone tissue.
After the successful establishment of the three-dimensional in vitro model of bone metastasis from breast cancer, parietal bones were obtained, and MDA-MB-231 cells were incubated for 6 days under standard culture conditions. Then, the medium was replaced with fresh medium containing free DOX, P123@DOX, or P123-ALN/DP-8@DOX at a final DOX concentration of 10 µg/mL and incubated for 48 h. The bones were removed and split in two, and the following experiments were performed: (1) Half of the bones were fixed with 2.5% glutaraldehyde for 2 h and dehydrated in a series of tert-butyl alcohol solutions of 50%, 70%, 95%, and 100% for 15 min each. The dehydrated bones were dried on a freeze-drying device and then gold-plated. Afterward, the tumor cells on the bone tissue were observed by SEM. (2) The other bones were incubated with DMEM supplemented with 70 μg/mL neutral red (NR) for 1 h under standard culture conditions. In this live bone culture approach, NR is rapidly taken up by osteoclasts [31]. The number of osteoclasts was quantified using an optical CH-2 microscope (Olympus, Tokyo, Japan). Osteoclasts could be readily distinguished from other bone cells and stroma because they take up NR and are large and multinucleated. After NR staining and microscopic examination, the bone resorption area within the same bone was quantified. The bones were washed in PBS, fixed in 2.5% glutaraldehyde for 12 h, and counterstained with 1% silver nitrate solution for 30 min in the dark. The extent of bone resorption was assessed under the microscope, where resorption regions were transparent to light. In vivo biodistribution analysis The breast cancer bone metastasis model was established by direct injection of MDA-MB-231 cells into the bone marrow cavity of the left tibia of mice [32][33][34]. Briefly, BALB/c-nu/nu mice were anesthetized with 10% (w/v) chloral hydrate. Then, MDA-MB-231 cells (1 × 10⁶ cells in 100 μL of PBS) were injected into the left tibia with a 23-gauge needle. On the fourteenth day after the MDA-MB-231 cells were implanted, the mice received 200 μL of saline, free DOX, P123@DOX or P123-ALN/DP-8@DOX micelles at a final DOX dose of 5 mg/kg body weight by tail vein injection. At predetermined time intervals of 2, 6, 12, and 24 h, the distribution of DOX in the tumor-bearing mice was observed using a whole-body imager (Shanghai Instrument Experiment Factory, Shanghai, China; λex = 480 nm; λem = 620 nm).
In vivo antitumor efficacy and safety A total of 24 tumor-bearing mice were randomly divided into four groups: saline, free DOX, P123@DOX, and P123-ALN/DP-8@DOX. The mice were injected with saline or with free DOX, P123@DOX, or P123-ALN/DP-8@DOX (DOX dose of 5 mg/kg, injection volume of 200 µL) via the tail vein once a week for 2 weeks. The general health of the mice was monitored daily during the experiment. The animals were weighed, and tumor length (L) and width (W) were measured with a Vernier caliper every 2 days. The tumor volume (V) was calculated as V = 0.5 × L × W². At the end of the experiment, plasma samples were collected from the eyeball vein for the determination of white blood cell (WBC) numbers and alanine aminotransferase (ALT) and creatinine (Cr) levels. The mice were sacrificed by cervical dislocation, and the liver, spleen, heart, kidneys, lungs, and left hind leg were dissected. Fluorescence of the excised organs was imaged using an IVIS Spectrum small animal imaging system (PerkinElmer, Waltham, MA, USA; λex = 480 nm; λem = 620 nm). The tissues were fixed with 10% formalin (Beyotime, Shanghai, China) for 48 h, embedded in paraffin, and cut into 5-μm sections. The sections were stained with a hematoxylin and eosin (H&E) staining kit (Abcam, Cambridge, MA, USA) and imaged under a light microscope (Leica, Wetzlar, Germany). Tumor tissue of the left hind limb was separated and weighed. The tumor growth inhibition rate (IRT) was calculated according to the following formula: IRT% = (tumor weight of the control group − tumor weight of the treated group)/(tumor weight of the control group) × 100%. Statistical analysis Statistical analysis was performed in GraphPad Prism (GraphPad Software, San Diego, CA, USA) and SPSS (Statistical Package for the Social Sciences; IBM, Chicago, IL, USA). Values are given as the mean ± standard deviation (SD). Student's t test was used for intergroup analysis. A p value of less than 0.05 was considered statistically significant. Results and Discussion Characterization of P123-ALN/DP-8@DOX micelles The particle size and size distribution of P123-ALN/DP-8@DOX micelles are shown in Fig. 1A. The diameter of the micelles was 122.97 ± 4.72 nm. P123-ALN/DP-8@DOX micelles were in a size range suitable for accumulation in tumor tissue through the enhanced permeability and retention effect [35]. The zeta potential of P123-ALN/DP-8@DOX micelles was −12.60 ± 1.90 mV, as shown in Fig. 1B. The negative surface charge of the micelles was attributed to the presence of ALN and DP-8 [36]. A TEM image of P123-ALN/DP-8@DOX is depicted in Fig. 1C, displaying micelles with a spherical morphology and a smooth surface. The micelles showed a high EE and DL capacity of 76.87% ± 9.72% and 3.44% ± 0.69%, respectively, and DOX was tightly wrapped in the hydrophobic core. In vitro DOX release The release of DOX from P123-ALN/DP-8@DOX was investigated in PBS containing 1% Tween-80 under simulated normal physiological conditions (pH = 7.4) and an acidic microenvironment (pH = 6.8 and 5.0) at 37°C. As shown in Fig.
1D, the rate and amount of DOX released from the micelles were pH-dependent, with faster DOX release at pH 5.0 and 6.8 than at pH 7.4. The cleavage of hydrogen bonds within the micelles in response to an acidic environment enables the rapid release of DOX. These environment-dependent release kinetics are favorable for a drug delivery system [37], as it is expected that the micelles will be stable in the blood circulation (pH 7.4). Once the micelles are internalized in the endolysosomes of cancer cells, DOX will be released from the micelles due to the acidic microenvironment of the endolysosome and diffuse through the cytoplasm to the nucleus [38]. Hydroxyapatite affinity assay High affinity for bone is essential for targeted drug delivery to metastatic bone tissue. There is ample evidence that ALN has a strong affinity for bone and is able to attach to hydroxyapatite binding sites on bone surfaces, especially bone surfaces undergoing remodeling due to the increased osteoclast activity in metastatic bone tissue [39][40][41][42]. To evaluate the binding capacity of P123-ALN/DP-8@DOX to bone, an HA affinity assay was carried out in vitro. As demonstrated in Fig. 1E, after 30 and 90 min of incubation, approximately 38% and 52% of P123-ALN/DP-8@DOX, respectively, was bound to HA. In contrast, after 90 min, 28% and 32% of free DOX and P123@DOX, respectively, were bound to HA. These results indicate that P123-ALN/DP-8@DOX is able to quickly target lytic bone metastases. In vitro uptake and intracellular localization The efficacy of drug delivery systems is related to their internalization by cells and the subsequent release of the drug payload. The internalization of P123-ALN/DP-8@DOX by MDA-MB-231 cells and its intracellular localization were investigated by confocal microscopy. The intrinsic fluorescence of DOX was used to image DOX in MDA-MB-231 cells after 2 h of incubation with free DOX, P123@DOX, and P123-ALN/DP-8@DOX (Fig. 2A). Cellular uptake occurred in a time-dependent manner, as the fluorescence intensity of DOX increased in all treatment groups when the incubation time increased from 0.5 to 2 h. MDA-MB-231 cells incubated with free DOX for 2 h exhibited very weak fluorescent clusters located mainly in the cytoplasm and, to a lesser extent, in the nucleus. The fluorescence intensity of DOX increased in cells treated with P123@DOX or P123-ALN/DP-8@DOX compared with cells treated with free DOX, with the strongest fluorescence intensity observed in cells treated with P123-ALN/DP-8@DOX. These results indicated that the addition of both P123 and DP-8 increased the cellular uptake of DOX. Our findings are in line with other studies [41][42][43]. In general, nanoparticles enter the lysosomes of cells through endocytosis [43,44]. After the cells were treated with P123-ALN/DP-8@DOX for 2 h, DOX fluorescence was almost completely localized in the nucleus of the cells. This finding confirmed that DOX was released from intracellular P123-ALN/DP-8@DOX micelles and had translocated to the nucleus. The cellular uptake of P123-ALN/DP-8@DOX micelles by MDA-MB-231 cells was further evaluated by flow cytometry, as shown in Fig. 2B. Consistent with the findings of confocal microscopy, the uptake of P123@DOX or P123-ALN/DP-8@DOX by MDA-MB-231 cells was significantly higher than that of free DOX. The highest uptake was observed in cells treated with P123-ALN/DP-8@DOX micelles.
In vitro cytotoxicity

Before transitioning to in vivo proof-of-concept studies, the cytotoxicity of P123-ALN/DP-8@DOX towards MDA-MB-231 cells was evaluated via MTT assay. As shown in Fig. 3A, there was no significant difference in the survival rate of cells among the free DOX, P123@DOX, and P123-ALN/DP-8@DOX groups when the DOX concentration was 0.08 µg/mL. When the maximum administered concentration was 40 µg/mL, the cell survival rates in the free DOX, P123@DOX, and P123-ALN/DP-8@DOX groups were 31%, 17%, and 20%, respectively. According to GraphPad Prism 9, the IC50 values of free DOX, P123@DOX, and P123-ALN/DP-8@DOX were 4.69, 0.839, and 0.989 µg/mL, respectively. Both P123@DOX and P123-ALN/DP-8@DOX showed more potent anticancer activity than free DOX. The micelles enhanced the cellular uptake of DOX and increased cytotoxicity, as verified by laser confocal microscopy and flow cytometry.
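The IC50 values above were obtained in GraphPad Prism 9; for readers who prefer code, a minimal dose-response fit looks like the following sketch. The four-parameter logistic model and the data points are assumptions for illustration, not the study's actual measurements.

```python
# Illustrative IC50 estimation by fitting a four-parameter logistic (4PL)
# dose-response curve to synthetic viability data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Viability (%) as a function of drug concentration (ug/mL)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.08, 0.4, 2.0, 10.0, 40.0])         # ug/mL, synthetic
viability = np.array([98.0, 85.0, 55.0, 35.0, 20.0])  # %, synthetic

params, _ = curve_fit(four_pl, conc, viability,
                      p0=[10.0, 100.0, 2.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {params[2]:.2f} ug/mL")
```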
Mode of cell apoptosis

The effect of P123-ALN/DP-8@DOX on MDA-MB-231 cell apoptosis was measured by the Annexin V-FITC/PI double staining method using flow cytometry. The proportions of cells in early apoptosis (AV+/PI−) and late apoptosis (AV+/PI+) as well as necrosis (AV−/PI+) were determined for blank cells, cells treated with free DOX, cells treated with P123@DOX, and cells treated with P123-ALN/DP-8@DOX (Fig. 3B). The survival rates of cells treated with free DOX, P123@DOX, and P123-ALN/DP-8@DOX were 86.6%, 61.6%, and 44.0%, respectively. The results were consistent with the cell viability data (Fig. 3A) and indicated that the enhanced micellar delivery of DOX potentiated chemotherapeutic efficacy in vitro. DOX has been known to mainly induce apoptosis [45]. This experiment confirmed that DOX induced apoptosis independent of the mode of administration, as evidenced by the similar pattern of cell death induced by free DOX and micellar DOX (Fig. 3B). However, there was relatively more necrotic cell death in cells treated with free DOX and P123-ALN/DP-8@DOX.

To verify the mechanism of apoptosis, changes in the apoptosis-related protein caspase-3 were analyzed by western blotting. The data are summarized in Fig. 3C. Compared with the expression of caspase-3 in free DOX-treated cells, the expression of caspase-3 protein in P123@DOX-treated cells was upregulated, and P123-ALN/DP-8@DOX further upregulated caspase-3 expression. The results were consistent with those observed by flow cytometry, suggesting that P123-ALN/DP-8@DOX induced the most apoptosis.

In vitro bone affinity

When free DOX, P123@DOX micelles, and P123-ALN/DP-8@DOX micelles were incubated with parietal bones, the adsorption of the micelles was visualized by SEM. Assessment of the surface morphology of the bone fibers revealed dense accumulation of P123-ALN/DP-8@DOX micelles on the surface of the bone fibers, while relatively few free DOX and P123@DOX micelles accumulated (Fig. 4). There was no accumulation on the surface of the bone fibers in the control group. These results indicate that ALN played an important role in the adsorption of the micelles on the parietal bone surface and that P123-ALN/DP-8@DOX could provide a means to effectively deliver drugs to bone.

Therapeutic efficacy and bone resorption in an in vitro 3D model of breast cancer bone metastasis

The parietal bones of Sprague-Dawley rats aged 5–7 days were incubated with MDA-MB-231 cells to establish an in vitro 3D model of breast cancer bone metastasis. Bone resorption and cancer cell adhesion on the parietal surface were observed by SEM. Representative SEM images of the bones are shown in Fig. 5A. In the control, the surface of the bones was smooth, and the bone fibers were arranged in an orderly manner. After coincubation with MDA-MB-231 cells for 2, 4, and 6 days, bone lacunae of different depths gradually appeared, and cancer cells attached to the fractured bone fibers around the bone lacunae, indicating that the 3D model of bone metastasis from breast cancer was successfully established.

After the 3D bone metastasis model was incubated with free DOX, P123@DOX, and P123-ALN/DP-8@DOX, the surface of the bones was observed using SEM to evaluate micelle adhesion, cancer growth, and bone resorption. As shown in Figs. 5B and 5C, bones incubated with P123-ALN/DP-8@DOX showed the most accumulation of micelles on the surface of the bone fibers and fewer bone lacunae, while the free DOX group and the P123@DOX group showed less accumulation on the surface of the bone fibers and had larger and deeper bone lacunae. Consistent with the bone-targeting validation of the MDA-MB-231 cell coincubation, there was more accumulation of P123-ALN/DP-8@DOX on the bone surfaces in the 3D bone metastasis model. This result indicated that, as a result of ALN, P123-ALN/DP-8@DOX attached to bone hydroxyapatite binding sites on the surface of remodeled, ossified cancer tissue. These results suggested that P123-ALN/DP-8@DOX could inhibit tumor growth and reduce bone resorption.

In the 3D model of breast cancer bone metastasis, the effect of free DOX, P123@DOX, and P123-ALN/DP-8@DOX on osteoclast activation is shown in Figs. 5D and 5E. The bones were stained with NR to detect osteoclasts. In the free DOX group, there were multiple multinucleated osteoclasts in the bones. The number of osteoclasts was markedly decreased in the P123-ALN/DP-8@DOX group compared to the free DOX group.

The effects of free DOX, P123@DOX, and P123-ALN/DP-8@DOX on bone resorption are shown in Figs. 5F and 5G. The bones were counterstained with silver nitrate to provide a global view of the bone resorption regions, whereby nonresorbed areas are black and resorbed areas are white. Compared with that in the free DOX group, the amount of bone resorption was markedly reduced in the P123-ALN/DP-8@DOX group. The results complemented those observed by SEM, suggesting that P123-ALN/DP-8@DOX had a protective effect on bone tissue.

In vivo tumor targeting and biodistribution

Research on targeting and in vivo biodistribution is essential to evaluate the safety and potential efficacy of drug delivery systems [46]. To further verify the targeting ability of P123-ALN/DP-8@DOX micelles, we used an in vivo imaging system to observe the distribution of DOX in vivo.
As demonstrated in Fig. 6A, after the injection of free DOX into tumor-bearing BALB/c nude mice, red DOX fluorescence was distributed throughout the body without obvious accumulation in the tumor area. In contrast, at 12 and 24 h after the intravenous injection of P123@DOX micelles, red DOX fluorescence accumulated in the tumor region due to the enhanced permeability and retention effect. The most profound red DOX fluorescence in the tumor region was observed in mice treated with P123-ALN/DP-8@DOX micelles. These results confirm the utility of conjugating ALN and DP-8 to P123 micelles to target breast cancer bone metastasis.

In vivo therapeutic efficacy and safety of P123-ALN/DP-8@DOX micelles

The potential antitumor effect of P123-ALN/DP-8@DOX micelles was evaluated in a bone metastasis model of female BALB/c nude mice inoculated with MDA-MB-231 cells. Fourteen days after inoculation, the mice received the first intravenous injection of treatment, followed by the second intravenous injection 7 days later. After the treatment, tissues and organs were harvested for fluorescence imaging. None of the intervention groups showed DOX accumulation, as shown in Fig. 6B.

Therapeutic efficacy was further evaluated by monitoring body weight and tumor volume, which are known to be negatively affected by the growth of MDA-MB-231 xenografts in mice [47–49]. As illustrated in Figs. 6C–6E, the body weight of the mice in all groups slightly increased during the first 14 days after inoculation. After the start of treatment, the body weight in the control group rapidly increased over the last 14 days, with the tumor volume increasing proportionally. In contrast, the treatment groups progressively lost weight over the last 14 days. Compared to mice treated with free DOX and P123@DOX micelles, mice treated with P123-ALN/DP-8@DOX micelles exhibited the least deterioration in body weight and the greatest reduction in tumor volume and tumor weight.

To detect possible toxicities of the treatments, routine blood analysis and histological studies were performed. The number of WBCs and the levels of ALT and Cr were evaluated to investigate bone marrow suppression and liver and kidney damage, as shown in Fig. 6F. Compared to the control mice, the mice treated with free DOX showed lower WBC numbers and Cr levels and higher ALT levels. Mice treated with P123@DOX showed the most signs of bone marrow suppression and liver and kidney damage. Compared to the P123@DOX group, the P123-ALN/DP-8@DOX group exhibited fairly normal WBC, ALT, and Cr values, indicating that P123-ALN/DP-8@DOX caused less damage to bone marrow, liver, and kidney function than free DOX. Histological analysis of the major organs (heart, lungs, liver, kidney, spleen, bone) and tumor tissue sections after completion of the therapeutic regimens was carried out to further assess the therapeutic effect of P123-ALN/DP-8@DOX, and the results are shown in Fig. 7.
The morphologies of the major organs were normal, and no areas of acute or chronic inflammation, apoptosis, or necrosis were found in any of the four groups of animals, suggesting that the interventions did not cause adverse effects. Sections of the left tibia showed the largest tumor areas in the control group. Compared to the control group, the tumor sites in the treatment groups were smaller and exhibited increased collagen fibers between the cancer cells, more disorderly arranged cancer cells, and cytoplasmic vacuolization. The smallest tumors were found in the P123-ALN/DP-8@DOX micelle group. The bone of the mice in the control group showed varying degrees of deformation. This deformation was less pronounced in the other groups, especially in the P123-ALN/DP-8@DOX group. Mice treated with free DOX, P123@DOX, and P123-ALN/DP-8@DOX showed bone marrow hyperplasia, with the bone marrow of the mice treated with P123-ALN/DP-8@DOX showing the fewest changes compared to that of the mice treated with free DOX and P123@DOX. These data imply that P123-ALN/DP-8@DOX induced tumor cell apoptosis and protected the bone tissue from damage.

Discussion

Bone metastasis is one of the most common complications of malignant tumors. It is a complex and multistep process that usually arises from a series of dynamic interactions between tumor cells and host cells, causing tumor cells to leave the primary region and generate distal lesions. Since most patients with bone metastases are in the advanced stages of cancer, treatment is mainly palliative. In 1986, Pierce first proposed the concept of "bone targeting", that is, compound molecules that have the ability to selectively bind to bone calcium [50]. This proposal has attracted the attention of many scholars, resulting in new methods for the treatment of bone diseases, such as osteotropic drug delivery systems (ODDSs). Based on this idea, we designed novel bone-targeting micelles (P123-ALN/DP-8@DOX) that use a dual ligand to deliver the traditional antitumor drug doxorubicin to the bone metastasis site to inhibit bone resorption and the proliferation of tumor cells. The first ligand, ALN, can target osteoclasts at the metastatic site to inhibit bone resorption. Since the strong adsorption between osteoclasts and ALN hinders the further delivery of nanoparticles from the bone matrix to the tumor cells, the second ligand, DP-8 (DMPGTVLP), is added; together, the two ligands concentrate the nanoparticles at the tumor cells through a synergistic effect to further inhibit tumor cell proliferation. The DMPGTVLP sequence of the second ligand is fused to the N-terminal protein (DP-8) of the p8 phage, which has been shown to be immunogenic. Importantly, this means that P123-ALN/DP-8@DOX micelles may pose a potential risk of immunogenicity when used as a therapeutic agent. The ideal bone-targeting agent should have a high bone affinity and be able to release the drug at the tumor site. We studied the drug release properties of P123-ALN/DP-8@DOX in different pH media. The results showed that release from the prepared nanoparticles was pH-dependent. Compared with the pH 7.4 release medium, a large amount of DOX could be released in the acidic environments of pH 5.0 and 6.8 under the same conditions, indicating that P123-ALN/DP-8@DOX can exist stably in the blood microenvironment. Once it enters tumor cells, DOX can be quickly released to kill them.
Confocal microscopy and flow cytometry were used to investigate the qualitative and quantitative uptake of P123-ALN/DP-8@DOX by MDA-MB-231 cells. The results showed that P123-ALN/DP-8@DOX delivered more DOX into the cells than free DOX and P123@DOX, with a significant time-dependent relationship. Moreover, P123-ALN/DP-8@DOX has a stronger lysosomal escape ability than P123@DOX, which enables more DOX to enter the nucleus and enhances the fluorescence intensity in the nucleus; this may be attributable to the targeting peptide ligand DP-8. The results were consistent with those of the cytotoxicity test.

Annexin V-FITC/PI double staining was used to investigate the apoptotic effect of P123-ALN/DP-8@DOX on MDA-MB-231 cells. The results showed that the apoptosis rates of P123@DOX and P123-ALN/DP-8@DOX were higher than that of the free DOX group, and the overall apoptosis level was upregulated. The total apoptosis rate of P123-ALN/DP-8@DOX was slightly lower than that of P123@DOX because part of the late apoptosis appeared in the Q1 region. Although the overall apoptosis rate was decreased, the results showed that P123-ALN/DP-8@DOX had a significantly stronger effect on promoting apoptosis than free DOX and P123@DOX. At the same time, western blot analysis also proved that P123-ALN/DP-8@DOX could induce an increase in the caspase-3 protein content during apoptosis. Whether the mechanism of apoptosis involves the exogenous caspase-8 pathway or the endogenous caspase-9 pathway needs to be further explored.

MDA-MB-231 cells are the cells most commonly used to study the mechanism of bone metastasis of cancer, and their role in bone tissue is mainly to disrupt the balance between osteoblasts and osteoclasts. Once the tumor erodes into bone, factors that promote osteoclast differentiation are produced, such as interleukin-6 (IL-6), IL-1, prostaglandins, and colony-stimulating factors (CSFs), leading to destruction of the bone matrix. In this study, MDA-MB-231 cells were cocultured with calvarial bone to simulate the mechanism of cancer-bone interaction under physiological conditions and establish a 3D bone metastasis model. Compared with the control group, the cancer cells colonized the bone surface, and an osteolytic phenomenon occurred; bone depressions of different depths and broken bone fibers were observed everywhere. Free DOX, P123@DOX, and P123-ALN/DP-8@DOX were coincubated with the calvarial bone, and P123-ALN/DP-8@DOX adhered to the surface of the calvarial bone in large quantities. Compared with free DOX and P123@DOX, P123-ALN/DP-8@DOX resulted in fewer osteoclasts, fewer bone depressions, and a smaller bone resorption area, suggesting that P123-ALN/DP-8@DOX reduced bone resorption by inhibiting the proliferation of MDA-MB-231 cells and reducing the number of osteoclasts.
After the administration of free DOX, P123@DOX, and P123-ALN/DP-8@DOX to tumor-bearing nude mice, a large amount of P123-ALN/DP-8@DOX accumulated at the bone tumor site over time, while free DOX was distributed throughout the whole body. During the treatment period, P123-ALN/DP-8@DOX significantly reduced the tumor volume and weight of the tumor-bearing nude mice and significantly prolonged their life span.

Conclusion

P123-ALN/DP-8@DOX has good release properties, shows obvious cytotoxicity to MDA-MB-231 cells, enhances uptake by tumor cells, induces apoptosis, inhibits osteoclast activity, and reduces the bone resorption area. At the same time, tumor growth was significantly inhibited in tumor-bearing nude mice.

FIGURE 2. Uptake and intracellular distribution of DOX and DOX-containing micelles in MDA-MB-231 cells. (A) Cells were incubated with free DOX, P123@DOX, and P123-ALN/DP-8@DOX at a final DOX (red) concentration of 10 µg/mL for 0.5 and 2 h. DAPI was used to stain the cell nucleus. Cells were imaged by confocal microscopy. Images were edited in Adobe Photoshop for hue, lightness, saturation, and contrast in a clustered manner to preserve relative differences in the staining pattern while accentuating the red fluorescence (DOX). (B) Association of DOX, P123@DOX, and P123-ALN/DP-8@DOX with MDA-MB-231 cells as quantified by flow cytometry. Data are expressed as the mean ± SD of DOX fluorescence intensity (n = 3).

FIGURE 3. In vitro cytotoxicity. (A) Cell viability of MDA-MB-231 cells after incubation with free DOX, P123@DOX, and P123-ALN/DP-8@DOX for 48 h, measured with the MTT colorimetric assay. Data were normalized to the average value of the control (untreated) cells. Data are expressed as the mean ± SD (n = 6), **p < 0.01 vs. control. (B) The mode of cell death was analyzed by flow cytometry after staining the cells with FITC-conjugated AV and PI. The proportions of cells undergoing early apoptosis (AV+/PI−), late-stage apoptosis (AV+/PI+), and necrosis (AV−/PI+) are shown in the lower right quadrant (Q4), upper right quadrant (Q2), and upper left quadrant (Q1), respectively. (C) Protein expression of caspase-3 in MDA-MB-231 cells.

FIGURE 5. In vitro 3D model of breast cancer bone metastasis. (A) Parietal bones from 5- to 7-day-old Sprague-Dawley rats were incubated with MDA-MB-231 cells. Tumor cells and bone lacunae appeared on the surface of the bones after different numbers of days. The bones were imaged by scanning electron microscopy (SEM). (B) Bone surfaces and (C) bone lacuna number. (D) Observation of osteoclasts and (E) osteoclast number. (F) Bone resorption areas after incubation with free DOX, P123@DOX, and P123-ALN/DP-8@DOX for 48 h. Osteoclasts were stained with neutral red and are indicated with yellow arrows. The bone resorption areas were counterstained with silver nitrate and are indicated with green arrows. (G) Bone resorption (%). Data are expressed as the mean ± SD (n = 3), *p < 0.05, **p < 0.01.
FIGURE 6. In vivo biodistribution and therapeutic efficacy in a murine breast cancer bone metastasis model. Saline (control), free DOX, P123@DOX micelles, and P123-ALN/DP-8@DOX micelles were intravenously administered on days 14 and 21 (DOX dosage of 5 mg/kg). (A) Representative whole-body fluorescence images of tumor-bearing mice 2, 6, 12, and 24 h after intravenous injection of the treatments. (B) Representative fluorescence images of the organs at the end of the experiment. (C) Body weight of the mice as a function of time after intratibial MDA-MB-231 cell inoculation and therapy. (D) Tumor volume. (E) Tumor weight at the end of the experiment. (F) Blood chemistry at the end of the experiment. Data are expressed as the mean ± SD (n = 6). *p < 0.05, **p < 0.01.

FIGURE 7. Representative micrographs of histology (H&E staining) in organs and tumor sites (left leg) from breast cancer xenograft model mice following administration of saline, DOX, P123@DOX, and P123-ALN/DP-8@DOX. For all treatments, the DOX concentration was the same (5 mg/kg). None of the collected organs showed acute or chronic inflammation or apoptotic or necrotic regions. In all groups, due to tumor growth, the bone showed varying degrees of deformation, with the bone in the P123-ALN/DP-8@DOX group being the least affected. In the free and micellar DOX groups, the bone marrow showed hyperplasia, with the bone marrow in the P123-ALN/DP-8@DOX group being the least affected. Scale bar = 50 μm.
Efficient Precoding and Power Allocation Techniques for Maximizing Spectral Efficiency in Beamspace MIMO-NOMA Systems

Beamspace MIMO-NOMA is an effective way to improve spectral efficiency. This paper focuses on a downlink non-orthogonal multiple access (NOMA) transmission scheme for a beamspace multiple-input multiple-output (MIMO) system. To increase the sum rate, we jointly optimize precoding and power allocation, which presents a non-convex problem. To overcome this difficulty, we employ an alternating algorithm to optimize the precoding and power allocation. Regarding the precoding subproblem, we demonstrate that the original optimization problem can be transformed into an unconstrained optimization problem. Drawing inspiration from fractional programming (FP), we reconstruct the problem and derive a closed-form expression for the optimization variable. In addition, we effectively reduce the complexity of precoding by utilizing the Neumann series expansion (NSE). For the power allocation subproblem, we adopt a dynamic power allocation scheme that considers both intra-beam power optimization and inter-beam power optimization. Simulation results show that the energy efficiency of the proposed beamspace MIMO-NOMA scheme is significantly better than that of other conventional schemes.

Introduction

As the coverage of mobile connections expands, wireless communication systems face an escalating demand for data traffic, which poses challenges in terms of spectral efficiency and energy efficiency. Non-orthogonal multiple access (NOMA) technology has emerged as a key solution for improving spectral efficiency and supporting massive numbers of links, as it enables multiple users to share the same spectrum resource simultaneously. The application of NOMA in conventional terrestrial communication systems, benefiting from its superior spectral efficiency and capacity to accommodate massive connectivity, has been thoroughly investigated in many respects [1]. Additionally, beamspace multiple-input multiple-output (beamspace MIMO), another key technology, has several advantages. It leverages the abundant spectrum resources in the millimeter-wave band, enabling terminal equipment to achieve high-rate data transmission. Furthermore, by employing large-scale MIMO, beamspace MIMO forms directional beams with high gain, effectively mitigating the substantial signal path loss inherent in millimeter-wave communication. Consequently, beamspace MIMO is recognized as a promising technology for future wireless communications [2]. Does this mean we can combine the NOMA and beamspace MIMO technologies to effectively leverage their advantages in the power and spatial domains, leading to improved spectral efficiency? The answer is affirmative. Specifically, considering the characteristics of these two technologies, beamspace MIMO requires a large number of radio frequency (RF) chains, which leads to high energy consumption and renders the all-digital structure unsuitable for direct application [3]. Moreover, in beamspace MIMO, the number of supported users cannot exceed the number of RF chains, thereby limiting the system's capacity to accommodate users. However, NOMA excels at increasing the number of users the system can admit. Consequently, the integration of NOMA with mmWave massive MIMO, known as beamspace MIMO-NOMA, has emerged as a promising solution for significantly increasing the number of connections and further enhancing spectral efficiency. This approach has garnered growing research interest [4].
Prior Works

Typically, the optimization of the precoding and power allocation designs is considered a means to improve the performance of beamspace MIMO-NOMA systems. These problems have been investigated jointly or separately. However, the presence of inter-beam and intra-beam interference makes these problems non-convex and challenging to solve [5]. Fortunately, researchers have developed efficient algorithms to tackle these challenges.

Some works have focused on separately designing the precoding or the power allocation to improve performance. In [6], the authors propose a ZF precoding scheme to mitigate interference between users and employ the Karush-Kuhn-Tucker (KKT) conditions to investigate the power allocation problem for maximizing the sum rate. Furthermore, [7] explores energy efficiency maximization through power allocation and presents a two-layer iterative algorithm to tackle the non-convex optimization problem: the outer layer converts the original fractional objective function by using the Dinkelbach method, while the inner layer utilizes alternating optimization to solve the transformed problem. In [8], the authors introduce a low-complexity iterative algorithm called the mean square error-based dynamic power allocation algorithm (MSE-DPA), which achieves near-optimal performance. Ref. [9] proposes a correlation-based criterion for user pairing to reduce inter-user interference, with ZF precoding applied to the paired users; the results demonstrate that the proposed scheme achieves higher spectral efficiency than the conventional scheme. In [10], the main objective is to design a low-complexity hybrid precoder (HP), for which the authors propose a symmetric successive over-relaxation (SSOR) algorithm combined with complex regularized zero-forcing (CRZF) linear precoding.

In addition, a significant portion of the work focuses on jointly optimizing precoding and power allocation to enhance system performance. In [11], the authors adopt a ZF-based precoding scheme to mitigate inter-beam interference and propose a dynamic power allocation method based on the minimum mean square error (MMSE) to maximize the achievable sum rate in beamspace MIMO-NOMA systems.
Ref. [12] addresses the limitations of complicated successive interference cancellation (SIC) that were disregarded in [11]. Based on the ZF beamforming technique, the power allocation optimization problem is represented as a fractional programming (FP) problem, which is transformed into a convex optimization problem using sequential convex approximation (SCA) and second-order cone (SOC) transformations. In [13], the authors formulate a joint hybrid beamforming and power allocation problem to maximize the sum rate. They employ the approximate ZF method to design the digital beamforming for minimizing inter-group interference and solve the analog beamforming problem, with its constant-modulus constraint, using a proposed boundary-compressed particle swarm optimization algorithm. In [14], the authors design ZF precoding matrices and evaluate power allocation coefficients based on the optimal spectral efficiency to mitigate intra-beam interference. Additionally, they derive a tight closed-form formula for the optimal spectral efficiency using KKT analysis. In [15], from the perspective of spectral efficiency, the authors propose a joint optimization framework and employ the quadratic transform (QT) method to convert the non-convex power allocation problem into a convex problem. They also design an iterative approach to obtain the optimal power allocation and digital beamforming. In [16], the authors propose a hybrid precoder that combines user channel alignment and the ZF algorithm to enhance the SINR. Furthermore, they address the non-convex optimization problem by transforming it into a convex inter-cluster power allocation problem, which can be solved by using the KKT conditions.

Motivations and Contributions

While the aforementioned research contributions have established a strong foundation for beamspace MIMO-NOMA, further investigation and improvements are still necessary to address practical considerations. Firstly, there is scope for enhancing the optimization of the key performance indicators that impact spectral efficiency through various methodologies. Secondly, research needs to focus on reducing computational complexity while simultaneously improving spectral efficiency, which remains an open area of exploration. These observations have inspired the primary research objectives of this study. In this work, our main goal is to maximize the sum rate of beamspace MIMO-NOMA in downlink communications and to propose an optimal design scheme for joint precoding and power allocation, building upon the previous research. Against this backdrop, we emphasize the following four aspects that constitute the contributions of our paper:

• Firstly, we employ block optimization to handle the joint problem of precoding and power allocation in beamspace MIMO-NOMA systems. In the precoding optimization part, we demonstrate that the original constrained problem can be transformed into an unconstrained problem; moreover, we elucidate the quantitative relationship between the solutions of the original problem and the equivalent unconstrained problem. For the power allocation part, we adopt a dynamic power allocation method based on a joint power optimization problem, taking into account power optimization both within and between beams.

• Secondly, we devise a precoding scheme based on FP to decouple the optimization variables, effectively transforming the unconstrained problem into three equivalent subproblems. Subsequently, we derive closed-form expressions for the optimization variables.
• Thirdly, as the number of antennas at the BS and the number of users accessing the system increase, the hardware and signal processing complexity also escalate. Since the precoding optimization algorithm involves complex matrix inversion operations, its computational complexity is $\mathcal{O}(N_{RF}^3)$, which grows cubically with the number of RF chains. To mitigate this complexity, we utilize the Neumann series expansion (NSE) method to approximate the exact matrix inverse by expanding the lower-order terms, thereby reducing the complexity of the matrix inversion operation to $\mathcal{O}(N_{RF}^2)$.

• Finally, we validated the performance of the proposed scheme through simulation. The results demonstrated that the algorithm significantly improves spectral efficiency. Furthermore, the simulation results confirmed that the proposed precoding and power allocation scheme outperforms the benchmark methods.

Organization and Notations

The remainder of the paper is organized as follows. Section 2 outlines the system model of beamspace MIMO-NOMA. Based on this model, Section 3 formulates the maximum sum rate problem and introduces the proposed algorithm. Section 4 presents the simulation results to evaluate the performance. Finally, Section 5 concludes the paper.

Notation: $\mathbb{C}$ denotes the set of complex numbers, and $\mathrm{Re}\{\cdot\}$ the real part. An overline denotes the complex conjugate. Bold lower-case letters denote vectors, bold upper-case letters denote matrices, and calligraphic upper-case letters denote sets. $\mathbf{I}_n$ denotes the identity matrix of dimension $n$. $(\cdot)^T$, $(\cdot)^H$, $(\cdot)^{-1}$, and $\|\cdot\|_F$ denote the transpose, Hermitian transpose, inversion, and Frobenius norm operations, respectively.

System Model and Problem Formulation

In this section, we first review the beamspace MIMO system model, followed by a detailed description of the beamspace MIMO-NOMA system model.

System Model of Beamspace MIMO

As illustrated in Figure 1, the system depicted represents a single-cell downlink mmWave MIMO communication system. The BS is equipped with $N$ antennas and $N_{RF}$ RF chains, serving $K$ randomly distributed single-antenna users simultaneously [17], employing the usual uniform linear array (ULA) structure and utilizing a well-designed lens antenna array at the BS. The received signal vector $\mathbf{y} = [y_1, y_2, \ldots, y_K]^T$ is represented as:

$$\mathbf{y} = \mathbf{H}^H \mathbf{W} \mathbf{P} \mathbf{s} + \mathbf{n},$$

where $\mathbf{s} = [s_1, s_2, \ldots, s_K]^T \in \mathbb{C}^{K\times 1}$ represents the transmitted signal vector for all $K$ users, satisfying $\mathbb{E}[\mathbf{s}\mathbf{s}^H] = \mathbf{I}_K$; $\mathbf{P} = \mathrm{diag}(\sqrt{p_1}, \sqrt{p_2}, \ldots, \sqrt{p_K})$ is the diagonal power allocation matrix; $\mathbf{W} = [\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_K]$ is the precoding matrix; and $\mathbf{H} = [\mathbf{h}_1, \mathbf{h}_2, \ldots, \mathbf{h}_K]$ is the Rayleigh fading channel matrix, where $\mathbf{h}_k \in \mathbb{C}^{N\times 1}$ denotes the channel vector between the BS and the $k$th user. In addition, $\mathbf{n}$ is the noise vector that follows the distribution $\mathcal{CN}(\mathbf{0}, \sigma^2 \mathbf{I}_K)$. We consider the widely used Saleh-Valenzuela channel model for mmWave communications; $\mathbf{h}_k$ can be represented as

$$\mathbf{h}_k = \beta_{k,0}\,\mathbf{a}(\theta_{k,0}) + \sum_{l=1}^{L} \beta_{k,l}\,\mathbf{a}(\theta_{k,l}),$$

where $\beta_{k,0}$ denotes the LoS complex path gain, while $\mathbf{a}(\theta_{k,0})$ represents the array steering vector for the LoS path; similarly, $\beta_{k,l}$ and $\mathbf{a}(\theta_{k,l})$ denote the complex gain and steering vector for the $l$th NLoS path, respectively. Furthermore, $L$ denotes the number of NLoS paths.
For the typical ULA, the array steering vector $\mathbf{a}(\varphi)$ can be expressed as follows [18]:

$$\mathbf{a}(\varphi) = \frac{1}{\sqrt{N}}\left[e^{-j2\pi\varphi m}\right]_{m\in\mathcal{I}(N)}, \quad \mathcal{I}(N) = \left\{ l - \frac{N-1}{2},\; l = 0, 1, \ldots, N-1 \right\},$$

and the lens antenna array acts as a spatial discrete Fourier transform matrix $\mathbf{U} = [\mathbf{a}(\bar{\varphi}_1), \mathbf{a}(\bar{\varphi}_2), \ldots, \mathbf{a}(\bar{\varphi}_N)]^H$, where $\bar{\varphi}_n = \frac{1}{N}\left(n - \frac{N+1}{2}\right)$ for $n = 1, 2, \ldots, N$ are the predefined spatial directions. Then, the received signal vector $\tilde{\mathbf{y}}$ in the beamspace MIMO system is given by

$$\tilde{\mathbf{y}} = \tilde{\mathbf{H}}^H \mathbf{W} \mathbf{P} \mathbf{s} + \mathbf{n},$$

where $\tilde{\mathbf{H}} = \mathbf{U}\mathbf{H}$ is the beamspace channel matrix. We employ the classic maximum-magnitude-based beam selection method to choose a subset of the $N$ orthogonal beams to serve all $K$ users without obvious performance loss [19]. Consequently, the number of RF chains is reduced from $N$ to $N_{RF}$. Thus, the received signal can be written as

$$\tilde{\mathbf{y}}_r = \tilde{\mathbf{H}}_r^H \mathbf{W}_r \mathbf{P} \mathbf{s} + \mathbf{n},$$

where $\tilde{\mathbf{H}}_r = \tilde{\mathbf{H}}(m,:),\ m \in \mathcal{M}$, is the dimension-reduced beamspace channel matrix of size $|\mathcal{M}| \times K$, and $\mathcal{M}$ is the index set of the selected beams. It is important to note that in this system one RF chain generates one beam, so the number of selected beams $|\mathcal{M}|$ equals the number of RF chains $N_{RF}$ [11]. In addition, the dimension-reduced digital precoding matrix $\mathbf{W}_r$ has size $|\mathcal{M}| \times K$. Since $\mathbf{W}_r$ has a smaller row dimension than the original precoding matrix $\mathbf{W}$, the number of required RF chains can be significantly reduced [20]. Notwithstanding, reducing the number of RF chains also presents the challenge of limited connections. To overcome this fundamental limit, a novel transmission scheme known as beamspace MIMO-NOMA, which combines the concept of NOMA with beamspace MIMO, has been proposed. By incorporating NOMA into beamspace MIMO systems, both spectral efficiency and connection density can be further enhanced [6].
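As a concrete illustration of the channel model, the beamspace transform, and the maximum-magnitude beam selection described above, the following Python sketch puts the pieces together. The random path gains and spatial directions are synthetic assumptions for demonstration, not the simulation settings used later in this paper.

```python
# Minimal sketch: ULA steering vectors, Saleh-Valenzuela channels, the DFT-style
# beamspace transform U, and magnitude-based beam selection. All randomness is
# an illustrative assumption.
import numpy as np

N, K, L = 256, 32, 2                      # antennas, users, NLoS paths (assumed)
rng = np.random.default_rng(0)

def steering(phi: float, n_ant: int) -> np.ndarray:
    """ULA steering vector a(phi) = (1/sqrt(N)) e^{-j 2 pi phi m}, m in I(N)."""
    m = np.arange(n_ant) - (n_ant - 1) / 2.0
    return np.exp(-1j * 2 * np.pi * phi * m) / np.sqrt(n_ant)

# Saleh-Valenzuela channel per user: one LoS plus L NLoS components.
H = np.zeros((N, K), dtype=complex)
for k in range(K):
    h = rng.standard_normal() * steering(rng.uniform(-0.5, 0.5), N)  # LoS (assumed gain)
    for _ in range(L):
        beta = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        h = h + beta * steering(rng.uniform(-0.5, 0.5), N)
    H[:, k] = h

# Beamspace transform: rows of U are a(phi_bar_n)^H at the predefined directions.
phi_bar = (np.arange(1, N + 1) - (N + 1) / 2.0) / N
U = np.stack([steering(p, N) for p in phi_bar]).conj()
H_beam = U @ H                            # beamspace channel

# Maximum-magnitude beam selection: keep the strongest beam of each user.
selected = sorted({int(np.argmax(np.abs(H_beam[:, k]))) for k in range(K)})
H_r = H_beam[selected, :]                 # dimension-reduced beamspace channel
print(len(selected), H_r.shape)
```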
System Model of Beamspace MIMO-NOMA

As shown in Figure 2, this is a typical beamspace MIMO-NOMA wireless communication system. We consider that there are $N_{RF}$ groups assigned to provide service, and we denote by $\mathcal{S}_m$ the set of users served by the $m$th beam, with $\mathcal{S}_m \cap \mathcal{S}_n = \varnothing$ for $m \neq n$ and $\bigcup_{m=1}^{N_{RF}} \mathcal{S}_m = \{1, 2, \ldots, K\}$. The received signal $\hat{y}_{m,n}$ of the $n$th user in the $m$th beam can be expressed as follows:

$$\hat{y}_{m,n} = \bar{\mathbf{h}}_{m,n}^H \mathbf{w}_m \sqrt{p_{m,n}}\, s_{m,n} + \bar{\mathbf{h}}_{m,n}^H \sum_{(j,i) \neq (m,n)} \mathbf{w}_j \sqrt{p_{j,i}}\, s_{j,i} + v_{m,n}, \qquad (7)$$

where $\bar{\mathbf{h}}_{m,n}$ denotes the dimension-reduced beamspace channel vector of the $n$th user in the $m$th beam, $s_{m,n}$ is the transmitted signal for the $n$th user in the $m$th beam with normalized power, $p_{m,n}$ is the corresponding transmitted power, $\mathbf{w}_m = \mathbf{W}_r(:, m)$ represents the digital precoding vector of the $m$th beam, and $v_{m,n} \sim \mathcal{CN}(0, \sigma^2)$ refers to the noise. Based on the principle of NOMA, intra-beam interference can be mitigated by utilizing SIC. Suppose that, within the same beam, the $i$th user can sequentially detect the $j$th user's signal (for all $j > i$) and remove the detected signals from its received signal [21]. In the $m$th beam, after employing SIC, the remaining received signal of the $n$th user can be expressed as follows:

$$\hat{y}_{m,n} = \bar{\mathbf{h}}_{m,n}^H \mathbf{w}_m \sqrt{p_{m,n}}\, s_{m,n} + \bar{\mathbf{h}}_{m,n}^H \mathbf{w}_m \sum_{i=1}^{n-1} \sqrt{p_{m,i}}\, s_{m,i} + \bar{\mathbf{h}}_{m,n}^H \sum_{j \neq m} \mathbf{w}_j \sum_{i \in \mathcal{S}_j} \sqrt{p_{j,i}}\, s_{j,i} + v_{m,n}. \qquad (8)$$
Therefore, the signal-to-interference-plus-noise ratio (SINR) at the $n$th user in the $m$th beam can be expressed as follows:

$$\gamma_{m,n} = \frac{p_{m,n}\left|\bar{\mathbf{h}}_{m,n}^H \mathbf{w}_m\right|^2}{\xi_{m,n}}, \qquad (9)$$

where

$$\xi_{m,n} = \left|\bar{\mathbf{h}}_{m,n}^H \mathbf{w}_m\right|^2 \sum_{i=1}^{n-1} p_{m,i} + \sum_{j\neq m}\left|\bar{\mathbf{h}}_{m,n}^H \mathbf{w}_j\right|^2 \sum_{i\in\mathcal{S}_j} p_{j,i} + \sigma^2.$$

Hence, the corresponding achievable rate can be expressed as follows:

$$R_{m,n} = \log_2\left(1+\gamma_{m,n}\right). \qquad (10)$$

Consequently, the overall achievable sum rate of the beamspace MIMO-NOMA scheme is:

$$R_{\mathrm{sum}} = \sum_{m=1}^{N_{RF}}\sum_{n\in\mathcal{S}_m} R_{m,n}. \qquad (11)$$

Indeed, precoding optimization helps mitigate inter-beam interference, but intra-beam interference endures within beamspace MIMO-NOMA systems. Power allocation effectively mitigates this intra-beam interference, thus enhancing overall system performance. It is noteworthy that expressions (9)-(11) illustrate the substantial influence of the power allocation parameters $\{p_{m,n}\}$ and precoding vectors $\{\mathbf{w}_m\}$ on maximizing the sum rate. Thus, the system performance can be further enhanced through the meticulous design of the precoding and power allocation strategies. Jointly optimizing precoding and power allocation is pivotal for maximizing overall system performance. While this may add complexity, thoughtful design and analysis allow for performance improvements without imposing significantly higher computational demands. In the following section, we explore these ideas in greater detail.

Alternating Optimization of Beam-Specific Digital Precoding and Power Allocation

In this section, we begin by formulating the optimization problem. Next, we present an alternating optimization method to obtain the solution for the beam-specific digital precoding. Finally, we maximize the achievable sum rate by solving the joint power optimization problem using a dynamic power allocation scheme.

Problem Formulation

Our objective is to maximize the achievable sum rate by jointly optimizing the beam-specific digital precoding and power allocation, while adhering to the maximum transmit power constraint of the BS. The optimization problem can be formulated as follows:

$$\mathcal{P}_1:\ \max_{\{\mathbf{w}_m\},\{p_{m,n}\}}\ \sum_{m=1}^{N_{RF}}\sum_{n\in\mathcal{S}_m} R_{m,n} \quad \mathrm{s.t.}\ \sum_{m=1}^{N_{RF}}\sum_{n\in\mathcal{S}_m} p_{m,n}\left\|\mathbf{w}_m\right\|^2 \leq P,\quad p_{m,n} \geq 0,$$

where $P$ denotes the maximum transmit power of the BS. Obviously, three problems need to be addressed to optimize $\mathcal{P}_1$. As shown in (9), the presence of both intra-beam interference and inter-beam interference in the system means that the optimization variables $\{p_{m,n}\}$ and $\{\mathbf{w}_m\}$ appear in both the numerator and the denominator of $\gamma_{m,n}$. Consequently, the problem is a non-convex optimization problem that is difficult to solve directly. Furthermore, it is highly nonlinear. Additionally, the optimization of the precoding $\{\mathbf{w}_m\}$ is performed at the beam level, while the optimization of the power allocation $\{p_{m,n}\}$ is carried out at the user level. This implies that the two sets of variables are difficult to optimize simultaneously.

To tackle the complexity of the original problem $\mathcal{P}_1$, we decompose it into two subproblems, $\mathcal{P}_{\mathrm{beam}}$ and $\mathcal{P}_{\mathrm{power}}$, for optimization. For the subproblem $\mathcal{P}_{\mathrm{beam}}$, we first convert the constrained optimization problem into an unconstrained optimization problem. Then, we employ the FP algorithm to handle the NP-hard problem, leading to the derivation of a closed-form expression for the precoding $\mathbf{W}$. Additionally, we leverage the NSE to reduce the complexity of the precoding process. As for the power allocation subproblem, we utilize a dynamic power allocation scheme to obtain a closed-form expression for the power distribution, ensuring lower complexity.
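Before moving to the solution, a short sketch of how the objective of $\mathcal{P}_1$ is evaluated may be helpful. The snippet below computes $\gamma_{m,n}$ and the sum rate from (9)-(11), assuming that the users in each beam are listed in their SIC decoding order so that user $n$ only sees intra-beam interference from users $i < n$; all inputs are placeholders.

```python
# Sketch of the sum-rate objective in (9)-(11) for given precoders and powers.
import numpy as np

def sum_rate(H_r, W, P, groups, sigma2=1.0):
    """H_r: N_RF x K reduced channel; W: N_RF x N_RF (column m = beam-m precoder);
    P: length-K power vector; groups: list of per-beam user-index lists in SIC order."""
    rate = 0.0
    for m, users in enumerate(groups):
        for n, k in enumerate(users):
            h = H_r[:, k]
            sig = P[k] * np.abs(h.conj() @ W[:, m]) ** 2
            intra = np.abs(h.conj() @ W[:, m]) ** 2 * sum(P[i] for i in users[:n])
            inter = sum(np.abs(h.conj() @ W[:, j]) ** 2 * sum(P[i] for i in g)
                        for j, g in enumerate(groups) if j != m)
            rate += np.log2(1.0 + sig / (intra + inter + sigma2))
    return rate
```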
The Proposed Beam-Specific Digital Precoding Optimization

In this subsection, we focus on optimizing the beam-specific digital precoding vectors $\{\mathbf{w}_m\}$ for a given set of power allocation parameters $\{p_{m,n}\}$. To accomplish this, we transform the non-convex precoding optimization problem $\mathcal{P}_{\mathrm{beam}}$, i.e., $\mathcal{P}_1$ with $\{p_{m,n}\}$ fixed, into an unconstrained optimization problem. Specifically, inspired by [22], we establish the following definition and proposition. According to Proposition 1, the problem $\mathcal{P}_{\mathrm{beam}}$ can be transformed into an equivalent unconstrained form $\bar{\mathcal{P}}_{\mathrm{beam}}$ with objective function $f_{\mathrm{beam}}(\mathbf{w})$. Proposition 2 then establishes the relationship between the optimal solution $\mathbf{w}^o$ of the problem $\mathcal{P}_{\mathrm{beam}}$ and the optimal solution $\bar{\mathbf{w}}^o$ of the new unconstrained optimization problem $\bar{\mathcal{P}}_{\mathrm{beam}}$; this implies that once we find the solution $\bar{\mathbf{w}}^o$, the solution $\mathbf{w}^o$ can be obtained according to Proposition 2.

Obviously, the objective function $f_{\mathrm{beam}}(\mathbf{w})$ remains non-convex, making it difficult to solve in polynomial time. To address this, we employ the Lagrangian dual transform to reframe the unconstrained problem $\bar{\mathcal{P}}_{\mathrm{beam}}$ [23], where $\mathbf{u}$ refers to a set of auxiliary variables $\{u_{m,n}\}$ in the transformed objective (21). When $\mathbf{w}_m$ is held fixed, the optimal $u_{m,n}$ can be obtained by solving $\partial f_{\mathrm{beam}}/\partial u_{m,n} = 0$, which yields

$$u^o_{m,n} = \gamma_{m,n}. \qquad (22)$$

Now, we incorporate $u^o_{m,n}$ into (21) and obtain (23), where $\mathrm{const}(\mathbf{u}) = \log_2(1+u_{m,n}) - u_{m,n}$ is a constant term. Applying the multidimensional quadratic transform further transforms (23) and leads to the expression (24), where $\mathbf{v}$ is the collection $\{v_{m,n}\}$. With $u_{m,n}$ fixed, the optimal $v_{m,n}$ can also be determined by setting $\partial f_{\mathrm{beam}}/\partial v_{m,n} = 0$, and the optimal value $v^o_{m,n}$ can be expressed in closed form as (25). Likewise, with the other variables fixed, the optimal $\mathbf{w}_m$ satisfies the closed-form expression (26).

The proposed algorithm is summarized in Algorithm 1. Unfortunately, although $N_{RF}$ is much smaller than $K$, the matrix inversion in the expression for $\mathbf{w}^o_m$ still remains high-dimensional, resulting in a computational complexity of $\mathcal{O}(N_{RF}^3)$ in each iteration, which may cause significant processing delays. To address this issue, the NSE has been explored as an alternative for approximating matrix inversion [24]; we leverage the NSE to simplify the matrix inversion in $\mathbf{w}^o_m$ as follows. Letting $\mathbf{A}$ denote the matrix to be inverted in (26), we can observe that the matrix $\mathbf{A}$ exhibits diagonal dominance. In such cases, the inverse of $\mathbf{A}$ can be equivalently expressed as follows [25]:

$$\mathbf{A}^{-1} = \sum_{k=0}^{\infty} \left(\mathbf{P}^{-1}\left(\mathbf{P}-\mathbf{A}\right)\right)^{k}\mathbf{P}^{-1}. \qquad (28)$$

By decomposing the matrix as $\mathbf{A} = \mathbf{D} + \mathbf{E}$, where $\mathbf{D}$ is a diagonal matrix consisting of the main diagonal elements of $\mathbf{A}$ and $\mathbf{E}$ is a hollow matrix consisting of the remaining elements, we can replace $\mathbf{P}$ in (28) with $\mathbf{D}$ and rewrite it as follows:

$$\mathbf{A}^{-1} = \sum_{k=0}^{\infty} \left(-\mathbf{D}^{-1}\mathbf{E}\right)^{k}\mathbf{D}^{-1}. \qquad (29)$$

Due to the high complexity of the full NSE algorithm, the truncated NSE, which retains only the first $k$ orders ($k+1$ terms) of the Neumann series, is a more commonly used approach. The specific formula can be expressed as follows:

$$\mathbf{A}^{-1} \approx \sum_{k'=0}^{k} \left(-\mathbf{D}^{-1}\mathbf{E}\right)^{k'}\mathbf{D}^{-1}. \qquad (30)$$

It should be noted that as the unfolding order increases (i.e., $k > 1$), the computational complexity of the proposed NSE-based algorithm may exceed $\mathcal{O}(N_{RF}^3)$. Therefore, to strike a balance between closely approximating the original precoding and reducing complexity, we choose $k = 1$; then

$$\mathbf{A}^{-1} \approx \mathbf{D}^{-1} - \mathbf{D}^{-1}\mathbf{E}\mathbf{D}^{-1}. \qquad (31)$$

Based on this estimation, the NSE-based approximation algorithm reduces the computational complexity from $\mathcal{O}(N_{RF}^3)$ to $\mathcal{O}(N_{RF}^2)$.
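The first-order truncation in (31) is easy to sanity-check numerically. The sketch below builds a small diagonally dominant test matrix (an assumption the derivation also relies on) and compares $\mathbf{D}^{-1} - \mathbf{D}^{-1}\mathbf{E}\mathbf{D}^{-1}$ against the exact inverse.

```python
# Numerical sanity check of the k = 1 truncated Neumann series in (31):
# A = D + E with D diagonal and E hollow, so A^{-1} ~ D^{-1} - D^{-1} E D^{-1}.
import numpy as np

rng = np.random.default_rng(1)
n = 16
E0 = 0.05 * rng.standard_normal((n, n))
A = 2.0 * np.eye(n) + E0 + E0.T           # symmetric, diagonally dominant (assumed)

d_inv = 1.0 / np.diag(A)                  # inverse of the diagonal part D
E = A - np.diag(np.diag(A))               # hollow off-diagonal part
# D^{-1} E D^{-1} via row/column scaling costs only O(n^2) operations.
A_inv_nse = np.diag(d_inv) - (d_inv[:, None] * E) * d_inv[None, :]

exact = np.linalg.inv(A)
rel_err = np.linalg.norm(A_inv_nse - exact) / np.linalg.norm(exact)
print(f"relative error of the k=1 NSE approximation: {rel_err:.4f}")
```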
By combining the aforementioned updates, Algorithm 1 provides a detailed description of the proposed precoding optimization algorithm.

In Algorithm 1, it can be demonstrated that the computational complexity is primarily determined by line 5. Within each iteration, the complexity of obtaining the optimal values $u^{(t)}_{m,n}$ in (22) and $v^{(t)}_{m,n}$ in (25) is comparatively low. Additionally, the complexity of finding the optimal value $\mathbf{w}^{(t)}_m$ in (31) is $\mathcal{O}(N_{RF}^2)$, due to the utilization of the NSE. Consequently, the computational complexity is significantly lower than the $\mathcal{O}(N_{RF}^3)$ complexity stated in (26).

The Adopted Power Allocation Optimization

The initial optimization problem $\mathcal{P}_1$ can be transformed into the following problem when $\{\mathbf{w}_m\}$ is fixed:

$$\mathcal{P}_{\mathrm{power}}:\ \max_{\{p_{m,n}\}}\ \sum_{m=1}^{N_{RF}}\sum_{n\in\mathcal{S}_m} R_{m,n} \quad \mathrm{s.t.}\ \sum_{m=1}^{N_{RF}}\sum_{n\in\mathcal{S}_m} p_{m,n}\left\|\mathbf{w}_m\right\|^2 \leq P,\quad p_{m,n}\geq 0.$$

Note that the problem remains challenging. To address this difficulty, we introduce Lemma 1 to simplify problem $\mathcal{P}_{\mathrm{power}}$.

Lemma 1. Let $f(a) = -\frac{ab}{\ln 2} + \log_2 a + \frac{1}{\ln 2}$, where $a \in \mathbb{R}^{1\times 1}$ is a positive scalar. Then we have

$$\max_{a>0} f(a) = -\log_2 b, \qquad (33)$$

where the optimal solution is $a^o = \frac{1}{b}$.

Proof. Since $f(a)$ is concave, the optimal solution of $f(a)$ can be obtained by setting $\partial f(a)/\partial a = 0$, which gives $a^o = 1/b$, at which the maximum value of $f(a)$ is $-\log_2 b$.

Moreover, if we use the minimum mean square error (MMSE) criterion to estimate $s_{m,n}$, we have the following expression:

$$e_{m,n} = \mathbb{E}\left[\left|c_{m,n}\,\hat{y}_{m,n} - s_{m,n}\right|^2\right], \qquad (35)$$

where $c_{m,n}\in\mathbb{C}^{1\times 1}$ denotes the channel equalization coefficient and $\hat{y}_{m,n}$ is defined previously in (8). Substituting $\hat{y}_{m,n}$ into (35), we obtain the explicit mean square error expression (36). According to [11], the optimal equalization coefficient $c_{m,n}$ can be calculated from $\partial e_{m,n}/\partial c_{m,n} = 0$, which yields the closed form $c^o_{m,n}$ in (38). Substituting (38) into (36), we can obtain the optimal MMSE expression (39). According to the extension of the Sherman-Morrison-Woodbury formula [27], $(1+\gamma_{m,n})^{-1}$ can be reformulated as (41). We observe that expression (41) has the same form as the MMSE expression (39), i.e., we have $(1+\gamma_{m,n})^{-1} = e^o_{m,n}$.

Using Lemma 1, we can equivalently rewrite $\mathcal{P}_{\mathrm{power}}$ as the problem $\bar{\mathcal{P}}_{\mathrm{power}}$ in (43), where $a_{m,n} > 0$ is an introduced slack variable. We propose to iteratively optimize $\{p_{m,n}\}$, $\{c_{m,n}\}$, and $\{a_{m,n}\}$ using an alternating optimization algorithm; the optimal values $c^o_{m,n}$ and $a^o_{m,n}$ are obtained in closed form from (38) and (44), respectively. After obtaining the optimal values $c^o_{m,n}$ and $a^o_{m,n}$ in the iteration, the optimal value $p^o_{m,n}$ can be obtained by solving the remaining problem in $\{p_{m,n}\}$, denoted (45). We observe that (45) is a convex optimization problem, which can be solved by using the Lagrange function $L(\mathbf{p}, \lambda)$ with multiplier $\lambda \geq 0$. Then, the Karush-Kuhn-Tucker (KKT) conditions of the problem can be obtained from

$$\frac{\partial L(\mathbf{p},\lambda)}{\partial p_{m,n}} = 0.$$

Finally, the optimal solution $p^o_{m,n}$ of (45) can be found in closed form as (48), with $\lambda$ chosen to satisfy the power constraint.

We can see that the values of $c^o_{m,n}$, $a^o_{m,n}$, and $p^o_{m,n}$ obtained in each iteration are closed-form optimal solutions because (37), (33), and (45) are all convex after a sequence of transformations. The iterative update of $c^o_{m,n}$, $a^o_{m,n}$, and $p^o_{m,n}$ will only increase or maintain the objective function in (43). A monotonically growing sequence of objective function values in (43) is thus obtained through iterative updating. However, this sequence has an upper bound because of the transmit power restriction. Therefore, the proposed iterative optimization algorithm for power allocation converges to a stationary solution of problem $\bar{\mathcal{P}}_{\mathrm{power}}$. The power allocation optimization technique is described in detail in Algorithm 2, and we summarize the overall proposed algorithm in Algorithm 3.
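Since the closed form of $p^o_{m,n}$ in (48) did not survive extraction, the sketch below only illustrates the outer step: a bisection search for the multiplier $\lambda$ that makes a generic, monotonically decreasing power map meet the total power budget. The water-filling-style stand-in map is an assumption for demonstration, not the paper's expression.

```python
# Sketch of the bisection search for the Lagrange multiplier lambda used to
# enforce the total power budget in the KKT solution of the power subproblem.
import numpy as np

def bisect_lambda(power_of_lambda, P_total, lo=1e-9, hi=1e3, tol=1e-8):
    """Find lambda such that the allocated powers sum to the budget P_total.
    power_of_lambda must be monotonically decreasing in lambda."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if power_of_lambda(mid).sum() > P_total:   # too much power -> raise lambda
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy stand-in: water-filling-like powers over per-user gains g (synthetic).
g = np.array([2.0, 1.0, 0.5, 0.25])
p_of = lambda lam: np.maximum(1.0 / lam - 1.0 / g, 0.0)
lam = bisect_lambda(p_of, P_total=4.0)
print(lam, p_of(lam), p_of(lam).sum())             # powers sum to ~4.0
```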
The computational complexity of the proposed algorithm mainly arises from the iterative part. We observe that in each iteration, the complexity of obtaining the optimal values $c^o_{m,n}$ in (38) and $a^o_{m,n}$ in (44) is linear in the number of users, i.e., $\mathcal{O}(K)$. The multiplier $\lambda$ in (48) can be obtained by using the Newton or bisection methods, both of which have a complexity of $\mathcal{O}(K^2 \log_2 \delta)$, where $\delta$ represents the desired accuracy. The overall complexity of the suggested power allocation algorithm is therefore $\mathcal{O}(T_{\max} K^2 \log_2 \delta)$, where $T_{\max}$ is the maximum number of iterations. Consequently, the complexity of the proposed joint precoding design and power allocation optimization algorithm is $\mathcal{O}(T_{\max} K^2 \log_2 \delta + T_{\max} N_{RF}^3)$, while the computational complexity of the algorithm without NSE processing is $\mathcal{O}(T_{\max} K^2 \log_2 \delta + T_{\max} N_{RF}^4)$.

Simulation Result

The performance of the proposed joint optimization algorithm for the mmWave beamspace MIMO-NOMA scheme is evaluated by numerical simulations in this section.

Simulation Setup

In this paper, we consider a typical single-cell downlink mmWave massive MIMO system. The BS is equipped with a ULA of $N = 256$ transmit antennas that communicates with $K$ users simultaneously. The system bandwidth is assumed to be 1 Hz, and the total transmit power is set to $P = 32$ mW (15 dBm) [11]. For all users' channels, we assume one LoS component and $L = 2$ NLoS components, where $\beta_{k,l}$ follows a uniform distribution within $[-\frac{1}{2}, \frac{1}{2}]$ for $1 \leq l \leq L$. The SNR is defined as $P/\sigma^2$, and the maximum number of iterations is $T_{\max} = 50$. We consider the following four typical mmWave massive MIMO solutions for comparison, using the same system configuration in all of them to conduct a fair comparison: "traditional fully digital MIMO" (FDM), "traditional beamspace MIMO" (BM), and "traditional MIMO-OMA" (MO); in particular, we compare our approach with the scheme of reference [11], a classic and highly effective "beamspace MIMO-NOMA" (BMN) system, as a benchmark.

We evaluated the performance in terms of the energy efficiency and spectral efficiency of each of the baseline systems mentioned above. According to [20], the energy efficiency can be expressed as:

$$\eta_{EE} = \frac{R_{\mathrm{sum}}}{P_t + N_{RF} P_{RF} + N_{SW} P_{SW} + P_{BB}},$$

where $P_t$ represents the total transmit power, $P_{RF}$ represents the power consumed by each RF chain, $P_{SW}$ represents the power consumed by each switch, $P_{BB}$ represents the power consumed at the baseband, and $N_{SW}$ denotes the number of switches. For the parameters, we adopted the following common values: $P_{RF} = 300$ mW, $P_{SW} = 5$ mW, and $P_{BB} = 200$ mW.
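The energy-efficiency metric above is straightforward to compute; the sketch below uses the parameter values just listed, with the number of switches treated as an input, since its value depends on the selecting network, and the example sum rate is a made-up number.

```python
# Sketch of the energy-efficiency metric with the component powers above.
def energy_efficiency(sum_rate_bps_hz, P_t, n_rf, n_sw,
                      P_RF=0.3, P_SW=0.005, P_BB=0.2):
    """Return bps/Hz per watt: R / (P_t + N_RF*P_RF + N_SW*P_SW + P_BB)."""
    return sum_rate_bps_hz / (P_t + n_rf * P_RF + n_sw * P_SW + P_BB)

# Example: 60 bps/Hz sum rate, 32 mW transmit power, 32 RF chains and switches.
print(energy_efficiency(60.0, P_t=0.032, n_rf=32, n_sw=32))   # ~6.0 bps/Hz/W
```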
NSE, not only reduced the complexity of the original algorithm but also achieved comparable performance.These findings highlight the effectiveness of the NSE approximation algorithm.Figure 4 presents a comparison of spectral efficiency versus SNRs for the proposed system and the baseline systems.In particular, we compared the spectral efficiency of the proposed algorithm in the beamspace MIMO-NOMA system with the classical BMN [11] algorithm, both for 128 users and 32 users.The results indicate that in both scenarios, BMNN outperformed BMN [11], with the advantage becoming more pronounced as the number of users increased.When there were 32 users, the proposed BMNN scheme outperformed the BMN [11], BM, and MO schemes in terms of spectral efficiency.Particularly, compared to BMN [11], the performance gain of BMNN came mainly from the optimization of precoding for different beams in the first stage.Moreover, the proposed BMNN exhibited significantly better performance than BM, benefiting from the integration of beamspace MIMO and NOMA technologies, which enabled simultaneous service to multiple users within each beam and effectively improved spectral efficiency.Since NOMA can achieve higher spectral efficiency than OMA, it is evident that the proposed BMNN also outperforms MO in terms of spectral efficiency.Figure 4 presents a comparison of spectral efficiency versus SNRs for the proposed system and the baseline systems.In particular, we compared the spectral efficiency of the proposed algorithm in the beamspace MIMO-NOMA system with the classical BMN [11] algorithm, both for 128 users and 32 users.The results indicate that in both scenarios, BMNN outperformed BMN [11], with the advantage becoming more pronounced as the number of users increased.When there were 32 users, the proposed BMNN scheme outperformed the BMN [11], BM, and MO schemes in terms of spectral efficiency.Particularly, compared to BMN [11], the performance gain of BMNN came mainly from the optimization of precoding for different beams in the first stage.Moreover, the proposed BMNN exhibited significantly better performance than BM, benefiting from the integration of beamspace MIMO and NOMA technologies, which enabled simultaneous service to multiple users within each beam and effectively improved spectral efficiency.Since NOMA can achieve higher spectral efficiency than OMA, it is evident that the proposed BMNN also outperforms MO in terms of spectral efficiency.Figure 4 presents a comparison of spectral efficiency versus SNRs for the propose system and the baseline systems.In particular, we compared the spectral efficiency of th proposed algorithm in the beamspace MIMO-NOMA system with the classical BMN [1 algorithm, both for 128 users and 32 users.The results indicate that in both scenario BMNN outperformed BMN [11], with the advantage becoming more pronounced as th number of users increased.When there were 32 users, the proposed BMNN scheme ou performed the BMN [11], BM, and MO schemes in terms of spectral efficiency.Particu larly, compared to BMN [11], the performance gain of BMNN came mainly from the op timization of precoding for different beams in the first stage.Moreover, the propose BMNN exhibited significantly better performance than BM, benefiting from the integr tion of beamspace MIMO and NOMA technologies, which enabled simultaneous servic to multiple users within each beam and effectively improved spectral efficiency.Sinc NOMA can achieve higher spectral efficiency than OMA, it is evident that the propose BMNN 
also outperforms MO in terms of spectral efficiency.that increasing SNR can lead to a substantial growth in energy efficiency, and within the same system, for both 32 users and 128 users, our algorithm outperformed BMN [11].Furthermore, in different systems with 32 users, the energy efficiency of the proposed BMNN was higher than that of the other four baseline systems.Specifically, compared to BM, our proposed BMNN achieved higher energy efficiency, by integrating NOMA and beamformed MIMO, allowing each beam to serve multiple users. Figure 5 illustrates the comparison of energy efficiency versus SNRs with K = 32 an K=128 users for the proposed system and the baseline systems.It can be clearly seen th increasing SNR can lead to a substantial growth in energy efficiency, and within the sam system, for both 32 users and 128 users, our algorithm outperformed BMN [11].Furthe more, in different systems with 32 users, the energy efficiency of the proposed BMNN w higher than that of the other four baseline systems.Specifically, compared to BM, our pr posed BMNN achieved higher energy efficiency, by integrating NOMA and beamforme MIMO, allowing each beam to serve multiple users. ② Comparison of performance with different users The aforementioned results were obtained while considering varying SNR, howeve in real communication systems, especially in massive MIMO systems, the number of a cessed users plays a significant role.Therefore, we further investigated the spectrum effi ciency performance of the two proposed solutions under different user scenarios. Figure 6 depicts how spectrum efficiency varies with the number of users.Bo curves exhibit an upward trend with increasing user count, and the spectrum efficien growth curves of the two proposed optimization structures yield similar results.The aforementioned results were obtained while considering varying SNR, however, in real communication systems, especially in massive MIMO systems, the number of accessed users plays a significant role.Therefore, we further investigated the spectrum efficiency performance of the two proposed solutions under different user scenarios. Figure 6 depicts how spectrum efficiency varies with the number of users.Both curves exhibit an upward trend with increasing user count, and the spectrum efficiency growth curves of the two proposed optimization structures yield similar results. Sensors 2023, 23, x FOR PEER REVIEW 17 of 24 Figure 5 illustrates the comparison of energy efficiency versus SNRs with K = 32 and K=128 users for the proposed system and the baseline systems.It can be clearly seen that increasing SNR can lead to a substantial growth in energy efficiency, and within the same system, for both 32 users and 128 users, our algorithm outperformed BMN [11].Furthermore, in different systems with 32 users, the energy efficiency of the proposed BMNN was higher than that of the other four baseline systems.Specifically, compared to BM, our proposed BMNN achieved higher energy efficiency, by integrating NOMA and beamformed MIMO, allowing each beam to serve multiple users. ② Comparison of performance with different users The aforementioned results were obtained while considering varying SNR, however, in real communication systems, especially in massive MIMO systems, the number of accessed users plays a significant role.Therefore, we further investigated the spectrum efficiency performance of the two proposed solutions under different user scenarios. 
Figure 7 illustrates a comparison of the spectrum efficiency of the four schemes under different user scenarios at 0 dB. The BMNN scheme outperformed the BMN [11], BM, and MO schemes. Moreover, compared to the traditional BM schemes, the BMNN optimization scheme proposed in this study further improved spectrum efficiency.

Figure 8 displays the energy efficiency performance of all considered schemes as the number of users increases. It is obvious that the proposed algorithm remained superior among the five schemes, which proves the effectiveness of the proposed scheme. Another noteworthy observation is that the performance of our proposed BMNN algorithm surpassed that of BMN [11] in terms of energy efficiency. This is mainly attributed to the fact that BMN [11] uses the ZF algorithm commonly employed in many studies for the precoding part, whereas our proposed algorithm optimizes the precoding parameters, thereby validating the necessity of optimizing the precoding design parameters in our algorithm.
Figure 9 shows how spectral efficiency varies with an increasing number of users at SNR levels of −5 dB, 0 dB, and 5 dB. It is important to note that, across all these SNR conditions, the BMNN algorithm we propose consistently outperformed the other schemes, and its superiority becomes even more pronounced as the SNR increases.

③ Comparison of performance with different antennas

From Figure 10, it can be observed that BMNN exhibited a clear advantage over the other algorithms until the number of antennas increased to 200. Beyond this point, the spectrum efficiency of the FDM algorithm surpassed that of the others. This is primarily attributed to the increase in the number of antennas in the FDM algorithm. With more antennas, precise beamforming becomes possible, allowing for more accurate signal focusing. Signals can be aimed more accurately at the receivers, reducing signal scattering and interference and ultimately leading to improved spectral efficiency.
However, it is worth noting that the FDM algorithm typically requires more hardware and signal processing resources, which can lead to higher power consumption. As Figure 11 corroborates, the energy efficiency of the FDM tends to be lower. Nevertheless, as seen in the graph, our proposed BMNN algorithm achieved the highest energy efficiency among all algorithms, highlighting its potential to enhance system performance with a clear advantage.

Conclusions

In this research, we addressed the joint optimization problem of precoding and power allocation in massive MIMO-NOMA networks, aiming to maximize the sum rate of all devices. To tackle this challenge, we transformed the original optimization problem into an unconstrained problem for the precoding subproblem. We employed the FP approach to handle the non-convex problem, resulting in three equivalent problems and a closed-form expression for the precoding. For the power allocation subproblem, which remains non-convex, we utilized an MMSE-based dynamic power allocation scheme to solve it. Simulation results demonstrated that the proposed beamspace MIMO-NOMA system outperforms the baselines in terms of both spectrum and energy efficiency. In future work, we intend to extend the proposed optimization framework for precoding from beam-based optimization to user-based optimization, aiming to further improve system performance.
Specifically, the gradient of the rate of the nth user in the mth beam with respect to the variable w_m can be expressed as in (A8). This implies that w_m^o satisfies the first-order optimality condition with respect to the precoding, where λ is the Lagrange multiplier. Moreover, since w_m^o satisfies the power constraint and also satisfies the complementary slackness condition, it further satisfies the KKT conditions of the original problem P_beam. Thus, w_m^o is a nontrivial stationary point of P_beam, and the sufficiency proof is complete. The necessity of the proposition can be demonstrated by reversing the steps of the sufficiency proof.

Figure 1. The system model of the beamspace MIMO architecture.

The beam index set is symmetric and centered around zero. The spatial direction of the channel is defined as θ = (d/λ) sin(ϕ), where λ represents the wavelength, d = λ/2 denotes the antenna spacing, and ϕ denotes the physical direction of the corresponding path, satisfying −π/2 ≤ ϕ ≤ π/2. The lens antenna array acts as a discrete Fourier transform matrix U. With the precoding and power variables coupled, the problem becomes a non-convex optimization problem that is difficult to solve directly; furthermore, it is highly nonlinear. Additionally, the optimization of the precoding {w_m} is performed at the beam level, while the complexity of the power allocation update {p_m,n(t)} in (25) is linear in the number of RF chains, i.e., O(N_RF).

Figure 3. Spectrum efficiency comparison versus SNRs of the two schemes with different users.
Figure 4. Spectrum efficiency comparison versus SNRs with different users.
Figure 5. Energy efficiency comparison versus SNRs with different users.
Figure 6. Spectrum efficiency comparison versus users of the two schemes at SNR = 0 dB.
Figure 9. Spectral efficiency comparison versus users with different SNRs.

Likewise, the gradient of the rate of the nth user in the xth (x ≠ m) beam with respect to w_m, ∇_{w_m} R_{x,y}, is given in (A9). Simplifying (A8) and (A9) and substituting them into (A4), the expression can be written as

∇_{w_m} R_sum(w_m^o) − λ w_m^o = 0, λ > 0. (A11)
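For readability, the KKT system invoked in this argument can be written out explicitly. The following is a hedged reconstruction from the surrounding text, in which P_m is assumed to denote the power budget of beam m:

```latex
\begin{aligned}
&\nabla_{\mathbf{w}_m} R_{\mathrm{sum}}(\mathbf{w}_m^{o}) - \lambda\,\mathbf{w}_m^{o} = 0
  && \text{(stationarity, cf. (A11))}\\
&\|\mathbf{w}_m^{o}\|^2 \le P_m
  && \text{(primal feasibility: power constraint)}\\
&\lambda \ge 0
  && \text{(dual feasibility)}\\
&\lambda\big(\|\mathbf{w}_m^{o}\|^2 - P_m\big) = 0
  && \text{(complementary slackness)}
\end{aligned}
```

When the power constraint is active, λ > 0 and the stationarity condition reduces exactly to (A11).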
Definition 1 (Trivial Stationary Point). If a point X satisfies HX = 0, which results in a zero sum rate, we say that it is a trivial stationary point of the original problem P_1.

Proof. See Appendix A.

Algorithm 1 (Proposed Precoding Framework). Inputs: beamspace channel vectors h_{m,n} for ∀m, n; power allocation parameters p_{m,n} for ∀m, n; noise variance σ²; maximum number of iterations T_max.

Algorithm 2 (Proposed Power Allocation Framework). Output: p_{m,n} for ∀m, n.

Algorithm 3 (Proposed Joint Precoding and Power Allocation Framework). Outputs: w_m and p_{m,n} for ∀m, n.
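Only the input and output lists of the algorithm boxes survive in this text, so, as a structural illustration only, the sketch below shows the kind of alternating loop that Algorithm 3 describes. The per-beam precoding update and the MMSE-based power update are hypothetical placeholders (a matched filter and a proportional split); the actual closed-form FP precoding update and MMSE-based dynamic power allocation are given by the paper's equations and are not reproduced here.

```python
import numpy as np

def update_precoding(H, P, sigma2):
    """Placeholder for the closed-form FP precoding update (Algorithm 1)."""
    # Matched-filter stand-in: normalize each beam's channel row.
    return H.conj() / np.linalg.norm(H, axis=1, keepdims=True)

def update_power(H, W, sigma2, p_total):
    """Placeholder for the MMSE-based dynamic power allocation (Algorithm 2)."""
    gains = np.abs(np.einsum("mk,mk->m", H, W.conj())) ** 2
    return p_total * gains / gains.sum()  # proportional stand-in

def joint_framework(H, sigma2=1.0, p_total=1.0, t_max=20):
    """Alternating joint precoding and power allocation (shape of Algorithm 3)."""
    M, _ = H.shape
    P = np.full(M, p_total / M)            # uniform initialization
    for _ in range(t_max):                 # iterate up to T_max times
        W = update_precoding(H, P, sigma2)
        P = update_power(H, W, sigma2, p_total)
    return W, P

# Toy beamspace channel: M beams x K antennas (complex Gaussian)
rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 16)) + 1j * rng.normal(size=(4, 16))) / np.sqrt(2)
W, P = joint_framework(H)
print(P)
```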
2023-09-24T15:31:51.361Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "06e2356406cefc362acb0d2a32cb06d8b79459a8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/23/18/7996/pdf?version=1695212678", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "61887150bb33ad679948c7a1151b09c5350edc21", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
259701805
pes2o/s2orc
v3-fos-license
Trends in disease burden of hepatitis B infection in Jiangsu Province, China, 1990–2021

Background: The incidence of hepatitis B virus (HBV) infection has decreased year by year in China after the expansion of vaccination, but the disease burden in Jiangsu Province remains high.

Methods: Year-by-year HBV incidence data for Jiangsu Province from 1990 to 2021 were collected. The incidence rates for male and female age groups were clustered by systematic clustering, and the incidence rates of each age group were analyzed using a Joinpoint regression model and an age-period-cohort (APC) effect model.

Results: The Joinpoint regression model and the APC model showed a general decrease in HBV incidence in both males and females. In addition, the results of the APC model showed that age, period, and cohort effects all influenced the incidence of HBV, and the incidence was higher in males than in females. The incidence is highest in the population between the ages of 15 and 30 years (mean: 21.76/100,000), and is higher in males (mean: 31.53/100,000) than in females (mean: 11.67/100,000). Another high-risk group is people over 60 years of age (mean: 21.40/100,000), again with males (mean: 31.17/100,000) exceeding females (mean: 11.63/100,000). The period effect of the APC model suggests that HBV vaccination is effective in reducing the incidence of HBV in the population.

Conclusions: The incidence of HBV in Jiangsu Province showed a gradual downward trend, but the disease burden in males was higher than that in females. The incidence was higher, and increased rapidly, in the population between the ages of 15 and 30 years and in people over 60 years of age. More targeted prevention and control measures should be implemented for males and the elderly.

Introduction

Hepatitis B, an acute and chronic infectious disease caused by the hepatitis B virus (HBV), is one of the most common infectious diseases worldwide and a leading cause of end-stage liver disease, hepatocellular carcinoma, and death (Bhattacharya & Thio, 2010). According to the World Health Organization (WHO), 350 million to 400 million people in the world are infected with chronic HBV (Din et al., 2020; Libbus & Phillips, 2009). China has a large population and also a large number of HBV infections, of which chronic HBV accounts for a large proportion; more than 93 million people are currently infected with HBV, placing a great burden on the country. Yue T et al. studied the overall burden of HBV and HCV in China, noting that the burden of HBV/HCV infection in China has decreased over the past 30 years, but the incidence of HBV remains high, especially in males (Yue et al., 2022). Ji W et al. studied the disease burden of HBV in four regions of the Xinjiang Autonomous Region and found that the disease burden of HBV might expand in the future and that incidence differed between males and females (Ji et al., 2019). Beyond these, there are few studies of the HBV disease burden in China, especially in Jiangsu Province, so more research is needed.

In 1992, the Ministry of Health of China incorporated the HBV vaccine into the management of children's planned immunization and promulgated the Implementation Plan of National HBV Vaccine Immunization. In 2002, with the approval of the State Council, the HBV vaccine was incorporated into children's planned immunization (Bai et al., 2022; Jin et al., 2021). Jiangsu Province began to fully implement neonatal HBV vaccination in 2003.
In 2005, all neonatal HBV vaccinations became completely free of charge, and in 2009 the HBV vaccination status of people under 15 years old was checked and missed doses were re-administered (Wang & Cui, 2014). According to the historical statistics of Jiangsu Province, the incidence of HBV in Jiangsu Province decreased from 242.00/100,000 to 15.42/100,000, and the serological survey in 1992 showed that 20.0% of the general population were HBV carriers. According to the results of the serological survey in 2006, the prevalence of HBV surface antigen in the population aged 1–29 years was 7.7% (Sun et al., 2021).

In this study, we collected data on the incidence of HBV in all age groups of males and females in Jiangsu Province from 1990 to 2021. By cluster analysis, we clustered the incidence of hepatitis B in the different age groups of males and females, respectively, and examined the groups falling in the same cluster. The trend of the clustering results was fitted with a Joinpoint regression model, and the trends of the different clusters and their influence on the trend across all age groups were discussed. An APC model was used to analyze the influence of age, period, and cohort effects on the incidence of HBV.

Data collection

The Jiangsu CDC provided the incident cases and HBV incidence rates for men and women of all ages, as well as the male and female populations of Jiangsu Province, from 1990 to 2021. World standard population data came from the WHO world standard (https://seer.cancer.gov/stdpopulations/world.who.html).

Cluster analysis

Systematic clustering is a multivariate statistical analysis method. Its basic idea is as follows: each sample is first regarded as its own class; the Euclidean distances between classes are calculated; the two classes with the smallest distance are combined into a new class; and the new class is then merged with the remaining classes again according to distance. These steps are repeated until all subclasses are combined into one class (Kabir et al., 2011; Zhao et al., 2011).

Joinpoint regression model

Joinpoint is statistical software developed by the US National Cancer Institute to study potential trends in cancer mortality or morbidity. The Joinpoint regression model in this software is mainly used to explore changing trends of diseases and usually consists of several continuous line segments. The model is based on identifying the inflection points of the series, i.e., using a grid search to determine the location and number of joinpoints, splitting the data into segments based on those points, and calculating the annual percentage change, average annual percentage change, and confidence intervals for each segment to derive the trend of incidence or mortality of the disease in question. In determining the segmentation points, a Monte Carlo permutation test is used for model selection: the sums of squared errors and mean squared errors of the various segmentation-point configurations are calculated, the configuration with the smallest mean squared error is selected as the best set of segmentation points, and the model with the smallest sum of squared errors is the best model; generally, no more than five segmentation points are set (Qiu et al., 2009; Wong et al., 2018).

Age-period-cohort effect model

The age-period-cohort effect on HBV incidence was estimated in this study using the APC network tool (https://dceg.cancer.gov/tools/analysis/apc) (Gao et al., 2020).
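To make the joinpoint procedure concrete, the following is a minimal sketch of the segmented-regression idea, assuming a single breakpoint and log-linear rates; it illustrates the grid search over candidate breakpoints, not the NCI Joinpoint software itself, and the incidence series is hypothetical.

```python
import numpy as np

def fit_segment(x, y):
    """Ordinary least-squares line fit; returns (coefficients, SSE)."""
    A = np.vstack([x, np.ones_like(x, dtype=float)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = float(np.sum((y - A @ coef) ** 2))
    return coef, sse

def joinpoint_one_break(years, rates):
    """Grid search for a single joinpoint on log rates, minimizing total SSE."""
    logy = np.log(rates)
    best = None
    for k in range(2, len(years) - 2):          # candidate breakpoint indices
        _, sse_left = fit_segment(years[: k + 1], logy[: k + 1])
        coef_right, sse_right = fit_segment(years[k:], logy[k:])
        total = sse_left + sse_right
        if best is None or total < best[0]:
            # Annual percentage change of a segment with slope b: (exp(b)-1)*100
            best = (total, years[k], (np.exp(coef_right[0]) - 1) * 100)
    return best

# Hypothetical incidence series per 100,000 (illustrative, not Jiangsu data)
years = np.arange(1990, 2022)
rates = np.concatenate([60 * 0.94 ** np.arange(18), 25 * 0.90 ** np.arange(14)])
sse, bp_year, apc_right = joinpoint_one_break(years, rates)
print(f"breakpoint near {bp_year}, APC after breakpoint = {apc_right:.1f}%/year")
```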
The age-period-cohort (APC) effect model is a commonly used statistical model for describing and explaining long-term trends in individual or population disease over time. The model decomposes cohort data along three dimensions, age, period, and cohort, and analyzes the influence of these three factors on disease. Among the main results, the age effect mainly reflects the impact of aging on disease incidence or mortality; the period effect refers to the impact on disease incidence or mortality of changes over calendar time; and the cohort effect is often used to describe the long-term trend in disease incidence or mortality among people born in the same era (Yue et al., 2022; Zou et al., 2020). The longitudinal age-specific incidences from the model are age-specific incidences adjusted for period bias and represent fitted longitudinal age-specific rates in the reference cohort. The cross-sectional age-specific incidence is an age-specific incidence adjusted for cohort bias and represents a fitted cross-sectional age-specific rate in the reference period. The period rate ratio is the age-specific rate ratio of each period with the selected control period as reference. The cohort rate ratio is the age-specific rate ratio of each cohort with respect to the selected control cohort (Kupper et al., 1985; Ma et al., 2021). P < 0.05 was considered statistically significant in this study. R 4.2.2 software was used for data processing and for drawing the figures, and all incidence rates are expressed per 100,000.

Time trend of disease burden in Jiangsu Province

From 1990 to 2021, the cumulative number of HBV cases in Jiangsu was 378,486 in males and 140,452 in females, and the age-standardized incidence rate in males (mean: 29.69/100,000) was higher than that in females (mean: 10.94/100,000). The age-standardized incidence rate trends for males and females from 1990 to 2021 are shown in Fig. 4A and B. Both figures make it evident that, overall, the incidence of HBV is higher in males than in females. The cohort plots for males and females (Fig. 1) and the heat maps (Fig. 2) show that incidence is mainly concentrated in the 15–30 age range, with an average incidence of 31.53/100,000 for males and 11.67/100,000 for females. The incidence of HBV in males and females in the 0–45 age group has been declining since 2005, while it has been increasing in the 45–84 age group. Among males born between 195 and 1962, incidence was highest in the 30–34 age group, with an average annual incidence of 16.82/100,000. Among females born between 1963 and 1967, the 25–29 age group had the highest incidence, with an average annual incidence of 5.96/100,000. Male incidence was at most 2.82 times the female incidence.

Cluster analysis

The clustering results are shown in Fig. 3; males and females were each clustered into three groups according to age. For males, ages 55–85 and above form the first group, ages 0–14 the second group, and ages 15–54 the third group. For females, the first group is ages 50–85 and above, the second group is ages 15–24, and the third group is ages 0–14 together with ages 25–49.

Joinpoint regression model analysis

In this study, the world standard population was used to standardize the incidence of HBV in Jiangsu Province, and the standardized incidence was calculated.
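As a minimal sketch of the direct standardization used here, the snippet below weights hypothetical age-specific rates by standard-population weights; the numbers shown are placeholders, not the actual WHO world standard values or Jiangsu data.

```python
import numpy as np

# Hypothetical age-specific incidence rates per 100,000 (5-year bands)
rates = np.array([2.1, 5.4, 18.0, 30.2, 25.7, 19.3])
# Placeholder standard-population weights for the same bands
weights = np.array([0.18, 0.17, 0.17, 0.17, 0.16, 0.15])
weights = weights / weights.sum()  # normalize so weights sum to 1

# Direct standardization: weighted average of the age-specific rates
asr = float(np.sum(rates * weights))
print(f"age-standardized rate: {asr:.2f} per 100,000")
```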
Then the Joinpoint regression model was used to fit the standardized incidence rate of HBV for males and females, respectively, with the maximum number of turning points set at 5 as recommended by the software. The final turning points were two for males, in 2006 and 2012, and three for females, in 2001, 2004, and 2010 (Fig. 4A and B). For the female population of all age groups, the three turning points gave four segments: 1990–2001, 2001–2004, 2004–2010, and 2010–2021. The annual percentage change in the standardized incidence rate of HBV among the female population in 1990–2001 was −6.18% (95% CI: −8.0%, −4.3%), and the difference was statistically significant. From 2004 to 2010, the annual percentage change was −7.09% (95% CI: −8.0%, −4.3%), and the difference was statistically significant. Since the segments for 2001–2004 and 2010–2021 were not statistically significant, only 1990–2001 and 2004–2010 were studied.

For the male population (Fig. 5A), the 15–54 age group had the largest mean standardized incidence in the three stages, at 55.22/100,000, 34.92/100,000, and 25.10/100,000, respectively, while the fastest declines occurred in the 0–14 age group, at 86.45% and 45.53%, respectively. For the female population (Fig. 5B), the 15–24 age group had the highest mean standardized incidence in the first three stages, at 18.02/100,000, 20.40/100,000, and 14.27/100,000, respectively, and the 50–85-and-above age group had the highest mean standardized incidence, 13.12/100,000, in the fourth stage.

Age-period-cohort analysis of HBV incidence in Jiangsu Province

An APC model was used to analyze the incidence of HBV in the male and female populations from 1992 to 2021. The chi-square test results for each index were statistically significant, as shown in Table 1. The net drift represents the overall annual percentage change across age groups adjusted over time. From 1992 to 2021, the net drift in the incidence rate of HBV among males was −3.3718% (95% CI: −3.8202%, −2.9214%) per year, indicating that the incidence of HBV among males gradually decreased at a rate of 3.3718% per year. The net drift in the incidence rate of HBV among females was −1.3538% (95% CI: −1.7913%, −0.9143%) per year. The local drift represents the annual percentage change in each age group; the net drift and local drift results are shown in Fig. 6A. As shown in Fig. 6B, the longitudinal age-specific HBV incidence rates of males and females first increased sharply and then decreased, but the longitudinal rate for males remained higher than that for females, with the two rates becoming approximately equal only around the age of 80. As shown in Fig. 6C, the cross-sectional age-specific HBV incidence in males and females first increased sharply and then stabilized, but the cross-sectional rate in males was always higher than that in females. The period rate ratio (Fig. 6D), taking June 2004 as the reference (RR = 1), first decreased and then showed an upward trend for females, and the difference was statistically significant (P < 0.001). For males, the rate ratio also decreased and then increased, and the difference was statistically significant (P < 0.001). The cohort rate ratio (Fig. 6E), with the 1962 cohort as the reference (RR = 1), first increased and then decreased for females, and the difference was statistically significant (P < 0.001). For males, the rate ratio also first increased and then decreased, and the difference was statistically significant (P < 0.001).
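The NCI APC web tool was used for the actual analysis; purely as an illustration of the underlying idea, the sketch below fits a Poisson rate model with age and period factors to hypothetical Lexis-table data. The full age + period + cohort specification is not identifiable because cohort = period − age, which is why APC tools report estimable functions such as the net drift and rate ratios instead.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical Lexis table: 17 five-year age groups x 6 five-year periods
rng = np.random.default_rng(0)
ages = np.repeat(np.arange(0, 85, 5), 6)
periods = np.tile(np.arange(1995, 2025, 5), 17)
cohorts = periods - ages
pop = rng.integers(50_000, 500_000, size=ages.size)
cases = rng.poisson(pop * 20e-5)  # ~20/100,000 baseline rate

df = pd.DataFrame({"age": ages, "period": periods,
                   "cohort": cohorts, "cases": cases, "pop": pop})

# Poisson GLM of case counts with a log-population offset.
# Only age and period enter directly; the cohort effect would make
# the model unidentifiable if added as a third free factor.
model = smf.glm("cases ~ C(age) + C(period)",
                data=df, family=sm.families.Poisson(),
                offset=np.log(df["pop"])).fit()
print(model.params.head())
```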
Discussion

Based on 32 years of data from Jiangsu Province, this study estimates, for the first time, the influence of age, period, and cohort on the incidence of HBV. It found that age has the greatest influence on the incidence of HBV, with people aged 15–30 and over 60 being the high-risk groups. Taking reasonable control measures to prevent the occurrence and transmission of HBV and to reduce the harm it causes is therefore of great significance.

Figure 5. (A, B) Joinpoint regression models of the standardized HBV incidence rate for all male and female age groups in Jiangsu Province, respectively; (C, E, G) Joinpoint regression models of the standardized HBV incidence rate for males grouped by the clustered age groups; (D, F, H) the corresponding models for females. (Note: the panel order is consistent with the clustering order above.)

Epidemiological characteristics

The advent of the HBV vaccine and the implementation of the national expanded immunization program have greatly reduced HBV infection in the population (Stasi et al., 2016; Walayat et al., 2015). In 2003, Jiangsu Province began to implement the strategy of neonatal HBV vaccination comprehensively. The incidence of HBV among 0–10-year-olds was higher before 2003 than after 2003, indicating that neonatal HBV vaccination played a great role in reducing HBV infection in children. Before 2003, the incidence rate in children and young people was high; after 2003, the incidence rate in the elderly increased gradually and became dominant. Because vaccination-at-birth policies had not been implemented when today's older adults were born, most of them may not have been vaccinated against HBV, and as they age and their resistance weakens, they become a susceptible population for HBV. Therefore, revaccinating this population against HBV to reduce the risk of infection is considered worthwhile.

Figure 6. (A) Local drift and net drift of HBV incidence with 95% confidence intervals; (B) longitudinal age-specific incidence of HBV with 95% confidence intervals; (C) cross-sectional age-specific incidence of HBV with 95% confidence intervals; (D) period rate ratios of HBV incidence with 95% confidence intervals; (E) cohort rate ratios of HBV incidence with 95% confidence intervals.

Applying statistical models to the analysis of infectious diseases is important for policy adjustment and disease prevention and control. In the Joinpoint regression model, the standardized incidence of hepatitis B among males of all age groups decreased by 1.4% per year from 1990 to 2006, and by 9.91% per year from 2006 to 2012. The standardized incidence of hepatitis B among females of all age groups decreased by 6.18% per year from 1990 to 2001 and by 7.09% per year from 2004 to 2010. This may be related to the free hepatitis B vaccination of newborns in Jiangsu Province in 2005 and the catch-up hepatitis B vaccination of people under 15 years old in 2009. The implementation of these policies greatly accelerated the decline in hepatitis B incidence, especially for males. For the male population, the standardized incidence trend across the whole age range is very similar to that of the 15–54-year-old population.
In each period, the average standardized incidence of the 15–54-year-old population was higher than that of the other cluster groups, which indicates that this age group has a great influence on the incidence of HBV across all ages. This group consists mostly of young and middle-aged people, whose wide-ranging and varied activities greatly increase the risk of HBV infection. These people are mainly infected through blood, sexual contact, and close contact, so strengthening interventions against these modes of transmission can effectively reduce HBV infection (Lavanchy, 2005; Thompson et al., 2021). However, while the standardized incidence of this part of the population is decreasing, the incidence among people aged 50–85 and above is gradually increasing. This group is mainly middle-aged and elderly, with the elderly accounting for a large proportion, mainly because the people once aged 15–54 have grown older with time and most of them were never vaccinated against HBV. It is therefore clear that everyone should be vaccinated against HBV, and vaccinating those who have not yet been vaccinated is an effective measure.

In 1990–2001, the incidence trend of female HBV was mainly determined by the 0–49 age group, consisting mainly of children, young people, and middle-aged people; because most of them had not been vaccinated at the time and their activities were diverse, they were more likely to be infected with HBV. In 2004–2010, the incidence trend of HBV in females was mainly determined by the 15–24 age group. This group was in adolescence or had just entered society, found it harder to resist risky temptations, and was more likely to engage in dangerous behaviors, making it a high-incidence group for HBV (Darmawan et al., 2015).

Age-period-cohort effect model

When building the APC model, to examine whether COVID-19 would affect the model results, we removed the data from the COVID-19 period and rebuilt the APC model. The results show that the occurrence of COVID-19 had little influence on the model parameters, so the APC model was constructed using the HBV incidence data from 1992 to 2021 (Supplementary material). In the APC model, the net drift for both males and females was below 0 and statistically significant, which indicates that the incidence of HBV among males and females in Jiangsu Province is decreasing year by year; this is closely related to the improvement of economic, medical, and health conditions, and especially to HBV vaccination (Das et al., 2019; Udomkarnjananun et al., 2020). After the age of 50, the local drift of the female population is greater than 0, and that of the male population becomes greater than 0 around the age of 60, with both reaching their highest values at the age of 80. This may be due to the accumulation of HBV in the body, decreasing immunity, and the gradual decline of the vaccination effect with increasing age, which lead to HBV infection in these people. From 1992 to 2021, the longitudinal and cross-sectional age-specific incidence of HBV in Jiangsu Province shows that the incidence in both males and females increases rapidly and peaks at ages 15–30, which indicates that this age group is a high-risk group for HBV.
Many people in this age group have unhealthy living habits, such as multiple sexual partners among young males and females, same-sex sexual behavior, smoking and drinking due to excessive stress, and even drug use in some cases, all of which are risk factors for HBV (Akman et al., 2010; Kuo et al., 2004; Yin et al., 2013). In addition, the study shows that the incidence of HBV in males is higher than that in females, so we should pay more attention to this group, especially males. For example, regular HBV screening in bars, karaoke venues, and other high-risk settings can effectively control the occurrence of HBV. At the same time, people in this age group should also pay attention to self-protection: avoid intimate contact with strangers, avoid blood exposure, check their HBV antibodies regularly, and avoid becoming infected with HBV (Funk et al., 2021).

The period effect takes June 2004 as the reference (rate ratio = 1). This may be due to the full implementation of HBV vaccination for newborns in Jiangsu Province from 2003. After 2005, the incidence rate ratios for males and females were less than 1; before that, they were greater than 1. This may be related to the free HBV vaccine policies implemented in 2003 and 2005, because vaccination reduces the population's susceptibility to HBV. The rate ratio was highest in 1995, and that of females was higher than that of males. After 2015, the rate ratio increased and showed an upward trend, especially for females, which deserves attention. With the outbreak of the COVID-19 epidemic in late 2019, attention may have shifted away from other diseases toward COVID-19, so Jiangsu still needs to strengthen the monitoring and control of HBV.

The cohort effect shows that the incidence rate ratio of people born before 1962 increased with birth year. At that time, people lived through poverty, famine, and war, and medical conditions were relatively backward, making disease epidemics more likely. After 1962, the incidence rate ratio gradually decreased, with the greatest decrease for those born around 1992. This may be because the Ministry of Health of China included the HBV vaccine in the management of children's planned immunization in 1992; people born in this period therefore had the opportunity to receive the HBV vaccine, resulting in a significant decrease in the incidence rate ratio of this birth cohort. Later, with the gradual improvement of policy, living standards, and medical conditions, the incidence rate ratio of HBV in successive birth cohorts showed a year-by-year downward trend (Gao et al., 2020; Ji et al., 2019).

In general, the implementation of the national expanded immunization program had a large impact on the APC model results. First, both the longitudinal and the cross-sectional age-specific incidence rates showed a decreasing trend over ages 0–10 for both males and females after the implementation of the national expanded immunization program, which may be due to the protective effect of the HBV vaccine. Second, the incidence rate ratio of the period effect was greater than 1 before the implementation of the national expanded immunization program, while it was less than 1 afterwards, indicating that the program had a significant effect on the incidence of HBV.
In the cohort effect, the incidence rate ratio of the birth cohorts vaccinated against HBV was lower than that of the unvaccinated cohorts, and the subsequent values also remained consistently lower than those of the pre-vaccination cohorts, indicating that the national expanded immunization program influenced the change in the incidence rate ratio.

This study also has some limitations. As the APC model requires the age-group width of the data to equal the period-group width, the single-year incidence data for ages 0 to 9 were combined into 5-year age groups of 0–4 and 5–9 years. At the same time, the data for 1990 and 1991 were excluded from the model, which inevitably discards some information. In addition, in this study the data were modeled on the basis of their own characteristics, without considering the effects of climate, season, temperature, and other factors on HBV. It may be more scientific and accurate to add these variables to the model in the future when analyzing the incidence of HBV.

Conclusions

Generally, the incidence of HBV in Jiangsu Province showed a gradual decline from 1990 to 2021, and the incidence in males was higher than that in females. The incidence rate among people aged 15–30 was higher and increased rapidly; another high-risk group was people aged over 60. More targeted prevention and control measures should be taken for males and for these high-risk groups.

Funding

This study was partly supported by the Fundamental Research Funds for the Central Universities (20720230001), the Self-supporting Program of Guangzhou Laboratory (SRPG22-007), and the Research Project on Education and Teaching Reform of Undergraduate Universities of Fujian Province (FBJG20210260).
2023-07-12T05:25:44.828Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "f699f1c4fc72ccb38ec50f1840c477a4af2c7daa", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.idm.2023.07.007", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3b046543fdc873acf0a9add16b3b15ba2dd6a2c0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
232774490
pes2o/s2orc
v3-fos-license
Research for the Optimal Flux-Cored Arc Welding Process of 9% Nickel Steel Using Multi-Objective Optimization with Solidification Crack Susceptibility

The environment of the global shipbuilding market is changing rapidly. Recently, the International Maritime Organization (IMO) has tightened regulations on the sulfur oxide content of marine fuels and tightened sulfur oxide emission standards for the entire coastal region of China, and as a result the number of vessels operating on LNG fuel is increasing significantly. To use cryogenic LNG fuel, various pieces of equipment, such as storage tanks and valves, are required, along with equipment made of steel that retains excellent impact toughness in cryogenic environments. Four steel types are specified in the IGC Code, and 9% Ni steel is mostly used for LNG fuel equipment. However, to secure safety at cryogenic temperatures, a systematic study investigating the causes of quality deterioration in the 9% Ni steel welding process is required, and a discriminant function capable of quality evaluation is urgently needed. Therefore, this study proposes a way to secure uniform quality of 9% nickel steel by reviewing the tendency of the solidification crack susceptibility, one of the main quality problems of cryogenic steel, in order to establish criteria for quality deterioration and to develop a system capable of quality discrimination and defect avoidance.

Introduction

Recently, there has been growing interest in preventing air pollution around the world and, as a result, the International Maritime Organization (IMO) is tightening regulations on marine sulfur oxides (SOX) and nitrogen oxides (NOX). In 2015, the emission concentration of sulfur oxides in the ECA (Emission Control Area) had already been strictly reduced from 1% to 0.1%, and the emission concentration of sulfur oxides on the high seas was expected to be reduced from 3.5% to 0.5% from 2020. Liquefied natural gas (LNG) is the fuel best placed to comply with the regulations in such a situation and is currently evaluated as the only ship fuel that can meet the emission-gas environmental regulations. Due to the characteristics of an LNG storage tank, which is a key facility in the LNG industry, there is no concept of overhaul once operation has started under the current laws and standards; it therefore has the advantage of continuous operation over its lifetime. In Korea, LNG fuel is stably supplied according to KGS AC 115 (standards for facility, technology, and inspection of LNG storage tank manufacturing) [1-4]. An LNG storage tank is mostly made of 9% Ni steel, whose strength level is classified as high-tensile steel; it is used in applications at temperatures below −170 °C after QT (quenching and tempering) heat treatment. The 9% Ni steel is used for LNG tank production, as it has high impact toughness under cryogenic conditions and is inexpensive.

Experimental Works

In the experiment, a 600 A class FCAW welding machine (ProPAC, HYOSUNG, Mapo-gu, Seoul, Korea) with a torch, welding feeder, straight welding carriage, and guide rail was configured. Ethyl alcohol (DUKSAN, Ansan-si, Gyeonggi-do, Korea) was used for cleaning the specimens, and sandpaper was applied for the same reason, since rust or oxide on the surface can cause welding defects. Figure 1 shows the experimental setup and the schematic diagram of the flux-cored arc welding process.
The test pieces used in the welding test were 9% Ni steel plates of 150 mm (W) × 200 mm (L) × 15 mm (t). The chemical compositions of the 9% Ni steel and the welding wire used in the test are shown in Table 1, and their mechanical properties are shown in Table 2. The information on the 9% Ni steel and welding wire was taken from public data issued by the manufacturers.

As the input variables of the flux-cored arc welding applied in this experiment, the welding current, arc voltage, and welding speed were selected. These variables have a clear influence on the shape and weldability of a GMA weldment. Mechanical characteristics such as the bead shape, hardness, impact toughness, weld-metal composition, and fracture surface were selected as output variables for the weldability analysis. Figure 2 shows the bead shape of a weldment [19].

In this experiment, a full factorial design (FFD) was used, which can estimate all factor effects of the output-variable responses to changes in the input variables and can detect high-order interaction effects. A full factorial design is a general k^n factorial DOE with n factors at k levels, in which the experiments are designed as combinations of the levels of all factors; therefore, even without repeated experiments, k^n runs must be performed. A factorial experiment arranged in this way has the advantage of allowing all factor effects (main effects and interactions) to be estimated. The appropriate levels and ranges of the input variables (welding current, arc voltage, and welding speed) were selected through preliminary experiments. Three different values of the welding current and three different values of the arc voltage were used, while two different values were used for the welding speed, so the total number of experiments is 18 (3^2 × 2 = 18). The levels of the input variables and the experimental conditions are shown in Tables 3 and 4.
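The 18 runs of such a design can be enumerated directly, as sketched below; the end points follow the variable ranges stated later (150–170 A, 21–25 V, 0.3–0.4 m/min), while the intermediate levels of 160 A and 23 V are assumptions for illustration.

```python
from itertools import product

# Factor levels for the 3^2 x 2 full factorial design (18 runs).
# End points come from the stated ranges; 160 A and 23 V are assumed
# intermediate levels, not values confirmed by Tables 3 and 4.
currents = [150, 160, 170]   # welding current [A]
voltages = [21, 23, 25]      # arc voltage [V]
speeds = [0.3, 0.4]          # welding speed [m/min]

runs = list(product(currents, voltages, speeds))
assert len(runs) == 18       # 3 x 3 x 2 combinations
for i, (c, v, s) in enumerate(runs, start=1):
    print(f"Test {i:2d}: {c} A, {v} V, {s} m/min")
```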
The appropriate level and range of input variables (welding current, arc voltage, welding speed) were selected through preliminary experiments. Three different values of the welding current and two different values for the arc voltage were used, while for the welding speed, two different values were used, so the number of total experiments is 18 (3 2 × 2 = 18). The levels of the input variables and the experimental conditions are shown in Tables 3 and 4. Tables 3 and 4. Bead Geometry The BOP (bead on plate) welding test was performed. To properly represent the cross-section of the test piece, a solution containing 90% ethanol and 10% nitric was used for etching the cross-section part and an optical microscope system was used for accurate bead shape measurement. Table 5 shows the weld cross-section and the bead shape taken with a 10x optical microscope. Top-bead geometry was measured according to Figure 2, and the measurement precision of the optical microscope used was 0.0001 mm. Bead Geometry The BOP (bead on plate) welding test was performed. To properly represent the cross-section of the test piece, a solution containing 90% ethanol and 10% nitric was used for etching the cross-section part and an optical microscope system was used for accurate bead shape measurement. Table 5 shows the weld cross-section and the bead shape taken with a 10× optical microscope. Top-bead geometry was measured according to Figure 2, and the measurement precision of the optical microscope used was 0.0001 mm. Measurement of Hardness A strength decrease in the welded area when the flux-cored arc welding was solidified can be checked with a hardness test and the impurities came upward due to the difference of density. As the upper part was vulnerable to the hardness because the impurities floated, in the upper part, the Vickers hardness test was applied. 17 12 Measurement of Hardness A strength decrease in the welded area when the flux-cored arc welding was solidified can be checked with a hardness test and the impurities came upward due to the difference of density. As the upper part was vulnerable to the hardness because the impurities floated, in the upper part, the Vickers hardness test was applied. 18 13 Measurement of Hardness A strength decrease in the welded area when the flux-cored arc welding was solidified can be checked with a hardness test and the impurities came upward due to the difference of density. As the upper part was vulnerable to the hardness because the impurities floated, in the upper part, the Vickers hardness test was applied. Measurement of Hardness A strength decrease in the welded area when the flux-cored arc welding was solidified can be checked with a hardness test and the impurities came upward due to the difference of density. As the upper part was vulnerable to the hardness because the impurities floated, in the upper part, the Vickers hardness test was applied. The load was 0.5 N and the intervals were 0.83 mm, which was for not affecting other measures. Figures 3 and 4 show the tester and the points of the hardness test, and Table 6 shows the results of the upper part and the heat affected part. The upper hardness of a flux-cored arc weldment was between 250.1 and 262.6 Hv, which is considered to have sufficient weldability because the hardness is higher than the hardness 243 Hv, which is a standard of 9% Ni steel. Materials 2021, 14, x FOR PEER REVIEW The load was 0.5 N and the intervals were 0.83 mm, which was for not affec measures. 
Figures 3 and 4 show the tester and the points of the hardness test, an shows the results of the upper part and the heat affected part. The upper hard flux-cored arc weldment was between 250.1 and 262.6 Hv, which is considere sufficient weldability because the hardness is higher than the hardness 243 Hv a standard of 9% Ni steel. The load was 0.5 N and the intervals were 0.83 mm, which was for not affecting othe measures. Figures 3 and 4 show the tester and the points of the hardness test, and Table shows the results of the upper part and the heat affected part. The upper hardness of flux-cored arc weldment was between 250.1 and 262.6 Hv, which is considered to hav sufficient weldability because the hardness is higher than the hardness 243 Hv, which a standard of 9% Ni steel. Measurement of Chemical Composition after Welding To measure the impurities of Ti, Nb, Mo, and Si components that affect the crack susceptibility on the penetration and weld surface of the welding test piece, and to check the tendency of impurity grain boundaries that change according to the welding process variables, EDS was measured by dividing sections into nine points in Figure 5. In order to analyze the combination of various alloying elements and compositions, the influence of alloying elements on the microstructure and mechanical properties was analyzed. The FE-ESEM equipment shown in Figure 6 was used. The location of the component analysis was selected in consideration of the fact that it rises to the top due to the difference in density during the solidification process, and Figure 7 shows the grain boundaries of the upper impurities. Table 7 shows the average value of analysis for the four components, i.e., Ti, Nb, Mo, and Si. Measurement of Chemical Composition after Welding To measure the impurities of Ti, Nb, Mo, and Si components that affect the crack susceptibility on the penetration and weld surface of the welding test piece, and to check the tendency of impurity grain boundaries that change according to the welding process variables, EDS was measured by dividing sections into nine points in Figure 5. In order to analyze the combination of various alloying elements and compositions, the influence of alloying elements on the microstructure and mechanical properties was analyzed. The FE-ESEM equipment shown in Figure 6 was used. The location of the component analysis was selected in consideration of the fact that it rises to the top due to the difference in density during the solidification process, and Figure 7 shows the grain boundaries of the upper impurities. Table 7 shows the average value of analysis for the four components, i.e., Ti, Nb, Mo, and Si. Measurement of Chemical Composition after Welding To measure the impurities of Ti, Nb, Mo, and Si components that affect the crack susceptibility on the penetration and weld surface of the welding test piece, and to check the tendency of impurity grain boundaries that change according to the welding process variables, EDS was measured by dividing sections into nine points in Figure 5. In order to analyze the combination of various alloying elements and compositions, the influence of alloying elements on the microstructure and mechanical properties was analyzed. The FE-ESEM equipment shown in Figure 6 was used. The location of the component analysis was selected in consideration of the fact that it rises to the top due to the difference in density during the solidification process, and Figure 7 shows the grain boundaries of the upper impurities. 
Table 7 shows the average measured values of the four components, i.e., Ti, Nb, Mo, and Si.

Solidification Crack Susceptibility

A nickel-based alloy has an austenitic structure and is prone to solidification cracking; thus, controlling solidification cracking during the welding of 9% Ni steel is a critical issue. The welding process variables are the main factors governing the resistance to solidification cracking, and cracks are more likely to occur at a higher welding current or operating ratio. Nakao reviewed the solidification crack susceptibility of nickel-based alloys in fusion welding and formulated the correlation between crack susceptibility and impurity elements as the solidification crack susceptibility index (P_SC) of Equation (1) [20]:

P_SC = 69.2Ti + 27.3Nb + 9.7Mo + 300Si − 55.3 (1)

Because flux-cored arc welding is a type of fusion welding, P_SC was used to investigate the solidification crack susceptibility of 9% Ni steel; for the evaluation, the solidification crack susceptibility was calculated with Equation (1). One purpose of this research was to confirm that the hardness of the upper weldment diminishes through grain-boundary relaxation when the crack susceptibility increases, and to define criteria for crack susceptibility. P_SC ranged between 148.7 and 153.0, and the hardness of the upper weldment was found to be stable when P_SC was 150.6 or less, as shown in Figure 8. P_SC can therefore be used as an evaluation index, with 150.6 as the threshold in this research: a higher score indicates possible crack susceptibility in the upper weldment. This standardized score can be used to characterize the drop in grain-boundary strength caused by crack susceptibility and can help prevent micro-cracking along impurity grain boundaries in a 9% Ni steel flux-cored arc weldment (Table 8).
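For instance, substituting a weld-metal composition (in wt.%) into Equation (1) gives the index directly; the composition below is hypothetical, chosen only to land near the 150.6 threshold, and is not a measured value from Table 7.

```python
def psc(ti, nb, mo, si):
    """Solidification crack susceptibility index, Equation (1) (Nakao)."""
    return 69.2 * ti + 27.3 * nb + 9.7 * mo + 300 * si - 55.3

# Hypothetical composition in wt.%: lands just above the 150.6 threshold
value = psc(ti=0.03, nb=3.5, mo=9.0, si=0.07)
print(f"Psc = {value:.1f} -> {'unstable' if value > 150.6 else 'stable'}")
```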
Discriminant Analysis
To discriminate the solidification crack susceptibility of flux-cored arc welding for 9% Ni steel, a discrimination model based on learning from the experimental data was developed and used as an estimation model [21][22][23]. The solidification crack susceptibility discrimination system is based on the SVM (Support Vector Machine) technique, which determines the solidification cracking tendency by finding a hyperplane that maximizes the margin between two linearly separable classes, based on Equation (2) in the Vapnik-Chervonenkis theory [24]. The variables for learning in the discrimination model are the welding process (welding current, arc voltage, and welding speed), bead shape (top-bead width, top-bead height), hardness (upper part, heat-affected zone), and solidification crack susceptibility. One hundred sixty-two cases were used as input data with these nine variables. The Unstable Group, in terms of the solidification crack susceptibility, was labeled 1, and the other was labeled 0. Table 9 shows the learning data, and Table 10 shows the difference between the measured result and the predicted result. Figure 9 shows the performance of the discrimination model.

Mathematical Model for Optimization
To optimize the flux-cored arc welding process, the interaction formula among the input variables and the objective function value was defined. The response surface method is known to be suitable for cases with multiple input variables, so it was applied in this research, as in the previous research related to fiber laser welding [25]. The method of calculating the estimated values of β0 and β1 that minimize the sum of squares of the residuals e, i.e., the deviations between the observed values Y and the estimated values Ŷi, is called the method of least squares. That is, if the sum of squares of the residuals is S, then partially differentiating S with respect to the coefficients and setting the derivatives to zero yields the least-squares estimate β̂. The functional relationship between the input variables x1, x2, x3, ..., xk and the output variable y is expressed in Equation (3). This research also used the second-order regression model, as shown in Equation (4). By the least squares method, Equation (4) is replaced by Equation (5). When the number of input variables is 3, k is 3 and Equation (5) changes to Equation (6). For efficient data acquisition, a full factorial design suited to the second-order regression model was applied. The coefficient of each term was obtained with Minitab. With the above theories, the prediction models of bead shape (top-bead width, top-bead height), hardness (upper part, HAZ), and solidification crack susceptibility were expressed as Equation (7) to Equation (11). To confirm the consistency of the prediction models, Figure 10 shows the error range obtained by comparing the average values of the measured welding factors and the predicted welding factors. The prediction model error range is generally reliable, as shown in Table 11. Besides, the result of variance analysis of the prediction model confirmed 98.9% for the top-bead width and 73.0% for the upper hardness of the weldment, which means that the interaction of the input variables is also considered.
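As a concrete illustration of the least-squares fit of the second-order model in Equations (4)-(6), the sketch below builds the full quadratic design matrix for the three inputs (C, V, S) and solves for the coefficients with numpy. The data arrays are hypothetical stand-ins for the 162 measured cases; in the study itself the coefficients were obtained with Minitab, as noted above.

```python
import numpy as np

def quadratic_design(X):
    """Design matrix for the full second-order model with k = 3 inputs:
    1, x1, x2, x3, x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3 (Equation (6))."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1 ** 2, x2 ** 2, x3 ** 2,
        x1 * x2, x1 * x3, x2 * x3,
    ])

# Hypothetical (current A, voltage V, speed m/min) -> top-bead width (mm).
X = np.array([[150, 21, 0.30], [160, 23, 0.35], [170, 25, 0.40],
              [150, 25, 0.40], [170, 21, 0.30], [160, 21, 0.40],
              [150, 23, 0.35], [170, 23, 0.30], [160, 25, 0.30],
              [155, 22, 0.35], [165, 24, 0.35], [160, 23, 0.30]])
y = np.array([8.1, 8.9, 9.6, 8.5, 9.2, 8.4, 8.3, 9.4, 9.0, 8.6, 9.1, 8.8])

# Least-squares estimate minimising S, the sum of squared residuals.
beta_hat, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
print(beta_hat)
```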
Optimization of the Welding Process
In this research, the multi-objective optimization (MOO) algorithm is applied, which is known to be suitable for solving optimization problems with multiple purposes [26][27][28]. As the previous research related to the optimization of the fiber laser welding process used that algorithm for optimization and described the technique [25], this article omits the details. In short, the technique imitates the evolution process in an ecosystem, and the weighted sum method was used for solving the multi-objective problem. The MOO algorithm is described in Figure 11, and MATLAB was used. To optimize the welding process variables, the 162 data points described in Table 9 were used. The variables and levels for the MOO algorithm are shown in Table 12. The range of the flux-cored arc welding process variables was selected from the minimum (150 A, 21 V, 0.3 m/min) to the maximum (170 A, 25 V, 0.4 m/min). The aim was to analyze a multi-purpose optimization problem that considers the solidification crack susceptibility as a standard to assess the quality deterioration characteristics after welding. The objective function is a mathematical model of the system characteristics, and its constraints represent the conditions that the system variables can have. Therefore, Equation (12), Equation (13), and Equation (14), respectively, show the objective function f(x) of an arbitrary system having x as a variable and the constraints and ranges required to optimize the function [29].

Optimize f(C, V, S) (12)
g(C, V, S) (13)
P_SC < 150.6 (14)

The cases where the solidification crack susceptibility occurred were selected for verifying the MOO algorithm. The solidification crack susceptibility occurred in Tests 2, 6, and 14, and the improvement of the welding process through the optimization algorithm was checked. Table 13 shows the improvement obtained by changing the variables C, V, and S, and also shows that Psc is lower than 150.6 in each case. Figure 12 shows the attempt to confirm the solidification crack susceptibility by applying the improved input variables. It was confirmed that all points selected in the flux-cored arc welding process satisfy the solidification crack susceptibility limit condition of 150.6 or less. Moreover, the quality deterioration characteristics that appeared with the existing process variables are improved by the modified variables.
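A minimal way to reproduce the optimization step of Equations (12)-(14) is a weighted-sum search over the process window with the Psc constraint enforced, as sketched below. The surrogate prediction functions and the objective weights are placeholders: in the study itself the fitted prediction models (7)-(11) and a MATLAB MOO algorithm were used.

```python
import itertools

# Process window from Table 12: current (A), voltage (V), speed (m/min).
CURRENTS = [150, 155, 160, 165, 170]
VOLTAGES = [21.0, 22.0, 23.0, 24.0, 25.0]
SPEEDS = [0.30, 0.35, 0.40]

def predict_psc(c, v, s):
    """Placeholder linear surrogate for the Psc prediction model; the
    real coefficients come from the response surface fit."""
    return 140.0 + 0.04 * (c - 150) + 0.9 * (v - 21) - 8.0 * (s - 0.30)

def predict_width_error(c, v, s, target=9.0):
    """Placeholder surrogate for the top-bead-width deviation from a
    hypothetical target width."""
    width = 5.0 + 0.02 * c + 0.05 * v - 2.0 * s
    return abs(width - target)

best = None
for c, v, s in itertools.product(CURRENTS, VOLTAGES, SPEEDS):
    if not predict_psc(c, v, s) < 150.6:   # constraint: Psc < 150.6
        continue                           # infeasible point, skip it
    # Weighted sum of the (normalized) objectives, Equation (12) style.
    score = 0.5 * predict_width_error(c, v, s) + 0.5 * predict_psc(c, v, s) / 150.6
    if best is None or score < best[0]:
        best = (score, c, v, s)

print("best feasible setting (C, V, S):", best[1:])
```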
Conclusions
The following objectives were attempted in this study: to optimize the FCAW process for 9% Ni steel used in cryogenic conditions; to establish the criteria for the solidification crack susceptibility in the welding process; to develop learning in the discrimination function; and to optimize the variables that cause solidification crack susceptibility. Thus, the following results were obtained. (1) Appropriate weldability was checked by measuring the bead shape, mechanical strength, and chemical composition. The solidification crack susceptibility index was suggested as a standard of welding quality; when that index is 150.6 or more, it is difficult to secure a stable upper hardness. (2) To determine the solidification crack, the SVM technique was used to check whether it can accurately identify a group where quality deterioration occurs. The accuracy of the prediction model was checked and verified. (3) A prediction model based on the response surface method was suggested and applied to the optimization method. A multi-objective optimization algorithm was also used and verified.
GENDER AND VICTIMIZATION IN MARGARET ATWOOD'S SURFACING

Margaret Atwood's Surfacing (1972), a contemporary classic nowadays, has raised the interest of all kinds of critics. Some of the most remarkable elements in the novel concern feminism, a movement to which the Canadian author has been highly committed. This paper deals with two specific aspects of Atwood's work in relation to the aforementioned critical approach: gender and victimization. A thorough reading of the novel is thus done in order to detect and subsequently dissect the main instances of both aspects. Special attention is paid to the female characters (Anna and the unnamed protagonist), hypersexualized and victimized in the patriarchal microcosms rendered in the story.

Introduction
Surfacing (1972) is the second novel by the Canadian writer Margaret Atwood. It narrates the story of an unnamed woman who goes, with her boyfriend Joe and another couple (David and Anna), to her homeland in Canada (Northern Quebec) to investigate the mysterious disappearance of her father, of which she has been informed through a letter sent by Paul, a friend from the place. What follows is a journey of self-discovery: the unnamed protagonist (UP hereafter, for mere communicative pragmatism) immerses herself in a process of remembrance through which her past and present intertwine. Whereas in the book Atwood reflects upon a variety of topics (identity, language, memory, imagination and hallucination, or human and animal life, to mention but a few of them), this paper will be exclusively focused on gender issues, certainly one of the fundamental aspects in the construction of the novel. In particular, special attention will be paid to the sense of victimization on the part of the two female characters of the story: UP and Anna, different women who are somehow connected in the plot because of their female condition. Since a close reading of the novel has been done to write this essay, passages from the book will be frequently quoted in order to illustrate the ideas covered; this will be the main point of reference for the elaboration of the work. In a complementary way, we will also occasionally include some specific academic references that may corroborate our thesis or, simply, add relevant information to the topic of the paper. Specifically, the main bibliographical sources used belong to the field of gender studies and offer a feminist approach to Atwood's novel, with special emphasis on the notion of victimization and on its relation with nature (that is, ecofeminism) through the characters and elements shaping the work. In this way, the current article intends to revise such aspects and expand on them by doing, as advocated by New Criticism, a close reading of the text. This is precisely where the originality of our paper may mostly lie. When the fragments quoted correspond to Atwood's book, we will simply specify between brackets the number of the page(s) from which each quotation has been extracted.

Preliminary notes on gender issues in the novel
The course of events in Surfacing is intelligently unfolded by Margaret Atwood in such a way that the sense of female subalternity (adapting Antonio Gramsci's [2005] wider term) is gradually perceived in a more conspicuous manner. Already at the beginning of chapter 1, UP is in David's car ("a lumbering monster," [4] as she labels it) on her way to Quebec and remarks: "David says they can't afford a newer one, which probably isn't true.
He's a good driver, I realize that, I keep my outside hand on the door in spite of it" (4). At this early point of the story, readers may wonder if they are reading the words of an overly dubious narrator or those of a victimized woman in a precarious position; throughout the novel we notice it is the latter. Needless to say, to reach such a conclusion we have to rely on the perception of an unnamed narrator who seems to suffer from hallucination. Although we consider that her voice is perfectly reliable regarding the description of factual events (past and present) and we will develop our thesis accordingly, we believe that mentioning it is not pointless: the novel is written in the first person, which inevitably entails a series of specificities and narrative limitations. Reductio ad absurdum or not, the reader is never told explicitly that UP is a woman; the synopsis on the back cover of the book aside, we must infer it from the way she interacts with the other characters and from some particularly clarifying reflections on marriage, school life or men's tastes, such as: "My status is a problem, they obviously think I'm married. But I'm safe, I'm wearing my ring, I never threw it out, it's useful for landladies" (24); "it was worse for a girl to ask questions than for a boy" (124); "men's magazines were about pleasure, cars and women" (143). By extension, it also seems quite congruent that the book is the work of a female writer; would a man have been able to convey the same feelings of the protagonist in the way Margaret Atwood does?1 Evidently enough, the fact that the protagonist is a woman is paramount in shaping the narrative mood of the novel and its subsequent feminist interpretations: the whole story is filtered through her perception of reality as a woman. To give but one example, if the story were narrated by David, the scene in which Anna is forced to take off her clothes would obviously still be an abuse, but it would probably be approached as 'mere fun' -or similar- on the part of the narrator. In other words, whereas the facts in the story remain the same regardless of the narrating voice, the process of filtering and analyzing them for a feminist study such as this is different and, probably, shorter when we are told the story by a female character. The voices of both the first-person narrator and of Margaret Atwood herself strengthen and facilitate dealing with the book from a female perspective. Notwithstanding that a biographical reading of the novel is not our main aim, we consider it pertinent to mention here that the thoughts and insight of the author and the protagonist may intertwine on more than one occasion.

1 Most likely, cross-gender writing requires a higher degree of reason and objectivity on the part of writers, who cannot transfigure their own experiences in their fictional alter ego in the way they would do with a character of their same sex. As psychologist Dr. Vivian Diller (quoted in Willens) explains, "Authors who write about their own gender use their internal experience and speak from the inside out. When they write about the opposite sex, their perspective has to shift, from the outside in. Neither is necessarily better but rather they try different points of view." This obviously does not necessarily mean that a female author cannot shape a complex male character or vice versa (Gustave Flaubert's Madame Bovary [1857] or León Tolstói's Anna Karénina [1877] are two eloquent instances of it); it is simply, in any case, a tendency worth considering.
One year after the publication of Surfacing, in 1973, Margaret Atwood divorced from her first husband; the novel -maybe as a correlative fiction- is brimming with reflections on marriage and divorce, two of the elements from the past of the protagonist that linger on her present and that most clearly configure her distress and victim status (which we will explore more deeply in the next section). In chapter 3, UP brings to mind her marriage and divorce for the first time, when thinking of her parents: "they never forgave me, they didn't understand the divorce; I don't think they even understood the marriage, which wasn't surprising since I didn't understand it myself" (32). Later on, she remembers her ex-husband from the point of view of failure: "It was good at first but he changed after I married him, he married me, we committed that paper act" (46). Concurrently, a sentiment of forced pregnancy is noticeable in her words: "It was my husband's [child], he imposed it on me, all the time it was growing in me I felt like an incubator" (38-39); or "He wanted a child, that's normal, he wanted us to be married" (56). In other cases, the wedding ring is even depicted as a symbol of oppression: "I wore his ring, too big for any of my fingers, around my neck on a chain, like a crucifix or a military decoration" (62). All these somehow repressed feelings will result in the character's reluctance to become engaged again, which will trigger the arguments between her and Joe in the second part of the novel. The following passage illustrates how her former marriage still haunts her in a powerful way:

'Look', I said, 'I've been married before and it didn't work out. I had a baby too'. My ace, voice patient. 'I don't want to go through that again.' It was true, but the words were coming out of me like the mechanical words from a talking doll, the kind with the pull tape at the back; the whole speech was unwinding, everything in order, a spool. I would always be able to say what I'd just finished saying: I've tried and failed, I'm inoculated, exempt, classified as wounded. It wasn't that I didn't suffer, I was conscientious about that, that's what qualified me. But marriage was like playing Monopoly or doing crossword puzzles, either your mind worked that way, like Anna's, or it didn't; and I'd proved mine didn't. A small neutral country.

The ideas about marriage and giving birth appear throughout the whole novel and, especially, in the first part, where we find one of Atwood's most famous quotations: "A divorce is like an amputation, you survive but there's less of you" (49). Judging by the author's own sentimental situation when writing the book, it should thus be no surprise that, albeit not exactly an alter ego, UP is used by Atwood to channel some of her own thoughts, perhaps particularly intense at that time. Critics have indeed celebrated Surfacing, as Fiona Tolan (35) notes, "as the work that most closely associates Atwood's novel writing to her poetry." The writer being a woman is definitely an important aspect to be taken into consideration when analyzing the feminist framework of the novel.

Victimization in female characters
As has been already remarked, there is more than one element conveying a sense of victimization in Surfacing; namely, the pervasive American politics and way of life (specifically, affecting nature and Canadian identity) and the dominance of an overly patriarchal culture.
In this regard, Emily Denommé (2) alludes to a double sense of victimization: "The journey of Atwood's narrator highlights the problematic groupings that her society demands in terms of nationality and gender. Under these categories, the narrator is doubly victimized as a Canadian and as a woman". While it is convenient to keep the former in mind too, we will henceforth refer almost exclusively to the latter: the victimization of women as the result of masculine dominance. The sense of gender victimization in the novel is, in turn, split between the two main female characters of the story, Anna and UP (the two other women characters being Madame -Paul's wife- and a random clerk from a store, both of whom appear only in the first chapters). Interestingly enough, although both of them are arguably victims of patriarchal society in general and of their respective relationships with males in particular, their attitudes towards the situation differ significantly: whereas Anna seems to accept her position willingly and gives a rather submissive image of herself at certain moments, UP has been distraught by gender problems since her childhood (at school and even with family) and is unable to engage with Joe because of it. This being the case, we have opted for analyzing the role of each of them separately, for the sake of clarity.

Anna
In the same way the story unfolds gradually, the actual victim role of Anna in relation to her husband, David, is discovered throughout the chapters. Margaret Atwood is, again, very skillful in revealing it in a gradual manner. At the beginning, the reader may easily get the impression that they make a happy and healthy couple but, nothing could be further from the truth: Anna undoubtedly suffers abuse at the hands of her husband. One of the first instances is found in chapter 4, when David asks someone to bring him a beer and "Anna brings him one and he pats her on the rear and says 'That's what I like, service'" (41). At this point, it is already surprising that she does it automatically, as though she were a robot at his beck and call; all the same, the abuse goes in crescendo. In the next chapter, UP sees Anna putting on makeup and realizes that she has never seen her without it. When asked about the reason by the narrator, "Anna says in a low voice, 'He doesn't like to see me without it'", and then, contradicting herself, "He doesn't know I wear it" (52). Later on, there is a scene in which Anna has forgotten to make herself up and converses with the protagonist as follows:

"God," she said, "what'm I going to do? I forgot my makeup, he'll kill me." I studied her: in the twilight her face was grey. "Maybe he won't notice," I said. "He'll notice, don't you worry. Not now maybe, it hasn't all rubbed off, but in the morning. He wants me to look like a young chick all the time, if I don't he gets mad." … "He watches me all the time, he waits for excuses. Then either he won't screw at all or he slams it in so hard it hurts. I guess it's awful of me to say that" … "But if you said any of this to him he'd just make funny cracks about it, he says I have a mind like a soap opera, he says I invent it. But I really don't you know" (156)

So deep-rooted is her fear of David that, after these words, Anna does not want to tell UP anything else, afraid that she "would talk to him about it behind her back" (157).
David's blatant disregard for his wife is continuous and evinces his sexist ideology; he makes several comments that leave no room for doubt in this respect, such as "'It turns me on when she bends over,' … 'She's got a neat ass. I'm really into the whole ass thing. Joe, don't you think she's got a neat ass?'" (114). As might be expected, this behavior evolves into aggression in one of the most shocking passages of the entire novel, in which David asks Anna to take off her clothes in order to record a video -Joe and he are on the trip to record a video with their camera- of her completely naked. She logically refuses but, even despite Joe's dissuasive attitude ("'I won't take her if she doesn't want to', Joe said" [172]), David perseveres and the tense situation explodes:

"It's token resistance," David said, "she wants to, she's an exhibitionist at heart. She likes her lush bod, don't you? Even if she is getting too fat" "Don't think I don't know what you're trying to do," Anna said, as though she'd guessed a riddle. "You're trying to humiliate me." "What's humiliating about your body, darling?" David said caressingly. "We all love it, you ashamed of it? That's pretty stingy of you, you should share the wealth; not that you don't." Anna was furious now, goaded, her voice rose. "Fuck off, you want bloody everything don't you, you can't use that stuff on me." "Why not," David said evenly "it works. Now just take it off like a good girl or I'll have to take it off for you."

When Joe tries to stop the quarrel, David yells "Shut up, she's my wife" (173) and goes on. The situation ends with Anna naked, being recorded on the sand while crying. It is the natural consequence of her lenient and submissive attitude, which has only strengthened David's chauvinist mentality. Denommé (6-7) has accurately explained the reason behind Anna's over-tolerance by quoting Atwood's best-known work of literary criticism, Survival: A Thematic Guide to Canadian Literature:

Anna herself, though clearly a victim of sexist ideology, willingly chooses to back her abuser when she must choose where to position herself. This follows Atwood's logic of the first victim position of denied victimhood, where victims are "afraid to recognize they are victims for fear of losing the privileges that they possess" and often direct their anger "against one's fellow-victims, particularly those who try to talk about their victimization" (Atwood, Survival 36).

This permissiveness on the part of Anna (in the sense of sexual freedom, as David exemplifies when trying to have intercourse with UP in chapter 18, towards the end of part two) is unmistakably obvious in a conversation between the two female characters, narrated this way by the protagonist: "She gives me an odd glance, as though I've violated a propriety, and I'm puzzled, she told me once you shouldn't define yourself by your job but by who you are. When they ask her what she does she talks about fluidity and Being rather than Doing; though if she doesn't like the person she just says 'I'm David's wife'" (70). If we accept the aforementioned idea that Margaret Atwood hybridizes many of her own thoughts about feminism with those of UP -the vehicle through which she, in a sense, indirectly theorizes in Surfacing-, then this passage is to be taken as a paradigmatic example.
In her doctoral thesis, Suman Makhaik (147) refers to this fact as follows:

The character of Anna stands for women who, against all odds, wish to continue their victim roles even if it demands their total effacement as individuals. Such characters comply with binary masculine hegemony and help in its firm establishment. Ecofeminists raise a voice against doing so, and Atwood establishes the same by defining the negatives.

Ecofeminism, a type of feminist theory linked to ecology, is arguably also in the background of Atwood's novel. We will succinctly comment on this aspect in the next point.

The protagonist
The title of the book -Surfacing- is not meaningless either: the unnamed protagonist, a castaway in the sea of patriarchal society and in her own sea of fractured memories and experiences, goes through a process of self-discovery when investigating the disappearance of her father. This inner journey is metaphorically represented by the natural environment where most of the story takes place and, in particular, by water. In chapter 23, once she has managed to remain alone on the island, UP steps into the water and floats with her clothes on, which she soon takes off: "When every part of me is wet I take off my clothes, peeling them away from my flesh like wallpaper. They sway beside me, inflated, the sleeves bladders of air" (230). Then, after leaving the water, the metaphor could not be more explicit: "When I am clean I come up out of the lake, leaving my false body floated on the surface, a cloth decoy; it jiggles in the waves I make, nudges gently against the dock" (231). In order to understand this twofold process of surfacing we must, however, clarify UP's position as a victim first. G. Sankar and R. Soundararajan (41) have indeed observed a strong sense of gender victimization in the character:

The main issue of the novel is that of searching for identity. The unnamed protagonist perceives herself as a victim; … as a member of patriarchal society, she is a victim of men: not only, in the protagonist's view, do they make use of women's bodies for their own satisfaction, but also have more rights. They are those who have the main voice in creating history and think they are responsible for "saving the world, men think they can do it with guns" (Surfacing 176).

This victim status of the protagonist, originating from society as an abstract whole, crystallizes in her interpersonal relations with Joe, David and even Anna; the macrocosm in which she feels trapped is concretized, in the novel, in a microcosm where gender discrimination takes shape. As happened with Anna, the reader may initially believe that UP is making the journey simply because she wants to enjoy herself with her friends and boyfriend. Nevertheless, it is soon discovered that she has no choice: she depends on David's car and she has not told any of them the true reason why she wanted to go to the island (investigating her father's disappearance). Even if she wishes to leave the place, she is subject to her friends' will:

I sit down on the bed. They might have asked me first, it's my house. Though maybe they're waiting till I come out, they'll ask then. If I say I don't want to they can't very well stay; but what reason can I give? I can't tell them about my father, betray him; anyway they might think I was making it up. There's my work, but they know I have it with me.
I could leave by myself with Evans but I'd only get as far as the village: it's David's car, I'd have to steal the keys, and also, I remind myself, I never learned to drive. (86)

During her sojourn in the place, she is invaded by the memories linked to the objects with which she interacts in the house. It is then that she recalls different points of her life when she was a victim of machismo and of men's impositions. When she tried to become an artist, her wings were clipped by her ex-husband for being a woman: "For a while I was going to be a real artist; he thought that was cute but misguided, he said I should study something I'd be able to use because there have never been any important woman artists" (63). At school, she was an object of ridicule for the boys: "When the boys chased and captured the girls after school and tied them up with their own skipping ropes, I was the one they would forget on purpose to untie. I spent many afternoons looped to fences and gates and convenient trees, waiting for a benevolent adult to pass and free me" (88). Even with her family, she was conditioned by the manly habits of her father: "There's more than one way to skin a cat, my father used to say; it bothered me, I didn't see why they would want to skin a cat even one way" (117). On top of all these memories, one weighs heavily: the feeling of having lost her child, presumably aborted against her will. We will refer to this point at the end of the analysis of the character. Considering the animal mistreatment (epitomized by the skinned cat) and the defilement of nature by Americans in the novel, critics have underlined the presence of an ecofeminist message in Atwood's work. In this line, Ambika Bhalla (1) has observed a parallelism between the victim status of the main character and nature, which becomes a revealing force in her process of self-awareness: "The protagonist realizes the gap between her natural self and her artificial construct only when she encounters nature. The ecofeminist impact is seen implicit in the novel by the protagonist's return to the natural world. Her association with nature raises her consciousness of victimization of women." Towards the end of the story, in chapter 24, the fusion reaches its climax; firstly, a desultory frog symbolizes the connection: "A frog is there, leopard frog with green spots and gold-rimmed eyes, ancestor. It includes me, it shines, nothing moves but its throat breathing" (233). Later in the chapter, there is a key passage, intentionally written by Atwood in the form of separate paragraphs, that highlights the symbiosis between UP and nature:

The animals have no need for speech, why talk when you are a word

I lean against a tree, I am a tree leaning

I break out again into the bright sun and crumple, head against the ground

I am not an animal or a tree, I am the thing in which the trees and animals move and grow, I am a place (236)

In contrast with this union with the natural environment of the island, there is a sense of rupture with Joe, who works as a masculine archetype for the protagonist: "Everything I value about him seems to be physical: the rest is either unknown, disagreeable or ridiculous" (68).
UP's indifference towards him also increases throughout the story; at the beginning of chapter 8, the tedium is evident:

In the early morning Joe wakes me; his hands at any rate are intelligent, they move over me delicately as a blind man's reading Braille, skilled, moulding me like a vase, they're learning me; … A phrase comes to me, a joke then but mournful now, someone in a parked car after a highschool dance who said With a paper bag over their head they're all the same. At the time I didn't understand what he meant, but since then I've pondered it. (83)

This feeling of sexual objectification will permeate the mood of the protagonist, who will progressively distance herself from Joe in emotional terms: "Joe stayed on the wall bench, arms wrapped around his knees in lawn-dwarf position, watching me. Every time I glanced up his eyes would be there, blue as ball point pens or Superman; even with my head turned away I could feel his x-ray vision prying under my skin, a slight prickling sensation as though he was tracing me" (106). Eventually, their relationship becomes noticeably deteriorated (to the point that Anna realizes it) and Joe's attempts to have sex are dismissed over threats of pregnancy:

"Don't," I said, he was lowering himself down on me, "I don't want you to." "What's wrong with you?" he said, angry; then he was pinning me, hands manacles, teeth against my lips, censoring me, he was shoving against me, his body insistent as one side of an argument. I slid my arm between us, against his throat, windpipe, and pried his head away. "I'll get pregnant," I said, "it's the right time." It was the truth, it stopped him: flesh making more flesh, miracle, that frightens all of them. (188)

Nonetheless, despite their dissimilarities and the lack of a deep, consistent affective bond, they end up having sexual relations by mutual agreement: he satisfies his carnal desires and she will be able to redeem herself from the loss of her former child: "Nobody must find out or they will do that to me again, strap me to the death machine, emptiness machine, legs in the metal framework, secret knives. This time I won't let them" (210). UP's resolve constitutes an act of both redemption ("I can feel my lost child surfacing within me, forgiving me" [209]) and of self-assertion as a woman, since she will give birth to her child (the "goldfish" [249] in her belly) all by herself in nature, without the interference of a society that has proven patriarchal to her: "This above all, to refuse to be a victim" (249). She has surfaced. All the same, she faces one last crux when, after having remained alone on the island for a period of time, Joe comes back in search of her: she must decide whether to return to civilization together with Joe, for whom her love is "useless as a third eye or a possibility" (250), or to stay in the place at the risk of isolation in a wild atmosphere. Although there is no certainty that she decides to go with Joe, because the ending is, once again, intelligently left open to ambivalence by Atwood, it seems quite likely that she does: "To trust is to let go. I tense forward, towards the demands and questions, though my feet do not move yet" (251). Should this be the case, we believe that there would be an emphasis on survival: she expects nothing from either Joe or men in general, but she is no less aware that, albeit not strictly necessary, her chances of enduring are significantly greater if she is reintegrated into society, in spite of everything it entails.
It is a purely rational choice.

Conclusion
In light of the current study, it seems indubitable that there is a strong sense of victimization in Margaret Atwood's Surfacing, concerning both imperialism and gender. In particular, female victimization is quite clearly reflected in the two women characters of the story: Anna and the unnamed protagonist. Yet, whereas the former does not seem to care about her position and ends up suffering abuse from David because of it, the latter -a victim of masculine dominance since childhood- goes through a process of inner discovery that results in her decision to give birth and live isolated in nature, thus constituting an ecofeminist vision of life. At the end, however, it is suggested that she will return with her boyfriend to become a part of the social gear again. This being the case, the story could be interpreted as a message towards equality: women need men to fully ensure their survival as a human species, but civilized society is too patriarchal a system for them to develop themselves in equal conditions to men. While most of the characters in the book are fitted into archetypes -the dominant cynical male, the abused submissive woman, the ruthless American imperialist- to construct the fiction, it is no less true that it is precisely through those roles that Atwood's work becomes relevant from an (eco)feminist point of view. To this approach we must add a whole series of relevant features, such as the richness of the narrator's reflections on her own personal life, her presentation of her family relationships, the unreliable nature of her narration, her process of self-discovery, her contribution to her own victimization or the symbolic language used. All these characteristics have turned Margaret Atwood's Surfacing into a contemporary classic, in which feminism undoubtedly occupies a very significant position. This article, in an attempt to let the text breathe and speak (almost) by itself, has very modestly tried to gather some particularly pertinent academic contributions to the topic and to observe how the novel reflects such notions through the use of language and through the unfolding of events on the part of the author. Rather than corseting the book with a perhaps excessive number of theoretical concepts, we have tried to focus on the novel and to use scholarship as a complement, instead of the other way around. If this paper may thus have any value, it might precisely be the strict consideration of Atwood's writing as such, sometimes forgotten at the expense of other labels which, although indubitably interesting, could push the text -the actual object of analysis- into the background.
Antimicrobial resistance of bacterial pathogens isolated from the infections of post maxillofacial surgery

Inappropriate antibiotic prescriptions have contributed to the global issue of antimicrobial resistance. This study aimed to assess the prevalence of bacterial pathogens and antimicrobial resistance isolated from maxillofacial infections (MIs). Two hundred and twenty-two patients with different MIs were included in this study. Swab samples were taken from the site of infection. Samples were cultured, and isolated bacteria were identified using various biochemical tests. Antimicrobial resistance patterns of the isolates were assessed by the disk diffusion method. The mean age of the patients was 50.8 years. The male-to-female ratio was 127/95 (P<0.05). Smoking and alcohol consumption were found in 60.36% and 37.38% of patients, respectively. Most patients had an infection duration of ≤1 week (P<0.05). Abscess lesions were the most predominant infection type (P<0.05). The prevalence of aerobic bacteria among abscess, pus localization, and deep facial infections was 59.33%, 64.28%, and 46.66%, respectively. The prevalence of anaerobic bacteria among abscess, pus localization, and deep facial infections was 40.66%, 23.80%, and 53.33%, respectively. Staphylococcus aureus (10.36%) and Prevotella buccalis (8.55%) had the highest prevalence amongst all examined samples. Isolated bacteria exhibited the highest resistance rates toward penicillin (65.76%), tetracycline (61.26%), gentamicin (58.10%), and ampicillin (57.65%). The lowest resistance rates were obtained for linezolid (25.67%), ceftriaxone (31.08%), and azithromycin (31.08%). Linezolid, ceftriaxone, and azithromycin had effective antimicrobial activities toward bacteria isolated from MIs. Therefore, cautious antibiotic prescription might decrease the prevalence of antimicrobial resistance in dental and maxillofacial infections.

INTRODUCTION
Maxillofacial infections (MIs) commonly involve the face and oral cavity [1]. Given the important anatomical position of the maxillofacial region, infections of this part may expand to other sites, including the respiratory system, brain, and mediastinum, and subsequent septicemia and even death may occur [2]. MIs are primarily self-limiting and can be treated quickly. However, there is a risk of death from airway obstruction and even infection spread [3,4]. Treatment of most MI cases requires an antimicrobial prescription. However, most aerobic and anaerobic bacteria responsible for MIs exhibit a high resistance rate toward common antimicrobials [11]. A high antibiotic resistance rate of aerobic and anaerobic bacteria responsible for MI cases has been reported toward aminoglycosides, tetracyclines, penicillins, cephalosporins, quinolones, and other important classes of antimicrobials [12][13][14]. Thus, assessing the antimicrobial resistance of MI etiologic agents can help identify the best antibiotics to treat and control the infection. Given the high importance of MIs as common and complicated bacterial infections with emerging antimicrobial resistance, the present research was conducted to assess the prevalence and antimicrobial resistance of aerobic and anaerobic bacteria isolated from different types of MIs.

Study population, inclusion, and exclusion criteria
A total of 300 patients were included in the study from October 2019 to October 2020.
Inclusion criteria: patients with bacterial infections of odontogenic origin, including dentoalveolar abscess, those with deep fascial space spreading infections, and others with infections causing localization of pus in the head and neck, were included in the study. Exclusion criteria: patients with viral and fungal infections, infected cysts, neoplastic lesions, and those without known infections were excluded from the study. Additionally, patients on antibiotic therapy (over the past 30 days) and those who had used antiseptic mouth rinses (over the past 24 h) were excluded from the survey. Pregnant women, patients with liver, gastrointestinal, and kidney disease, and those with positive Covid-19 and HIV tests were also excluded.

Samples
Aspiration sites were cleaned with alcohol (Merck, Germany). Saliva was continuously aspirated during the sampling. A separate sterile needle was used for pus aspiration from each patient. If aspiration was unsuccessful, a separate sterile swab was used for pus or exudate collection. Samples were transferred to the laboratory using thioglycollate broth (Merck, Germany) media. Geographical information of the targeted population was recorded accurately.

Bacterial isolation and identification
All samples were separately cultured on blood agar media (Merck, Germany) for aerobic incubation, chocolate agar (Merck, Germany) for microaerophilic incubation, and anaerobic blood agar (Merck, Germany) for anaerobic incubation. The blood agar media was prepared using blood agar base (Oxoid, UK) with 5% defibrinated sheep blood. The anaerobic blood agar media was prepared using fastidious anaerobe agar (Oxoid, UK) with 5% defibrinated sheep blood. All media were incubated at 37 °C. All isolates were subjected to Gram staining. Isolates grown on the blood agar and chocolate agar were Gram-stained after 24 h of growth in air and CO2, respectively. Isolates grown on the anaerobic blood agar were Gram-stained after 48 h. Gram-negative and Gram-positive bacteria were tested using various biochemical tests according to the Analytical Profile Index (API) system. Gram-negative bacillus bacteria were identified using the API 20E [15]. The catalase production test was used for Gram-positive coccoid bacteria. All catalase-negative bacteria were tested for the hemolytic reaction and growth in media containing 6.5% NaCl. Catalase-positive bacteria were tested for coagulase production, resistance to novobiocin, and growth on mannitol salt agar (MSA, Merck, Germany). Anaerobic bacteria were identified using the API 20A procedures [16]. Anaerobic culture was provided using the anaerobic jar (Oxoid, UK) and MART system (Lichtenvoorde, The Netherlands; 80% N2, 10% O2, and 10% CO2) [17][18][19].

Isolates displayed a high antibiotic resistance rate toward penicillin, tetracycline, gentamicin, and ampicillin antimicrobials. Unauthorized prescription of antimicrobials, self-treatment with antimicrobials, and indiscriminate use of disinfectants are likely explanations for the prevalence of antimicrobial resistance in the present survey. Linezolid, ceftriaxone, and azithromycin prescription may produce better therapeutic effects on maxillofacial infections. Similarly, a high resistance rate toward penicillin, tetracycline, gentamicin, and ampicillin antimicrobials was reported in the United States [51], Australia [52], and the United Kingdom [53]. Kong and Kim (2019) [54] stated that S. aureus, S. viridans, K. pneumoniae, and E.
faecalis bacteria displayed the highest resistance rates against ampicillin, ciprofloxacin, clindamycin, erythromycin, gentamicin, penicillin, and tetracycline antimicrobials [55]. Habib et al. (2019) [56] stated that Staphylococcus spp., Streptococcus spp., and Klebsiella spp. isolates of odontogenic infections had a high resistance toward amoxicillin and metronidazole (80-100%). A Chinese survey [57] described rising resistance rates toward ampicillin (100%) and penicillin (100%) antimicrobials. Possible reasons for the antibiotic resistance differences reported in various studies include differences in antibiotic availability, antibiotic prices, over-the-counter antibiotic sales, and antibiotic prescribing procedures. Precise prescriptions based on laboratory results can diminish the risk of antimicrobial resistance among maxillofacial pathogens. There is no definitive evidence about the exact origin of the isolated bacteria. However, the role of food as a vector for these bacteria and changes in the microflora of the oral cavity are more likely explanations than others [58,59]. We suggest that other authors assess the origin of oral infections and perform full genome sequencing of bacterial isolates to assess their genetic similarity.

CONCLUSIONS
The main achievement of this report was the assessment of the antimicrobial resistance of bacteria isolated from infections after maxillofacial surgery in order to identify the best treatment option and the main distribution of bacterial pathogens in these areas. In conclusion, S. aureus, S. pyogenes, S. viridans, K. pneumoniae, P. buccalis, Peptostreptococcus spp., and P. gingivalis were the predominant causes of maxillofacial infections in Iraq. According to the disk diffusion findings, prescription of linezolid, ceftriaxone, and azithromycin may produce better results in treating maxillofacial infections. Establishing preventive rules for prescribing antibiotics and accurately identifying the main causes of infection in these areas can prevent the spread of antibiotic-resistant strains in post maxillofacial surgery infections. However, several multifactorial surveys should be performed to address more aspects of antimicrobial-resistant bacteria in MIs.
A Flexible, Real-Time Algorithm for Simulating Correlated Random Fields and Its Properties

Corresponding Author: Michael A. Kouritzin, Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton (Alberta), Canada T6G 2G1. Email: michaelk@ualberta.ca

Abstract: Contemporary real-time problems like CAPTCHA generation and optical character recognition can be solved effectively using correlated random fields. These random fields should be produced on a graph in order that problems of any dimension and shape can be handled. However, traditional solutions are often too slow, inaccurate or both. Herein, the Quick Simulation Random Field algorithm to produce correlated random fields on general undirected graphs is introduced. It differs from prior algorithms by completing the graph and setting the unspecified covariances to zero, which facilitates analytic study. The Quick Simulation Random Field graph distribution is derived within, and the following questions are studied: (1) For which marginal pmfs and covariances will this algorithm work? (2) When does the marginal property hold, where the sub-graph distribution of an algorithm-simulated field matches the distribution of the algorithm-simulated field on the subgraph? (3) When does the permutation property hold, where the vertex simulation order does not affect the joint distribution?

Introduction
Correlated random fields are used in science and technology to model spatially distributed random objects. The applications of random fields across the sciences are broad and include sequential Monte Carlo, computer vision, cryptography, astrophysics, rainfall, hydrology, analysis of gene expression time series, medical image processing and inverse optics and image synthesis; see, for example, Kouritzin (2017), Schlather et al. (2015), Chellappa and Jain (1993), Diaconis (2009), Vio et al. (2002), Leblois and Creutin (2013), Li et al. (2008), Li et al. (1995), Li (1995) and Winkler (2003). Furthermore, mathematicians often want to couple a collection of random variables with given distributions together on a single probability space while matching some constraint like covariances. In either situation, the complete joint distribution of the field may be unknown or even irrelevant, as enough meaningful information is captured by the marginal distributions and the pairwise covariances between random variables. In the Gaussian case, many simple efficient methods, like covariance matrix decomposition, moving averages, Fast Fourier Transform (FFT), turning bands and local average subdivision, exist (see Shinozuka and Deodatis (1996), Kleiber (2016) or Blanchard et al. (2016) for example). However, these methods are easiest to use over a regular grid and many random fields are fundamentally non-Gaussian. In the general case, probability density functions are usually approximated by probability mass functions (pmfs) if necessary and some type of Markov chain Monte Carlo method is used when exact field distributions are desired. However, these methods require a very large number of iterations to converge (for example, it took 2000 iterations in the simple Hamlet example in Diaconis (2009)) and are therefore generally not suitable for real-time computation. On the other hand, there are many approximation methods, often based upon the FFT or spectral decomposition and the Karhunen-Loeve expansion, to approximate the covariance structure of fields (see e.g., Vio et al. (2002)).
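To make the Gaussian baseline concrete, the following sketch samples a zero-mean Gaussian field by covariance matrix decomposition: if C = L L^T is a Cholesky factorization of the covariance matrix and Z is a vector of independent standard normals, then X = L Z has covariance C. The exponential covariance, the grid and the jitter constant are illustrative choices, not taken from this paper.

```python
import numpy as np

def gaussian_field(points, cov_fn, rng=None):
    """Sample a zero-mean Gaussian field at `points` by Cholesky
    decomposition of the covariance matrix built from `cov_fn`."""
    rng = rng or np.random.default_rng()
    n = len(points)
    C = np.array([[cov_fn(p, q) for q in points] for p in points])
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # jitter for stability
    return L @ rng.standard_normal(n)

# Exponential covariance on a 1-D grid with correlation length 0.2.
pts = np.linspace(0.0, 1.0, 50)
field = gaussian_field(pts, lambda p, q: np.exp(-abs(p - q) / 0.2))
print(field[:5])
```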
To meet the diversity of problems in a variety of dimensions, Kouritzin et al. (2014) considered random fields on a general undirected graph structure and proposed an algorithm for producing a new class of discrete correlated random fields on such graphs by either one-pass simulation or Gibbs-like resampling. The approach has been applied to Optical Character Recognition (OCR) in Kouritzin et al. (2014) and to the generation of both black-and-white (Kouritzin et al. (2013)) and gray-level (Newton and Kouritzin (2011)) CAPTCHAs (Fig. 1 shows a new example of such a gray-level CAPTCHA). The class of random fields created by their algorithm incorporates given probability mass functions (pmfs) at the vertices of a graph and specified pairwise covariances corresponding to edges existing in that graph. (This translates into a pmf for the gray levels of each pixel and covariances between nearby pixels in this CAPTCHA example.) The joint distribution between pairs of vertices connected by a specified covariance edge is known in terms of two sets of auxiliary parameter pmf collections that can be selected for generality. However, the joint subgraph distribution on an incomplete subgraph is unknown for the algorithm in Kouritzin et al. (2013). The starting point for the simulation consists of a fixed portion as well as a design portion. The fixed portion is an undirected graph together with the desired marginal vertex pmfs (the π's) and the collection of nonzero covariances (the β's) for the graph edges. (This setting is general enough to handle simulation in any dimension, for example.) The design portion consists of two sets of auxiliary (vertex) pmfs (the π̂'s and the π̃'s) that can be used in place of the π's in portions of the algorithm to do things like improve efficiency or destroy independence. (Actually, there is a wide assortment of reasonable choices for the π̂'s and the π̃'s discussed in Kouritzin et al. (2014).) Simulating the graph then amounts to directing the graph in an acyclic manner, fixing a topological sort of the vertices and using Proposition 1 of Kouritzin et al. (2014), requoted as Proposition 1 below, recursively (see Kouritzin et al. (2014) for details). Our modified algorithm, introduced herein, completes the graph by adding edges of zero covariance wherever necessary before simulation. This completion does not complicate nor slow the simulation, yet allows us to derive the complete field distribution in closed form for all possible auxiliary pmf parameters. We call this completed-graph simulation algorithm and the resulting random field the quick simulation algorithm and quick simulation field herein. This paper focuses on the constraints and properties of the random field generated by this quick simulation algorithm. Naturally, the algorithm cannot work for all possible parameters and might not work for others. We start by giving the joint (field) distribution of the random field generated by this algorithm (when it works). From there, we study regularity, meaning when the algorithm does provide a legitimate distribution over the whole space of vertices. This is equivalent to ensuring that the recursive formula (2.5) of Proposition 1 produces a conditional pmf in every iteration. It was observed in our CAPTCHA (Kouritzin et al. (2013)) and OCR (Kouritzin et al. (2014)) applications that the occasional illegitimate conditional pmf value outside [0, 1] can be replaced with a value inside without noticeable effect on the simulation. However, it is still important to know when the only possible source of irregularity is numeric and not algorithmic.
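The completion step just described is mechanical: add a zero-covariance edge wherever one is missing, direct the complete graph acyclically, and fix a topological sort. The sketch below does this for a toy vertex set; the data structures (dictionaries keyed by vertex pairs) are illustrative conventions, not the paper's notation.

```python
from itertools import combinations

def complete_with_zero_covariances(vertices, beta):
    """Covariance map on the complete graph over `vertices`: keep every
    specified beta[(u, v)] and set each missing edge's covariance to 0."""
    full = {}
    for u, v in combinations(vertices, 2):
        full[(u, v)] = beta.get((u, v), beta.get((v, u), 0.0))
    return full

def direct_acyclically(vertices):
    """Any fixed ordering of the vertices directs the complete graph
    acyclically (arc from earlier to later) and is itself a topological
    sort v_1, ..., v_N."""
    order = list(vertices)
    arcs = [(order[i], order[j])
            for i in range(len(order)) for j in range(i + 1, len(order))]
    return order, arcs

vertices = ["v1", "v2", "v3"]
beta = {("v1", "v2"): 0.1}  # only one covariance specified
print(complete_with_zero_covariances(vertices, beta))
print(direct_acyclically(vertices))
```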
Next, we establish the marginality property, which ensures that the distribution of a random field on a subgraph, projected from the random field constructed on the whole graph, is the same as that of a random field constructed directly on this subgraph. Finally, we investigate the permutation property, which ensures that the random fields simulated from all topological sorts corresponding to the same complete undirected graph are the same in the sense of probability distribution. We establish necessary and sufficient conditions for this permutation property.
Example 1
Suppose we have the following complete undirected graph G with vertices v1, v2, v3 and given probability mass functions. Then we assign the joint probabilities as follows: Moreover, if we simulate any two vertices vi, vj, then we get: so marginality is also maintained. In this note, we show how to compute these probabilities so that the pmfs and covariances are preserved in general, as well as establish the conditions for the marginality and permutation properties above to hold. The remainder of this note is laid out as follows: Section 2 contains our notation and background. Next, we give the closed form of the correlated random field, discuss regularity and establish the marginality property in Section 3. The permutation property is studied in Section 4.
Probabilistic Setup
Let V be a finite set of vertices, V̄ denote this set of vertices with an ordering, and X_v be a finite state space for each v ∈ V. For any nonempty subsequence B ⊂ V̄, we abbreviate x_{v_i} by x_i to ease notation. A random field Π is a strictly positive probability measure on X. The random vector X = (X_v)_{v∈V} on the probability space (X, 2^X, Π) is also called a random field. For B ⊂ V, the random subfield on B is the projection map onto the coordinates in B. Suppose ∂ = {∂_v : v ∈ V} is a collection of subsets of V. A random field Π is Markov with respect to ∂ if for all x ∈ X:
Problem Statement
Let E be a set of edges, where each (u, v) ∈ E with u, v ∈ V has no orientation but indicates that u and v are neighbors of each other. Then, G = (V, E) is an undirected graph. If for every pair of vertices u, v ∈ V there is a path of edges in E connecting u and v, then G is connected. If every vertex in G has a neighbor with at least two neighbors, then G is sufficiently connected. If for every pair of non-neighbor vertices z, u there is a neighbor of z and a neighbor of u that are distinct, then G is disjoint pair rich. We illustrate the new concepts of sufficiently connected and disjoint pair rich.
Example 2
Consider the graphs in Fig. 2. Both graphs in Fig. 2 are connected. However, neither is sufficiently connected, since in both cases none of the neighbors of w have two neighbors.
Example 3
The graphs in Fig. 3 illustrate the definition of "disjoint pair rich". In (B), non-neighbors z and u do not have distinct neighbors. Clearly in (C), every vertex has a neighbor with two neighbors and every pair of non-neighbors has distinct neighbors. Yet, two vertices only have one neighbor.
Example 4
If every vertex in a graph G has two neighbors, then it is disjoint pair rich. It is also sufficiently connected.
We are interested in creating a random field over V, where the random variable X_v at a vertex v ∈ V has a prescribed pmf π_v and the random vectors (X_u, X_v) have a prescribed non-zero covariance β_uv (= β_vu) for each (u, v) ∈ E. Naturally, this problem could be ill-posed in the sense that there are mathematically incompatible collections of pmfs and covariances.
Also, there often are multiple solutions, with some being more efficient to simulate and others having nice properties like the marginal and permutation properties defined above.
Directed Graph
The random variables in the field are simulated in sequence. The first step towards sequencing is directing the graph. Let A be a set of ordered vertex pairs, called arcs (indicating that the first vertex in the pair is simulated prior to the later one). Then, D = (V, A) is a directed graph.
Graph Completion
D̄ denotes the completion of D, where there is an arc between every pair of vertices and the direction of an arc that is also in A matches that of A. Kouritzin et al. (2014) gives one possible algorithm to construct an acyclic complete directed graph and a topological sort on V, i.e. a simulation order v_1, …, v_N, where N = |V| is the number of vertices. Our new Quick Simulation Algorithm works on a completed acyclic directed graph. Zero covariances are placed along any added arc, i.e. β_{v,u} = cov(X_u, X_v) = 0 when (v, u) is an arc of the completion but not of the original graph.
Conditional Probability Update
The Quick Simulation Random Fields match a collection of pmfs {π_v, v ∈ V} and a collection of covariances {β_{v,u}}. However, there are also two auxiliary pmf parameter sets that provide flexibility in the choice of field distribution as well as simulation. (See Kouritzin et al. (2014) for examples of choices for these auxiliary pmfs.) They also appear in the conditional probability update through the functions g̃ and ĥ, which may look mysterious here. However, looking ahead to (3.3), we see they affect the field distribution in our new algorithm. g̃ normalizes the sample x_v by subtracting the mean and dividing by the variance, but it allows this normalization to be done with respect to any convenient non-trivial pmf π̃ that could be different from π. ĥ allows us to consider all the parents, except the one we are currently setting the covariance for, as if they came from a different distribution π̂. Intuitively, this makes sense: when we are focused on the covariance for one parent, the other parents could just as easily have come from π or π̂. The following proposition establishes that this flexibility is allowed. The main proposition in Kouritzin et al. (2014) is:
Proposition 1 Suppose {π_v, π̂_v, π̃_v : v ∈ V} are pmfs and {β_{v,u} : (u, v) ∈ A or (v, u) ∈ A} are numbers such that the right hand side of (2.5) is a conditional pmf when computed according to (2.4). Form the conditional probabilities recursively using (2.5), starting with π_{v_1}. Then, the random field X, defined by the multiplication rule and (2.5), has marginal probabilities {π_v} and covariances cov(X_u, X_v) = β_{v,u}.
Remark 1 The term non-trivial pmfs can be interpreted as: each π̃_v should have non-zero variance and each π̂_v should be strictly positive. These auxiliary pmfs affect the field distribution but not its marginal vertex pmfs nor its vertex-vertex covariances.
Remark 2 In Kouritzin et al. (2014), there was the stronger constraint that the right hand side of (2.5) be in [0, 1]. However, the right hand side of (2.5) sums to one over x_v. Hence, if the right hand side of (2.5) is non-negative, then it is in [0, 1] and (2.5) defines a legitimate conditional pmf.
Remark 3 Notice that (2.5) gives the same value whether we consider the given graph D or its completion D̄, where the added arcs have zero covariance.
Distribution and Marginality of Quick Simulation Fields
Proposition 1 can be extended to give the full field distribution when the graph is complete.
Proposition 2 Assume {π_v, π̂_v, π̃_v : v ∈ V} are pmfs and {β_{v_i,v_j}} are numbers such that the right hand side of (3.1) is a conditional pmf for each x_{v_i} ∈ X_{v_i} and n = 1, …, N. Then, the random field X defined by (3.1) has the closed-form distribution (3.3).
Remark 4 The one-pass algorithm (as opposed to the Gibbs-type algorithm used in Kouritzin et al. (2013)) follows from (3.1). We just use the conditional probability to simulate the new vertex given the prior ones in the topological sort. However, the big efficiency comes from the fact that the terms in (3.1) are only non-zero (and hence need to be computed) in the case where v_j is a parent of v_i in the original (non-completed) graph.
Remark 5 Since the terms with zero covariance vanish from (3.1), the completion adds no computational cost.
Remark 6 Regularity means that the right hand side of (3.1) is a conditional pmf. As noted in Remark 2, the right hand side of (3.1) need only be non-negative, which is equivalent to (3.5), and can be checked during the iteration. Notice: (1) there is no constraint from vertices outside pa(v_i), where pa(v_i) denotes the parents in the original (not-completed) graph. If pa(v_i) = {v_{i−1}} is a singleton, then (3.5) further simplifies by (2.2) to (3.6). One can check (3.5) or (3.6) iteratively to ensure the Quick Simulation algorithm is producing a field with the desired pmfs and covariances. Now, we consider how equality in (3.6) is hit: we hit this bound when we have a singleton parent and one value of X_{i−1} precludes another value of X_i.
Proof of Proposition 2 a) This follows immediately from Proposition 1 and the fact that the parents of v_i are all of v_1, …, v_{i−1} when the graph is complete. b) Note (3.3) holds for n = 1. Now, we assume it is true for n−1 with some n ∈ {2, …, N} and show it for n, using the equivalent form of (3.1).
This is just the distribution we would have arrived at if we had simulated {v_1, …, v_{l−1}, v_{l+1}, …, v_N} in order. Using (3.9) repeatedly, we have proved the following marginality lemma.
Lemma 1 Suppose the conditions of Proposition 2 hold and B ⊂ V. Then the subfield on B has the quick simulation distribution determined by B.
This example illustrates several things about Quick Simulation Fields: order matters in general, there are dependent uncorrelated fields, and independence generally does not happen when π̂_v ≠ π_v. Indeed, we explain below that there is usually dependence even when π̂_v = π_v.
Example 6 In the important special case where π̂_v = π_v for all v, the closed form becomes:
Remark 7 The following proof reveals that the equivalence of (1) and (2) holds even if the original graph G is not connected.
(2) implies (1): Multiplying (4.1) by g̃(u, x_u) yields a relation holding for all x_u ∈ X_u, x_v ∈ X_v, x_w ∈ X_w and distinct u, v, w ∈ M_N. Take a ∈ G_N and let b = (l l+1) ∘ a for 1 ≤ l ≤ N−1. Noting that the transpositions (l l+1) are generators, we just need to show (4.6) for (arbitrary) a and this b. However, the left hand terms in (4.6) with i < l; j > l+1; j ≤ l−1, l+2 ≤ i; and j = l, i = l+1 directly cancel with the corresponding right hand terms for this b. Considering the (remaining) terms on the left side of (4.6) with j ≤ l−1 and i = l, l+1 for this b and using (4.9) with u = a(j), v = a(l), w = a(l+1), we get (4.10) upon manipulation, so they are equal to the corresponding terms on the right of (4.6). (Notice the switch of π and π̂ in the final factors in (4.10).) Finally, the terms on the left of (4.6) with j = l, i ≥ l+2 and j = l+1, i ≥ l+2 for b = (l l+1) ∘ a are just the terms on the right of (4.6) with j = l+1, i ≥ l+2 and j = l, i ≥ l+2, i.e. in reverse order. Hence, by breaking the summation up, we have shown (4.6) holds for arbitrary a and b = (l l+1) ∘ a, which implies (4.6) holds for arbitrary a, b, and sufficiency follows.
(3) implies (2): Letting u, v, w ∈ V be distinct and using (4.2, 4.3), we have that the resulting expression is constant, which in turn implies:
where c_u, c_v are constants.
Since G is sufficiently connected (by non-zero covariances), every vertex can be included in some connected triple as above, and we must have that:
Ethics
This article is original and contains unpublished material. The corresponding author confirms that all of the other authors have read and approved the manuscript and there are no ethical issues involved.
2017-08-25T23:37:24.764Z
2017-09-29T00:00:00.000
{ "year": 2017, "sha1": "08424a8457c9ec289e451067c661fb914a782abe", "oa_license": "CCBY", "oa_url": "http://thescipub.com/pdf/10.3844/jmssp.2017.197.208", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b77ad401b57accf1357ead29e59e744ce4c1e205", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
235916928
pes2o/s2orc
v3-fos-license
Identification of hub genes in rheumatoid arthritis through an integrated bioinformatics approach
Background: Rheumatoid arthritis (RA) is a common chronic autoimmune disease characterized by inflammation of the synovial membrane. However, the etiology and underlying molecular events of RA are unclear. Here, we applied bioinformatics analysis to identify the key genes involved in RA.
Methods: GSE77298 was downloaded from the Gene Expression Omnibus (GEO) database. We used the R software to screen the differentially expressed genes (DEGs). Gene Ontology enrichment analysis and Kyoto Encyclopedia of Genes and Genomes pathway analysis were performed using the DAVID online tool. The STRING database was used to analyze the interactions of the proteins encoded by the DEGs. The PPI network was divided into subnetworks using the MCODE algorithm and was analyzed using Cytoscape. Gene set enrichment analysis (GSEA) was performed to identify relevant biological functions. qRT-PCR analysis was also performed to verify the expression of the identified hub DEGs.
Results: A total of 4062 differentially expressed genes were selected, including 1847 upregulated genes and 2215 downregulated genes. In the biological process category, DEGs were mainly concentrated in muscle filament sliding, muscle contraction, intracellular signal transduction, cardiac muscle contraction, signal transduction, and skeletal muscle tissue development. In the cellular component category, DEGs were mainly concentrated in the cytosol, Z disk, membrane, extracellular exosome, mitochondrion, and M band. In the molecular function category, DEGs were mainly concentrated in protein binding, structural constituent of muscle, actin binding, and actin filament binding. KEGG pathway analysis showed that the DEGs mainly map to pathways involving the lysosome, the Wnt/β-catenin signaling pathway, and the NF-κB signaling pathway. CXCR3, GNB4, and CXCL16 were identified as core genes involved in the progression of RA. By qRT-PCR analysis, we found that CXCR3, GNB4, and CXCL16 were significantly dysregulated in RA tissue compared to healthy controls.
Conclusion: The DEGs and hub genes identified in the present study help us understand the molecular mechanisms underlying the progression of RA and provide candidate targets for the diagnosis and treatment of RA.
Background
Rheumatoid arthritis (RA) is an autoimmune disease characterized by chronic inflammation, hyperproliferation of synovial tissue, and progressive destruction of multiple joints [1,2]. RA mainly targets the synovium of diarthrodial joints [3,4]. According to statistics, the prevalence of RA in China is about 0.5-1%, with 0.5-5 new cases per 1000 people per year. RA has become one of the most common causes of disability in patients [5]. In RA, females are affected three times more often than males [6]. RA manifests as osteoporosis around the joints, stenosis of the joint space of the knee joint, and bone cystic degeneration [7,8]. The pathogenesis of RA mainly centers on autoantibodies and immune complexes [9]. RA involves T cell-mediated antigen-specific responses, a T cell-independent cytokine network, and the aggressive tumor-like behavior of the rheumatoid synovium [10]. The initial characteristics of the membrane are abnormal growth, infiltration of inflammatory cells (macrophages, T and B lymphocytes, plasma cells, and neutrophils), and the formation of pannus [11]. Significant thickening of the synovium is the most typical pathological change of RA [12].
Studies have shown that synovial inflammation plays an important role in the pathogenesis of RA, but the exact pathogenesis of RA is unclear. Chip technology has improved the ability to study disease pathogenesis and is an important technology for functional genomics research [13]. In recent years, with the commercialization of chips based on high-throughput platforms, this technology has gradually been used to explore disease epigenetics and to screen effective biomarkers for disease diagnosis and prognosis [14]. In the expression monitoring of RA, chips are mainly used to detect the gene expression profiles of peripheral blood cells, miRNA expression, and circRNA expression. With the development of next-generation sequencing technologies and the improvement of biological databases, using bioinformatics methods to explore the relevant mechanisms is significant.
Microarray studies and datasets from Gene Expression Omnibus (GEO)
The microarray dataset GSE77298 was downloaded from the GEO database (https://www.ncbi.nlm.nih.gov/geo/) using "rheumatoid arthritis" as the keyword. The dataset GSE77298, from the GPL570 platform, contains 7 RA samples (end-stage RA synovial biopsies) and 16 healthy controls (synovial biopsies from individuals without a joint disease). The expression microarray platform was the Affymetrix Human Genome U133 Plus 2.0 Array.
Differential gene expression analysis
The statistical programming language R (version 4.0.2) was used for log2 transformation of the data, and the datasets were merged [15]. The "SVA" package was used for batch correction. Differentially expressed genes (DEGs) were defined as log |FC| > 0.5 with corrected p < 0.05. Log FC > 0 means that the DEG is upregulated in RA.
Functional annotation and pathway analysis of DEGs
DEGs were input into the DAVID 6.8 online tool (https://david.ncifcrf.gov/) to perform Gene Ontology (GO) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment [16,17]. P < 0.05 and gene counts > 3 were considered statistically significant.
Protein-protein interaction (PPI) network and key gene acquisition
The Search Tool for the Retrieval of Interacting Genes (STRING, version 11.0, https://string-db.org/) database was used to analyze the PPIs of the proteins encoded by the DEGs (medium confidence = 0.40) [18]. Cytoscape software (version 3.8.0) was used to visualize the PPI network. We used the cytoHubba plug-in to analyze the gene nodes with topological analysis methods, filtering by degree and stress and obtaining the key genes from the intersection of the first 15 genes ranked by the degree and stress methods, respectively [19].
Gene set enrichment analysis (GSEA)
Further GSEA was carried out for all detected genes using GSEA software (version 4.0.0), providing another option to screen out significantly different biological functions [20]. The gene set permutation was performed 1000 times per analysis. Gene sets were considered significantly enriched at a nominal P value < 5% and a false discovery rate (FDR) < 25%.
Statistical analysis
Each experiment was performed at least three times. All data are expressed as mean ± standard deviation (SD). Statistical comparisons between two groups were made with Student's t test, and differences among more than two groups were detected using ANOVA. A p value of less than 0.05 was considered statistically significant.
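The screening step above lends itself to a compact illustration. The sketch below is a hedged Python analogue, not the authors' R pipeline: it applies ordinary per-gene Welch t-tests with Benjamini-Hochberg correction in place of the R/SVA workflow, using the same thresholds (|log2 FC| > 0.5, adjusted p < 0.05); the matrix and mask names are illustrative assumptions.

```python
# Illustrative DEG screen on a log2-transformed expression matrix
# (genes in rows, samples in columns); an approximation of, not a
# reproduction of, the authors' R-based workflow.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def screen_degs(expr, is_ra, lfc_cut=0.5, p_cut=0.05):
    """expr: (n_genes x n_samples) array; is_ra: boolean mask marking the
    RA columns, with the remaining columns treated as healthy controls."""
    ra, hc = expr[:, is_ra], expr[:, ~is_ra]
    log_fc = ra.mean(axis=1) - hc.mean(axis=1)           # log2 fold change
    _, pvals = stats.ttest_ind(ra, hc, axis=1, equal_var=False)
    _, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
    up = (log_fc > lfc_cut) & (p_adj < p_cut)            # upregulated in RA
    down = (log_fc < -lfc_cut) & (p_adj < p_cut)         # downregulated in RA
    return up, down, log_fc, p_adj
```

On the GSE77298 design this would be called with a mask holding 7 True (RA) and 16 False (control) entries; genes passing neither cutoff are simply not reported.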
Hierarchical clustering for sample selection
All samples were analyzed by hierarchical clustering; no samples showed high heterogeneity, so none were eliminated. Finally, 23 samples were included for analysis.
Identification of DEGs
The blue bars represent the data before normalization, and the red bars represent the normalized data. After normalization, Fig. 1 shows that the log2 ratios across the samples are almost identical. A total of 4062 DEGs were screened out, including 1847 highly expressed genes and 2215 lowly expressed genes. R was used to visualize the results and to draw a volcano map (Fig. 2) and a heat map (Fig. 3).
GO and KEGG enrichment analysis of DEGs
In the GO analysis, DEGs were divided into three categories: biological process, cellular component, and molecular function. In the biological process category, DEGs were mainly concentrated in muscle filament sliding, muscle contraction, intracellular signal transduction, cardiac muscle contraction, signal transduction, skeletal muscle tissue development, sarcomere organization, antigen processing and presentation of peptide antigen via MHC class I, the tricarboxylic acid cycle, and regulation of the release of sequestered calcium ions into the cytosol by the sarcoplasmic reticulum (Fig. 4). In the cellular component category, DEGs were mainly concentrated in the cytosol, Z disk, membrane, extracellular exosome, mitochondrion, M band, cytoplasm, T-tubule, myofibril, sarcomere and so on (Fig. 4). In the molecular function category, DEGs were mainly concentrated in protein binding, structural constituent of muscle, actin binding, actin filament binding, signal transducer activity, calmodulin binding, sodium channel regulator activity, SH3 domain binding, metal ion transmembrane transporter activity, and ATP binding (Fig. 4).
PPI network analysis of DEGs
A protein-protein interaction network with a total of 198 nodes and 356 relationship pairs was obtained, and genes such as RNF4, CDC20, UBE2D4, and UBE2Q2 were recognized as key nodes in the protein-protein interactions (Fig. 5). A total of 20 core genes with a degree ≥ 20 selected by MCODE were obtained from the protein-protein network, and they were considered candidate core genes. In MCODE module 1, the key genes were as follows: RNF4, UBE2D4, UBE2Q2, CUL5, NEDD4L, FBXO32, LONRF, TRIM32, UBE2Q1, KLHL13, CDC20, ATG7, KLHL41, and TRIM9 (Fig. 6).
GSEA
The analysis indicated that the most significantly enriched gene sets included systemic lupus erythematosus, selenoamino acid metabolism, the toll-like receptor signaling pathway, ubiquitin-mediated proteolysis, valine, leucine and isoleucine degradation, and Vibrio cholerae infection (Fig. 7).
PCR
The quantitative PCR (qPCR) results indicated that CXCR3 expression was significantly upregulated in RA synovial tissue compared with healthy controls (Fig. 8). Moreover, GNB4 was significantly upregulated in RA synovial tissue compared with healthy controls (Fig. 8). However, CXCL16 was significantly downregulated in RA synovial tissue compared with healthy controls (Fig. 8).
Discussion
In the present study, we analyzed the GSE77298 microarray dataset to screen DEGs between end-stage RA synovial biopsies and 16 synovial biopsies from individuals without a joint disease. GO and KEGG enrichment analyses were performed to explore interactions among the DEGs. CXCR3 was identified as a core gene involved in the progression of RA. Bakheet et al.
[23] found that the CXCR3 antagonist AMG487 suppresses RA pathogenesis and progression by shifting the Th17/Treg cell balance. Therefore, CXCR3 antagonists could be used as a novel strategy for the treatment of inflammatory and arthritic conditions. Another core gene that should be noted is GNB4. A previous study found that GNB4 can be a candidate diagnostic biomarker in inflammatory bowel diseases [24]. As for CXCL16, we also found that CXCL16 can be a candidate core gene of RA according to the MCODE analysis. Li et al. [25] revealed that CXCL16 is a modulator of RA disease progression. They performed an in vitro study and found that CXCL16 upregulates RANKL expression in RA synovial fibroblasts through the JAK2/STAT3 and p38/MAPK signaling pathways. The main innate immune-related signaling pathways include the NF-κB signaling pathway and the TRIM32 signaling pathway. Wang et al. [26] found that TRIM3 expression was significantly downregulated in RA patients compared with healthy controls. Overexpression of TRIM3 promoted p53 and p21 expression, while it inhibited cyclin D1 and PCNA expression. More importantly, knockdown of TRIM3 expression could partially reverse the inhibitory effects of SB203580 (a p38 inhibitor) on cell proliferation. Rheumatoid arthritis is a joint disease of an autoimmune nature with irreversible cartilage destruction and bone erosion. The DEGs were mainly enriched in muscle filament sliding, muscle contraction, intracellular signal transduction, and cardiac muscle contraction. KEGG pathway analysis shows that the DEGs mainly map to pathways involving the lysosome, the Wnt/β-catenin signaling pathway, metabolic pathways, regulation of the actin cytoskeleton, focal adhesion, the chemokine signaling pathway, adrenergic signaling in cardiomyocytes, biosynthesis of antibiotics, the NF-κB signaling pathway, and proteoglycans in cancer. Studies have shown that the development of RA may depend on common changes in the expression of specific key genes. Xiong et al. [27] revealed that upregulated genes in RA were significantly enriched in protein binding, the cell cytosol, organization of the extracellular matrix (ECM), regulation of RNA transcription, and cell adhesion. Shchetynsky et al. [28] revealed that ERBB2, TP53, and THOP1 are new candidate genes in the pathogenesis of RA. KEGG pathway analysis revealed that the NF-κB signaling pathway is involved in the progression of RA. Xing et al. [29] revealed that miR-496/MMP10 is involved in the proliferation of IL-1β-induced fibroblast-like synoviocytes via the NF-κB signaling pathway. The NF-κB signaling pathway may also have an important role in RA progression because the NF-κB molecule has a key role in immune response regulation. In this study, we also found that the DEGs were mainly enriched in the NF-κB signaling pathway. Lysosomes are membrane-bound organelles with roles in processes involved in degrading and recycling cellular waste. In the KEGG pathway enrichment analysis, we found that the DEGs were also enriched in the lysosome pathway. Lysosomes can be a therapeutic target for RA [30]. There were some limitations in our study. First, all patients had a pathological diagnosis of RA; however, the correlation between DEGs and disease severity was not examined in depth. Second, although we examined the expression of the DEGs between RA and healthy controls, the potential pathways involved in RA were not examined. Future studies should be performed to identify the detailed pathways that participate in the progression of RA.
Conclusions In conclusion, DEGs and hub genes identified in the present study help us understand the molecular mechanisms underlying the progression of RA, and provide candidate targets for diagnosis and treatment of RA.
2021-07-16T13:45:54.443Z
2021-07-16T00:00:00.000
{ "year": 2021, "sha1": "47070788038fbe7e87d1e341bdd0df814c34a875", "oa_license": "CCBY", "oa_url": "https://josr-online.biomedcentral.com/track/pdf/10.1186/s13018-021-02583-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "47070788038fbe7e87d1e341bdd0df814c34a875", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
28461876
pes2o/s2orc
v3-fos-license
Dental education in the rural community: a Nigerian experience
At the University of Ibadan, Ibadan, Nigeria, dental students are prepared not only to provide skilled care to individual patients, but also to assume responsibility for the community as a whole. In establishing the rural posting program for dental students, it was planned that all fifth year dental students would undertake a rural posting at Igboora, approximately 80 km from Ibadan, the capital of Oyo State. During this six-week immersion experience, students learn about living in a rural area where they provide community oral health services. This report provides recommendations for initiating, sustaining and expanding rural dental education programs.
Introduction
Nigeria is a large country with a population of well over 150 million. It is diverse in its geography, ecology, culture, language and patterns of health and disease. Oral health services are sparse and concentrated in the cities. The Federal Government of Nigeria adopted primary health care as a system of health care to improve the health and wellbeing of Nigerians, particularly those in rural areas. However, oral health care has not been properly integrated into it. Despite the acknowledgement and adoption of the Alma-Ata declaration by the majority of the countries of the world as a strategy for achieving Health For All, medical education systems often remain ivory towers apart from the health service system [1]. Traditionally, basic medical education in African countries was mainly hospital-based, high-technology-oriented and focused on cure, with little attention given to preventive and promotive care [2]. This has resulted in the production of medical doctors who could only work in secondary and tertiary health centers, thereby leaving the primary health care centers underserved or not served at all. This traditional system of medical education does not adequately prepare doctors in developing countries for their expected leadership role in meeting the health care needs of the entire population, particularly those in rural areas [1]. Several governmental and nongovernmental agencies have supported the establishment of medical schools that are more community oriented and that train doctors with a great sense of service and a strong inclination toward broad community care and preventive medicine [3]. Following this, many medical schools have established rural or remote area postings for medical students to tackle ill-health within the community [3]. Likewise, some dental schools, such as the University of Ibadan Dental School, Ibadan, Nigeria, rose to the challenge of establishing rural dental education postings for dental students to tackle the nation's major oral health problems. In addition, this rural element of the undergraduate dental curriculum was to enable dental students to see firsthand the practice and lifestyles of inhabitants of rural communities, with a view to being stimulated to want to work in areas of unmet oral health needs.
The Dental School, University of Ibadan, Nigeria was established in 1975 to produce dentists who will apply knowledge and skills in the dental sciences to the management of the oral diseases and conditions of Nigerians. The clinical dental training programme was based at the Dental Center, University College Hospital, Ibadan, the capital of Oyo State, Nigeria. Community Dentistry was taught as part of the undergraduate curriculum, but there was no rural or remote area exposure for dental students to appreciate and apply the knowledge and skills acquired in underserved communities. However, the medical education programme of the University of Ibadan had a rural posting programme which serves as a rural immersion experience for medical students. This programme was established in the 1960s at Igboora, a rural community in Ibarapa District in Oyo State, Southwestern Nigeria (Figure 1). Igboora, the headquarters of Ibarapa Central Local Government Area of Oyo State, is situated about 80 km south of Ibadan and is inhabited by about 60,000 people whose main occupations are farming and trading [4]. The rural posting programme of the medical school of the University of Ibadan is part of the Ibarapa Community Health Programme, a joint programme between the Ibarapa communities, the three Local Government Areas (LGAs), the Oyo State Government, the University College Hospital and the University of Ibadan. Medical students, during their community medicine posting, live in and carry out community health services in Ibarapa and its environs for 6 weeks. This rural medical posting has contributed greatly to the training of medical doctors, since they are able to better understand the complex relationship between people's way of life and health. In addition, the programme has also led to the improvement of the wellbeing of the inhabitants of Ibarapa.
The beginning
In 2007, the Department of Periodontology and Community Dentistry, Faculty of Dentistry, University of Ibadan established the Ibarapa Community Oral Health Programme with a view to providing oral health care for the inhabitants of Ibarapa and its environs. Similarly, this programme was established to provide a location for community-based dental education so that dental students can observe the various determinants of ill-health in underprivileged communities and meet patients in a real primary health care setting. The programme was funded by the MacArthur Foundation and strongly supported by the Vice Chancellor of the University of Ibadan, the Provost of the College of Medicine and the Director of the Ibarapa Community Health Programme. A committee comprising consultants in community dentistry, resident doctors in community dentistry, dental officers, a public health nurse and an administrative officer was constituted to manage the activities of the Community Oral Health Programme. This committee had several meetings which mapped out ways of achieving the set aims and objectives. The committee also had several meetings with the management of the Ibarapa Community Health Programme so as to ensure the effective integration of the oral health programme into the existing health facilities at the General Hospital, Igboora. In addition, discussions, which were strongly supported by the Dean of the Faculty of Dentistry, University of Ibadan, were also centered on how dental students could begin a rural posting, especially when their medical student counterparts were undergoing their rural posting. Undergoing the programme with the medical students was to enable both groups of students to share ideas and knowledge about the health care needs of needy populations.
A familiarization visit was made to the General Hospital, Igboora by both the Ibarapa Community Oral Health and the Community Health teams. During this visit, the consulting clinics, laboratories, pharmacy, wards, theatres, lecture rooms, games room, staff quarters, dormitories, canteen and water collection points were inspected. After the visit, a meeting was convened where two rooms for the dental clinic, two blocks of four-room self-contained apartments for the dental surgeons, and female and male dormitories for dental students during the rural posting at this health facility were allocated to the Community Oral Health Programme. These facilities were renovated and put into proper condition so that students can be well motivated to perform their duties. A report on rural practice preferences among medical students in Ghana showed that medical students valued rural job attributes that enabled them to perform well clinically and live comfortably [5]. Dental materials, instruments and equipment were purchased and placed in the dental clinic. The service of an in-house trained clinic assistant who is an indigene of the local community was engaged. The dental clinic was later commissioned by the traditional ruler and chiefs of Igbo-ora in the presence of the administrative heads of the Ibarapa Central LGA and the Dental School, University of Ibadan. The commissioning of the dental clinic was followed by an oral health care training programme for school teachers and community health workers in Ibarapa Central LGA. This training programme was to create awareness about oral health care among them. In addition, it was to increase their capacity to identify people with oral diseases and make adequate referrals, thereby making them signposts. This training programme has also been carried out in other LGAs in Ibarapa district.
The rural dental education posting
The rural dental education posting of the Dental School, University of Ibadan, Nigeria was started in 2008 at Igboora, Ibarapa Central Local Government Area of Oyo State, Nigeria. Towards the end of the academic session each year, 35 to 40 fifth year dental students, after acquiring adequate clinical training in Dentistry and Medicine, are scheduled to go for this posting, which lasts for six weeks under the supervision of consultants in community dentistry, assisted by dental officers and resident doctors in Community Dentistry. This posting is usually undertaken when medical students are also undergoing their rural medical posting, thereby allowing for academic and social interactions among the two groups of students. Before travelling to Igboora, dental students spend the first week of the posting at the Dental Center, University College Hospital, Ibadan receiving lectures on research methodology, developing a group research project, receiving briefings on the rural dental education programme and collecting posting booklets. At the beginning of the second week, on arrival at Igboora, they are usually received by a senior registrar in Community Dentistry and the administrative officer of the Ibarapa Community Health Programme. The administrative officer provides them with accommodation in the dormitories and shows them the canteen where they can buy their food. This officer also shows them the taps and tanks where they can fetch water for drinking and bathing. On the second day after arrival, a rotation chart or work schedule is read and given to them by the senior registrar to allow for the effective and proper coordination of the rural dental education programme. The students, led by the senior registrar, then pay advocacy visits to administrative and traditional leaders in the community so as to establish trust and goodwill. In addition, the aims and objectives of the rural dental education programme are explained to these leaders so as to gain community participation. From an ethical perspective, community involvement in matters that fundamentally affect the delivery of health services at a local level is desirable and appropriate [6]. Similarly, support from the community is one of the factors perceived to influence undergraduate medical students' willingness to work in rural communities [7]. Guided by the work schedule, the dental students visit various populations in immunization clinics, antenatal clinics, market places, schools, local government area secretariats and venues of meetings of the various artisans. During such visits they carry out community diagnosis by screening for oral diseases. Patients with oral diseases are referred for routine dental treatment at the Dental Clinic in the General Hospital, Igboora. Patients who require specialist dental care are referred to the Dental Centre, University College Hospital, Ibadan. They also provide oral health education on the prevention and treatment of common oral diseases, with emphasis laid on oral diseases that are prevalent among people of low socioeconomic groups. An oral health education folk song has been developed by a group of dental students, and this song is sung in both primary and secondary schools in and around Ibarapa. The song focuses on proper oral hygiene maintenance, and research is underway to determine the effectiveness of this song among children. Dental students also carry out their group research, which is relevant to the needs of the community. The senior registrar in Community Dentistry assists students in organizing their surveys and interpreting their findings; however, the main initiative remains with the students. All these activities are performed under the supervision of lecturers and consultants in Community Dentistry, assisted by Dental Officers on rotation in Community Dentistry, so as to guide and motivate the students. Supportive supervision has been noted to improve motivation among health workers and the quality of care [8,9]. The activities performed by the students are problem-based, self-directed and student-focused. Schmidt and colleagues [10] reported that problem-based, self-directed and student-focused learning approaches are based on the observation that when students are confronted with community health problems, rather than bits and pieces of fact learning, they are highly motivated to acquire the necessary skills for problem solving.
Lectures in oral disease epidemiology, research methodology and biostatistics are given to the dental students by the lecturers in Community Dentistry during their scheduled visits to Igboora. This is to complement the structured community observation and investigation. Dental students are divided into groups and given topics on oral health issues in rural communities as assignments, which they present and on which they are scored. These lectures and presentations are interactive and guide the students in their activities. A study [7] reported that medical students perceived the absence of guidance as one of the negative views on community-based training. The students also have clinical laboratory demonstrations and hands-on practicals on basic investigatory procedures, such as PCV and full blood counts, by the laboratory scientists. They are taken through some environmental health and community development programmes by environmental health and community development officers. Two days before the end of the posting, students carry out a research-to-policy programme, where they give feedback to the community on the outcome of the survey or group research. They give suggestions for actions to be considered by individuals, the government and health care providers. This programme is believed to effect policy change that will improve oral health. During this research-to-policy programme, members of the community representing various population groups, community leaders and administrative heads of the LGA are invited. A social or cultural night is usually organized for both medical and dental students by the Chairman of the Ibarapa Central LGA to show the appreciation of the community for the services rendered by the students. On their return to Ibadan after the rural posting, dental students also give oral presentations of their group research and are scored by the heads of the various departments in the Faculty of Dentistry, University of Ibadan, Nigeria who are present. The group research is also submitted as a thesis to the Department of Periodontology and Community Dentistry, Faculty of Dentistry, University of Ibadan, Nigeria and scores are awarded. All these scores form part of the continuous assessment for the final Bachelor of Dental Surgery Examination in Preventive Dentistry. Students are generally excited and satisfied about being able to make presentations before their teachers. This probably gives them confidence in presenting research findings at both local and international conferences, as evidenced by one of the presentations winning the Hatten/Unilever undergraduate poster competition at the 3rd Conference of the African and Middle-East Region of the International Association for Dental Research.
Administration of the rural dental education posting
The rural dental education posting is administered by the Department of Periodontology and Community Dentistry. The head of the department delegates the academic planning of the programme to the Community Dentistry unit. This unit also ensures that the curriculum is developed and reviewed. The unit also ensures that arrangements are made for the transportation of students, their accommodation and their field work. The Head of Department reports directly to the Dean of the Faculty of Dentistry.
The impact of the rural dental education programme
This rural posting enables students to make a real difference in the community, rather than the usual method of reading textbooks and not applying the knowledge gained. Non-application of knowledge makes the knowledge acquired not appear real. The posting enables students to get close to patients and understand their illnesses. It allows them to understand how the community works, and they are able to observe the various determinants of oral health. The communities benefit immensely from the rural dental education programme, not only from the oral health care provided but also from the understanding of what might be available through the oral health service and oral health education. Anecdotal reports have shown that the majority of dental students who attended the rural posting reported that the rural dental education experience met their expectations by identifying and sensitizing them to community needs. In addition, they mentioned that they were able to work as a team, developed problem-solving and self-directed skills, and found the rural dental education relevant to their present function. Research reporting the rural posting experience among final year dental students of the University of Ibadan, Nigeria is ongoing. In the future, research that will systematically and comprehensively evaluate the rural dental education programme should also be carried out.
The posting allows dental students to be trained in the ability to solve oral health problems based on the available resources, the ecology, and the culture and traditions of the people. This posting helps to transform the image and practice of the dental profession, making dental students more acceptable to the people and making dental education relevant to community needs. This rural, community-oriented and problem-based educational strategy is an immersion experience that will better prepare dental students and help to address shortages of dentists in cross-cultural and underprivileged communities. In the long term, it will help in recruiting and retaining dentists in rural and remote areas. A previous report [11] shows that rural placements will enable health professionals who are unwilling to work in rural areas to do so, since they will acquire experience in health systems and services in rural areas. Exposure to rural health care during training is one of the predictors of health professionals' choices for recruitment or retention in jobs in rural areas [12][13][14]. The impact of this community-based and problem-based Primary Oral Health Care (POHC) educational strategy on dental education and practice in Nigeria cannot be overemphasized. It will result in the training of dentists with a strong orientation towards priority oral health problems and community programmes. It will also help students to adopt a holistic approach in their future clinical work. Furthermore, this will strengthen the performance of newly graduated dentists who are posted to PHC facilities in rural communities for the one-year National Youth Service Corps in the provision of oral health services. Rural-based training placements might enable trainees to overcome the culture shock of those who have never been to other areas of the country, or to rural areas (van Diepen et al. [15]).
Challenges of the rural dental education programme
Kaye et al. [7] mentioned inadequate support facilities, such as internet access and good libraries, as challenges to rural medical programmes. This was also observed in this programme; however, recommendations have been made to the authorities on the need to provide these facilities. Providing these facilities will better position rural medical and dental education postings to meet their goals. The majority of final year medical students of the University of Lagos, Nigeria who had rural exposure in the PHC programme of their school reported that the programme should not be scrapped; rather, it should be better funded to achieve the desired objectives [16]. Government should fund these programmes so as to motivate students to attend them. This will ultimately translate into the development of interest in working in rural and remote areas. One other challenge was the inability of some students to understand the local language and culture of the people; however, this was managed by dividing the students into groups comprising those who can speak and understand the local languages and cultures and those who cannot. The former were informed of the importance of helping to translate the local language into English for the latter.
Conclusion
This rural dental training programme of the University of Ibadan Dental School has been successful since its establishment, and it is believed that it has adequately prepared the dentists exposed to it for their expected leadership role in meeting the oral health needs of their communities. The programme, as part of the undergraduate dental training, could be one of the ways of producing good community dentists. Dental schools, especially in developing countries, that are yet to develop a rural dental education programme should do so, thereby demonstrating their role in providing oral health for all. This will also enable students to have enough confidence to provide routine primary oral care services independently in a setting where there are no multidisciplinary supports or advanced diagnostic devices. Exposure to rural health care, together with other factors such as good remuneration and good working conditions, could help in the recruitment and retention of dentists in rural areas.
Figure 1: Map of Ibarapa Central Local Government Area, Oyo State, Nigeria.
2018-04-03T01:12:36.032Z
2013-04-11T00:00:00.000
{ "year": 2013, "sha1": "dcc7cbc3b2ac49ae3acf6f03dc5561807c36b0aa", "oa_license": "CCBY", "oa_url": "https://doi.org/10.22605/rrh2241", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "499f5a0cd007511879c3b03da4cf5425f073016e", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Sociology", "Medicine" ] }
229349627
pes2o/s2orc
v3-fos-license
Who practices urban agriculture? An empirical analysis of participation before and during the COVID-19 pandemic
Abstract: Coronavirus disease-2019 (COVID-19) disrupted the food system, motivating discussions about moving from a dependence on long food supply channels toward shorter local supply channels, including urban agriculture. This study examines two central questions regarding the adoption of urban agriculture practices at the household level during the COVID-19 pandemic: whether the outbreak of the novel coronavirus elicited participation in urban agriculture (e.g., community growing and home growing) and what the characteristics are of individuals who participate. To answer these questions, we conducted two online surveys in Phoenix, AZ, and Detroit, MI. The first round occurred during 2017 and the second during the lockdown in 2020. Using bivariate probit models, we find that (1) considerably fewer individuals participate in urban agriculture at community gardens compared to at-home gardening; (2) participation overall is lower in 2020 compared to 2017; and (3) respondents in Detroit practice urban agriculture more than respondents in Phoenix. Across both cities, our results suggest that the continuity of individuals' participation in growing food at community gardens and at home is fragile. Not all characteristics that determined who participated in community gardens before COVID-19 determine the likelihood of participation during the pandemic. In addition, growing food at home before COVID-19 was practiced by larger households and employed respondents, yet, during the pandemic, we find that home growing was more likely when children were in the household and households were smaller and younger (Detroit), and younger and more educated (Phoenix). These findings suggest that many urban households' food-growing practices may not yet be mainstream and that other barriers may exist that inhibit households' participation.
| INTRODUCTION
The 2019 novel coronavirus has disrupted many industries and behaviors across the globe. Among others, "economic activity, employment, food consumption, and workplace environments" have seen significant shifts (Coble, 2020, p. 3). Notably, the onset of shelter-in-place orders resulted in many individuals spending more time at home, which altered the way that households spent time on domestic activities. Not only did individuals increase their home cooking (Lusk & McCluskey, 2020; Thilmany McFadden & Malone, 2020), with 51% of consumers preparing 91%-100% of their meals at home (IRI, 2020), but anecdotes indicate that spending more time at home inspired households to start growing food (Walljasper & Polansek, 2020). In this article, we examine participation in small-scale urban agriculture, meaning urban agriculture related to household food production such as participation in home growing and growing at community gardens (McDougall et al., 2019), during the 2020 lockdown and compare these behaviors to a sample of households in the same geographic areas before the pandemic.
The motivations for home growing during the pandemic may be attributed to several factors. Recently, it was reported that households did so to counter food shortages, minimize the frequency of shopping trips, or avoid going to the store altogether, even if gardening at home only proved to be "supplementary" (Russell, 2020; Thilmany McFadden & Malone, 2020; Timmins, 2020; Walljasper & Polansek, 2020). Additional evidence also suggests that seeds experienced unusually high demand, with orders for some companies being six to ten times higher than normal (Timmins, 2020). In addition, the spike in food prices may have served as a further incentive to start home growing. The grocery store consumer price index saw an increase of 2.7% from March to April 2020, a 4.1% increase from April 2019 (Lusk & McCluskey, 2020). Gardening was also adopted as an activity for children, as well as for filling free time due to loss of work, canceled events, and closed businesses (Walljasper & Polansek, 2020). From an empirical standpoint, questions remain regarding whether the shelter-in-place orders, and the surrounding uncertainty of COVID-19, motivated consumers to grow food themselves and what the characteristics are of those individuals who adopted gardening practices. This study aims to contribute to the literature by investigating participation in small-scale agriculture before and during COVID-19, with a quantitative comparative analysis of households engaged in food growing at home and at community gardens before and during the lockdown. While seed sales are not a valid indicator of a surge in home growing, this trend underscores an issue with potential economic implications. On the supply side, concerns within the agribusiness sector suggest that home food growing could lower demand for fresh fruits and vegetables produced by large-scale growers (Walljasper & Polansek, 2020). On the demand side, if households allocate time to gardening activities, they may do so at the expense of other activities, albeit with possible positive externalities, such as an added source of healthy food consumption and increased food security (Warren et al., 2015; Zezza & Tasciotti, 2010). In this sense, participation in home growing or community gardens may elicit deeper insights into the role of small-scale urban agriculture and its position within the local food system. Against this background, we use data collected in March/April 2017 and May 2020 for two major US metropolitan areas to compare participation in small-scale urban agriculture before and during the pandemic. We focus on urban agriculture (the growing and processing of food in or around metropolitan areas) because people living in urban areas in the United States amount to more than 80% of the population (World Bank, 2016). The expansion of urban areas places a heavy toll on local resources, such as (affordable) food retail options, to meet the needs of households within cities, which was highlighted by COVID-19. As mentioned above, urban agriculture has found new popularity and could even be seen as a "catch-all", given that it provides access to local and fresh food (Thilmany McFadden & Malone, 2020) and enables those who grow their own food to be less dependent on traditional food outlets (Walljasper & Polansek, 2020).
This study is most similar to Bellemare and Dusoruth (2020), who found that high-income households in Montreal were more likely to participate in urban agriculture during the pandemic; yet, to the best of our knowledge, no study has examined the characteristics of urban households who engaged in home growing both before and during the lockdown, which we seek to address. The remainder of the paper is organized as follows. Section 2 provides a brief literature review of urban agriculture. Section 3 describes the design of the study. Section 4 presents the empirical results, and Section 5 concludes the paper.
| BACKGROUND
With urbanization on the rise, urban agriculture has received more attention in recent years (e.g., Bellemare & Dusoruth, 2020; Dimitri et al., 2016; Grebitus et al., 2017, 2020; McDougall et al., 2019; Printezis & Grebitus, 2018; Warren et al., 2015). Urban agriculture "is a dynamic concept that comprises a variety of livelihood systems ranging from subsistence production and processing at the household level to more commercialized agriculture. It takes place in different locations and under varying socioeconomic conditions and political regimes" (FAO, 2007, p. V). Urban agriculture is a growing sector within the farming industry that aims to increase overall food production in urban and periurban areas through the conversion of available land into agricultural farms. Local and small-scale food production has been integrated into urban areas across US cities, whether as commercial urban farms or community gardens, as well as through growing food at one's home (Hughes & Boys, 2015; Printezis & Grebitus, 2018). According to the USDA (2020), urban agriculture "takes the form of backyard, roof-top and balcony gardening, community gardening in vacant lots and parks, roadside urban fringe agriculture and livestock grazing in open space." Urban food production often focuses on specialty crops, which include most fruits, vegetables, and tree nuts. Compared to traditional agriculture and commodities, these foods are rich in nutrients, vitamins, and minerals, which are considered part of an optimal diet (WHO, 2018). A number of studies have looked at the benefits of small-scale urban agriculture as it relates to food production, dietary patterns, and food security. Research finds that growing food enhances knowledge of food utilization, for example, cooking and preserving vegetables (Libman, 2007). Furthermore, there is a positive association between having grown food and produce consumption (Libman, 2007; Van Lier et al., 2017). These findings are supported by another study, which finds that not only do indicators of health and well-being improve, but home gardening also provides increased access to affordable and nutritious produce and improves food security for the community (Kortright & Wakefield, 2011). Furthermore, from an environmental standpoint, the more individuals participate in urban agriculture, the greater the impact on air quality, the reduction in food miles, and the mitigation of urban heat islands. Aside from the benefits associated with participating in small-scale urban agriculture, participation requires the availability of tangible and intangible resources, such as land, equipment, seeds, and an elementary knowledge of gardening practices. These inputs are directly correlated with garden yields, either at the community or household level.
One study, conducted in Chicago, IL, compares the role of community and home gardens in terms of food production, and the authors find that only a small percentage of sites are community gardens producing food (Taylor & Taylor Lovell, 2012). Rather, home gardens make up the majority of urban food production areas. Given the additional constraints imposed by the shelter-in-place orders, home gardening may have been more attractive to individuals interested in participating in home food production. Except for Bellemare and Dusoruth (2020), little empirical research has examined participation in small-scale urban agriculture during the pandemic. In their study, Bellemare and Dusoruth (2020) elicit participation among households in Montreal, and find that respondents who are lower-income and male are less likely to participate in urban agriculture, while being middle-aged, home ownership, and larger household size increase the likelihood of participation. Unlike their study, which also considers herb growing, we focus solely on household food growing of fruits and vegetables, as we assume that herbs do not add substantially to food consumption. Similarly, home gardening often includes flowers, which we exclude from our investigation for the same reasons as herbs. We aim to extend the existing research by utilizing data on urban agriculture from two major US cities three years before the pandemic and during the novel coronavirus pandemic to investigate whether the conditions surrounding COVID-19 led more or fewer households to practice urban agriculture than before. Developing a deeper understanding of households participating in small-scale urban agriculture may provide key insights useful for food retailers and food manufacturers who must quickly adapt to a constantly changing environment, as well as for policymakers, for example, with regard to community planning.

| Data

To investigate participation in urban agriculture and the extent to which the outbreak of COVID-19 changed participation, we pose the following research questions (RQ):

RQ1: Who participated in urban agriculture before COVID-19?
RQ2: Who participated in urban agriculture during COVID-19?

To answer these questions, we use data from two online surveys conducted in March and April 2017 (between March 30, 2017 and April 10, 2017) and May 2020 (between May 13, 2020 and May 30, 2020). The survey carried out during COVID-19 took place in May, after most of the early constraints receded, conversations about reopening the economy took place, and a stimulus had been sent to many individuals. While we do not know whether households that practice urban farming during COVID-19 will continue doing so after the pandemic, we believe it is worth comparing behavior before and during this event to get an indicator of possible future behavior. We select Detroit, MI, and Phoenix, AZ as the study sites for our comparative analysis. Both the Detroit and Phoenix metropolitan areas are among the 15 largest core-based statistical areas in the United States, with Maricopa County, AZ, as the fastest-growing county in the United States (US Census Bureau, 2019). Population density demonstrates an important need for sustainable urban farming practices, given the benefits of food security, economic stability, and sustainability. Beyond population density, Detroit, MI was chosen due to rapid economic development and opportunities for small-scale urban agriculture (Carmody, 2018). As of 2019, Keep Growing Detroit (2019) reported nearly 1,600 urban gardens and farms in the Detroit metro area.
Additionally, Detroit is a city characterized by a history of food access issues, where a high proportion of households live without access to a supermarket or large grocery store (e.g., Budzynska et al., 2013; Taylor & Ard, 2015). Hence, there is an opportunity to alleviate some of the burdens of poor food access when consumers grow food themselves. We chose Phoenix, AZ as the second location because Phoenix provides a context with a similar diversity of residents, as well as barriers to accessing healthy, affordable foods. In addition, unique climatic conditions characterize Phoenix. Namely, Phoenix has a climate where food can be grown all year round. Moreover, extreme weather conditions are common, including short- and long-term drought and seasonal monsoons that can bring rapid flooding. Finally, Phoenix has begun to recognize urban agriculture as an attractive fixture in revitalizing communities, especially since urban expansion has replaced nearby agriculture at a large rate (Shrestha et al., 2012), and, unlike Detroit, vacant land that can potentially be used for urban farming is more readily available in Phoenix (Aragon et al., 2019).

| Summary statistics

In 2017, a total of 840 respondents completed the survey, with n = 420 (50%) participants from Phoenix, AZ, and n = 420 (50%) from Detroit, MI. In 2020, survey participation was slightly lower for Phoenix with n = 412 participants, compared with n = 449 from Detroit, MI. Of the pooled sample (n = 1,701), 28 observations were dropped due to missing demographic data, resulting in a final sample of 1,673 responses. In addition, because of the differences in sampling populations between 2017 and 2020, we applied weights to the 2020 sample to match the distribution of demographic characteristics from the 2020 data to the 2017 data. The demographic characteristics considered were gender, age, educational attainment, presence of children in the household, race, and income in the respective locations. Weights were constructed using an iterative proportional fitting technique (Izrael et al., 2000). Thus, when weights are applied to the data, the differences in means of these demographic variables between the 2017 and 2020 samples in both locations are not statistically significant, with the exception of "Employed, Part-Time," "Unemployed, not looking for work," "Disabled," and "Asian" (see Table 1). In this sense, we can make comparisons between years within each study site. In regard to generalizability to the US population, Dynata applied a quota according to age and gender; therefore, the sampled populations are consistent with national US Census estimates.

Some 25% of respondents have children in the household, with an average household size of 2.7 persons. Regarding employment, 42% of the participants are employed full time and 14% are employed part-time, while 20% are retired, 6% are students, and 7% are disabled. About 8% of the participants are unemployed and looking for work, and 6% are unemployed and not looking for work. The 2020 sample consists of 53% female, 46% male, and 1% nonbinary gender respondents. The average age of participants is 53 years.
The education level of the sample ranges from high school diploma (13%), some college experience (23%), 2-year degree (11%), and 4-year degree (32%), to a professional degree (18%) and doctorate (2%). Several participants indicated multiple statuses, such as being employed and a student. Table 1 shows socio-demographics by location for both years. (Note to Table 1: all values are in percentages unless otherwise noted; *, **, and *** indicate that the difference in means between years within each location is statistically different from zero at the 90%, 95%, and 99% confidence levels, respectively. Iterative proportional fitting was used to construct weights for the 2020 responses to match the distribution of the 2017 sample.) It is noticeable that the 2020 sample is, on average, older than the 2017 sample. Also, the income and education levels are higher, but employment is lower. In addition, there are more retired participants in 2020, which likely correlates with the age difference. We include these (weighted) socio-demographics in the analysis as independent variables to test their associations with urban agriculture participation.

| Urban agriculture participation before and during COVID-19

To analyze participation in urban agriculture, we asked respondents whether they grow food at home or at community gardens. While the survey responses ranged from never (0) to always (4), we recoded the answers into a dummy variable where zero equals never and one equals at least sometimes. This allows us to account for individuals who might be very involved in food production, but also for the "hobby gardener" who might only grow food sometimes. Also, not everyone might grow food all year round, whether for weather reasons or other circumstances. Phrasing the answer categories sufficiently openly allows us to capture those involved in small-scale food production.

Results in Table 2 show several differences. First, considerably fewer individuals participate in urban agriculture at community gardens compared to at-home gardening. Furthermore, respondents in Detroit participate more than those in Phoenix. In 2017, 35% of Detroit participants grew produce at community gardens, while 67% did so at home. In Phoenix, only 23% grew produce at community gardens and 57% grew produce at home. All shares are higher in 2017 than in 2020 during COVID-19. In the 2020 survey, 23% of respondents from Detroit report growing produce at community gardens, and 63% at home. Again, numbers for Phoenix are lower, with 21% reporting that they grow produce at community gardens and 51% stating that they grow produce at home. These descriptive statistics do not point toward a spike in urban agriculture participation due to the pandemic. Rather than seeing an uptick in participation, we note lower numbers, especially for community gardens. This observation, however, could be due to the social distancing and shelter-in-place guidelines during that time. Nevertheless, this does not explain the lower numbers for growing produce at home.

| Analysis of practitioners of urban agriculture before and during COVID-19

To investigate the determinants of small-scale urban agriculture participation before and during COVID-19, we use four bivariate probit models. The questions regarding growing produce at home and at community gardens serve as dependent variables. We estimate one model each for Phoenix and Detroit in 2017 and 2020.
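The weight construction and recoding described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code; the data frames and column names (survey_2017, survey_2020, grow_home_freq, and the weighting variables) are hypothetical placeholders.

```python
import numpy as np
import pandas as pd

def rake_weights(df, targets, n_iter=25):
    """Iterative proportional fitting (raking): returns unit weights such that
    the weighted category shares of each variable in `targets` match the
    target shares (here: the shares observed in the 2017 sample).
    Assumes every target category actually occurs in df."""
    w = np.ones(len(df), dtype=float)
    for _ in range(n_iter):
        for col, shares in targets.items():
            total = w.sum()
            by_cat = pd.Series(w).groupby(df[col].to_numpy()).sum()
            for cat, share in shares.items():
                w[(df[col] == cat).to_numpy()] *= share * total / by_cat[cat]
    return w * len(df) / w.sum()  # normalize to a mean weight of 1

# Recode "never (0) ... always (4)" into a participation dummy
# (1 = at least sometimes), as described in the text:
survey_2020["grow_home"] = (survey_2020["grow_home_freq"] > 0).astype(int)

# Target shares come from the 2017 sample for each weighting variable:
weight_vars = ["gender", "age_group", "education", "children", "race", "income"]
targets = {c: survey_2017[c].value_counts(normalize=True).to_dict()
           for c in weight_vars}
survey_2020["weight"] = rake_weights(survey_2020, targets)
```

The bivariate probit models themselves (detailed in the next paragraphs, including the Wald test of Rho) can be estimated by maximum likelihood. The sketch below is a minimal, unweighted illustration using scipy; it loops over observations (slow for large samples) and omits the survey weights and the BIC-based model selection mentioned in the table notes.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def neg_loglik(params, X, y1, y2):
    """Negative log-likelihood of a bivariate probit:
    P(y1, y2 | x) = Phi2(q1 * x'b1, q2 * x'b2, q1 * q2 * rho), qk = 2*yk - 1.
    rho is parameterized as tanh(a) to keep it inside (-1, 1)."""
    k = X.shape[1]
    b1, b2, a = params[:k], params[k:2 * k], params[-1]
    rho = np.tanh(a)
    u1, u2 = (2 * y1 - 1) * (X @ b1), (2 * y2 - 1) * (X @ b2)
    r = (2 * y1 - 1) * (2 * y2 - 1) * rho
    ll = 0.0
    for a1, a2, ri in zip(u1, u2, r):
        p = multivariate_normal.cdf([a1, a2], mean=[0.0, 0.0],
                                    cov=[[1.0, ri], [ri, 1.0]])
        ll += np.log(max(p, 1e-300))  # guard against log(0)
    return -ll

def fit_bivariate_probit(X, y1, y2):
    k = X.shape[1]
    res = minimize(neg_loglik, np.zeros(2 * k + 1), args=(X, y1, y2),
                   method="BFGS")
    rho = np.tanh(res.x[-1])
    # Wald test of rho = 0, via the delta method on the tanh transform;
    # res.hess_inv approximates the parameter covariance matrix
    se_rho = (1 - rho ** 2) * np.sqrt(res.hess_inv[-1, -1])
    return res.x, rho, rho / se_rho  # estimates, rho, Wald z-statistic
```

A significant positive z-statistic for Rho would, as discussed next, indicate that growing at home and at a community garden are positively related and that the joint bivariate specification is preferred over two separate probits.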
We use bivariate probit models to test whether the production of produce at home and at the community garden are related to each other. This is tested using the Wald test of Rho. A significant and positive Rho would suggest that those who grow at home are also more likely to grow at the community garden, whereas a significant and negative Rho would indicate otherwise. If Rho is not significant, individual probit models are more appropriate. Our results show a significant and positive Wald test of Rho for all four models, indicating that the bivariate probit models are appropriate.

To begin, we compare Detroit and Phoenix with regard to participation in community gardening (see Table 3). In Detroit in 2017, characteristics including being male, younger, having a lower level of education and less income, and being nonwhite increase the probability of growing produce at a community garden. In Phoenix in 2017, being younger and not being a student increases the likelihood of growing produce at a community garden. Results for Detroit during COVID-19 are slightly different than in 2017. Again, being male and younger increases the probability of participating in urban agriculture at a community garden; in addition, not being a student determines urban agriculture participation. Results in 2020 differ quite a bit for Phoenix compared to 2017. During the coronavirus pandemic, people in Phoenix are more likely to participate in urban agriculture when being male, having children in the household, being younger, having a lower income, and being employed.

Compared to growing produce at community gardens, growing produce at home is determined by different variables. In Detroit in 2017, larger households and not being unemployed made it more likely to participate in growing produce at home. However, during the coronavirus pandemic, younger individuals with children in the household and larger households were more likely to grow food at home in Detroit. In Phoenix in 2017, males with higher income were more likely to grow food at home. Results from the survey during COVID-19 show that younger individuals with a higher education level, retired respondents, students, and White, Black or African American, and American Indian or Alaska Native respondents are more likely to grow produce at home in Phoenix. 1

| DISCUSSION AND CONCLUSION

Anecdotal evidence suggests that households were increasingly participating in urban agriculture during the COVID-19 pandemic. The potential benefits of urban agriculture are well established in the literature, such as the link between healthy dietary patterns and improved food security for those who practice food growing; yet, during the pandemic, new motivations to garden became known. These reasons included any number of the following: individuals were anxious about going to the store, they wanted to be prepared against out-of-stock situations, they felt a need to become more independent from the traditional food supply, parents needed an activity to entertain children, and individuals simply adopted a new hobby as they found themselves spending more time at home. Nevertheless, adopting home-growing activities is not without tradeoffs. For example, partaking in home food growing may not be an optimal use of time, especially for individuals who may have lost their jobs due to the shelter-in-place orders. In this study, we set out to investigate two central questions about who practices urban agriculture more generally (already before COVID-19) and who participated during COVID-19.
To investigate small-scale urban agriculture participation before and during COVID-19, we use data from online surveys conducted in Phoenix, AZ, and Detroit, MI in 2017 and 2020 (during the first wave of the coronavirus). Overall, we did not find that urban agriculture participation increased during the pandemic. On the contrary, we found that participation was lower than 3 years ago.

TABLE 2  Participation in urban agriculture before and during COVID-19

                            Detroit            Phoenix
                            2017     2020      2017     2020
Grow at community garden    35%      23%***    23%      21%
Grow at home                67%      63%       57%      51%**

Note: *, **, *** indicate that the difference in means between years within each location is statistically different from zero at the 90%, 95%, and 99% confidence levels, respectively.

1 In addition to the weighted results, we include the unweighted results in the Appendix. A comparison of both models shows that findings are similar between weighted and unweighted models with regard to significance, signs, and margins. In fact, the only coefficient that changed from being significant to being insignificant is household size, which is not significant for Detroit in 2020 for home gardening in the unweighted model.

Results show that about one-third of Detroit participants grew produce at community gardens in 2017 and two-thirds did so at home. In Phoenix, the share of those growing produce at community gardens was considerably lower at 23%, and the share of those who grow produce at home was also approximately 10 percentage points lower (57%). Urban agriculture participation during COVID-19 was lower in both cities, with 23% of Detroiters growing produce at community gardens and 63% at home. Phoenicians had lower numbers than Detroiters (21% grew produce at community gardens, 51% at home). These findings do not suggest that the pandemic led households to take up urban agriculture significantly. While other factors might have led to a reduction over time, it does not seem that the novel coronavirus motivated a large share of households to grow food.

We then used the information regarding growing produce at home and at community gardens as dependent variables and estimated the associations between a number of socio-demographics and participation. We found differences between 2017 and 2020, but also between metropolitan areas. In 2017, male, nonwhite, and younger Detroit respondents with a lower level of education and income were more likely to grow produce at a community garden. During COVID-19, education, income, and being nonwhite were no longer significant. In 2017 in Phoenix, being younger and not being a student increased the probability of growing produce at a community garden, but during COVID-19, male, younger participants with a lower income and children in the household were more likely to grow produce at community gardens. Growing produce at home in 2017 in Detroit was practiced by larger households and employed respondents, but during the pandemic this changed to younger individuals with children in the household and smaller households. In Phoenix in 2017, males with higher income were practicing home food growing, but during the pandemic younger individuals with a higher education level were more likely to do so. Our study is most comparable to Bellemare and Dusoruth (2020). However, while they found that lower-income households were less likely to participate in urban agriculture, we find the opposite for urban agriculture practiced at community gardens.
This might be an indicator that the structure of urban agriculture determines who participates. Lower-income households might not own a property with a yard or balcony that would enable them to grow food at home. Hence, distinguishing between urban agriculture settings might be important when studying participation in small-scale urban agriculture.

We acknowledge that, regardless of these results, participation in small-scale urban agriculture during a pandemic has its challenges. One barrier to adoption was that many nurseries selling plants and seeds were considered nonessential businesses, and therefore acquiring the resources necessary to grow food may have hampered the adoption of home or community gardening. One reason for the limited use of community gardens during the pandemic could be that community gardens were closed. Those dependent on public transportation might not have been able to get to the community garden. Community gardens often have waitlists, which might have prevented individuals not formerly involved in gardening from participating during the pandemic even though they wanted to. Most community gardens charge a fee, which could have prevented some from participating, especially considering that a high number of Americans lost their jobs during the pandemic. With regard to growing produce at home, a number of difficulties might have also arisen for gardening novices. These include: the low availability of seeds; not having a yard or balcony to garden; not having the resources for soil, fertilizer, pesticides, seeds, or seedlings; not knowing how to cook with the fresh produce; and not knowing how to grow produce. Ultimately, while a lack of (financial) resources may be overcome, the lack of knowledge might be harder to tackle. Especially if one invests resources, the disappointment of plants not growing, dying, or not bearing fruit might easily discourage individuals from pursuing agriculture in the long term.

Our results suggest to stakeholders in the food industry that individuals continue to have a strong dependence on traditional food supply chains, as we saw fewer households shifting toward household food production during the pandemic. However, there might be increased demand for certain businesses such as seed providers and distributors, producers of gardening soil, fertilizer and pesticides, gardening containers, garden centers, home improvement stores, irrigation systems, and so forth. While we usually focus on the traditional food chain from farmers to food retailers, this shows that there is another part of agribusiness that would benefit from higher participation in small-scale (urban) agriculture. Also, given the stock-outs of seeds, it would be of interest to investigate whether increased availability would increase participation in urban gardening. Furthermore, past research found that knowledge and education were determinants of urban agriculture participation (Grebitus et al., 2017). Hence, those who have an interest in increasing participation in small-scale urban agriculture, such as seed growers, could offer educational materials and classes to enable individuals to grow produce. For policymakers and urban planners involved in making community gardens available, our findings suggest the importance of making resources available and providing support to households who wish to grow food but may not have the expertise, time, or other resources necessary.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Measuring the Difficulties in Forming a Coalition Government

Electoral thresholds in the context of parliamentary elections are an instrument for preventing the fragmentation of parliaments and facilitating the formation of a coalition government. However, the clauses also introduce distortions and modify the equality of electoral votes in an election. In order to decide to what extent these negative effects can be accepted, it is necessary to measure the difficulties in forming a coalition government and to quantify the effects of electoral thresholds on these difficulties. For this issue, we introduce a concept based on cooperative game theory which takes into account the distribution of seats in parliament and coalition statements of parties.

Introduction

An electoral threshold is a provision in a proportional representation system according to which parties with below a certain share of all votes are not taken into account in the allocation of mandates. The justification for these electoral thresholds is that they prevent the fragmentation of parliaments and facilitate the formation of a coalition government [1][2][3]. In Germany, an electoral threshold of 5% exists at the federal level and at the state level. 1 For the reasons stated in the first paragraph, this threshold is compatible with the Constitution (German: Grundgesetz) in the view of the Federal Constitutional Court [4]. However, this compatibility does not have to be permanent, and circumstances must be taken into account. At the level of a European election, Germany also had a 5% electoral threshold (European Election Law). In a judgment on 9 November 2011, the Federal Constitutional Court ruled that this threshold is not compatible with the Constitution, because no coalition government is formed in the European Parliament [5]. Subsequently, the legislature introduced a 3% electoral threshold. This threshold was again rejected by the Federal Constitutional Court [6]. The question of whether and at what level an electoral threshold is compatible with the Constitution is therefore a current problem. An objective measure of when it is more difficult to form a coalition government, and of how this measure can be used to check whether an electoral threshold is appropriate to the circumstances in parliament, is still lacking.

To establish objective measures, cooperative game theory can be used to model party voting power in parliaments. Interestingly, the voting power of a party in a parliament does not necessarily correspond to only its share of votes. The distribution of votes among the other parties is also influential. For example, if party A has 40% of the votes in a parliament and the remaining votes are distributed among an infinite number of other parties, then the voting power of A is very high. However, if only one other party exists (with 60% of the votes), the intuitive assumption is that A's voting power will be close to zero, since it will be defeated in all votes. To determine the voting power of a party based on the vote shares of all parties, cooperative game theory provides an analytical framework with weighted voting games and voting power indices.

Another factor that influences the voting power of a party is its coalition statements. These statements announce preferred coalition partners and excluded coalition parties. The statements have two effects. The initial purpose is to raise the political profile of the party.
More specifically, politicians want to increase the attractiveness of their party and influence voters' behavior. After elections, the statements influence the bargaining strengths of the parties. For example, the exclusion of coalitions might reduce bargaining power. Some new indices or values of cooperative game theory exist that could measure a party's voting power in parliaments under consideration of the excluded coalitions. More specifically, Ref. [7] introduced and axiomatized the $EC^{Sh}$ value (excluded coalitions' value based on the Shapley [8] value). The $EC^{Sh}$ value enhances the approaches developed by [9,10] to model player preference for cooperation with some other players. In [11], the value was used to analyze the distributions of power in German coalition governments that were possible with respect to the opinion polls prior to the 2013 federal election. Reference [12] gives some general results on the effects of excluding cooperation in games using the $EC^{Sh}$ value. In addition, in our analysis we applied the Holler version [13,14] of the $EC^{Sh}$ value, the $EC^{Ho}$ value. Based on parties' levels of voting power measured by the $EC^{Sh}$ value and the $EC^{Ho}$ value, we applied a measure for concentration, the Herfindahl-Hirschman index [15], to calculate the concentration of voting power. Our interpretation is that if the concentration is low, we have a parliament in which forming a government or a majority tends to be more difficult.

We performed the following analyses in our approach. First, we considered some theoretical issues with the effect of electoral thresholds on the concentration measure. This was followed by three applications of the concentration measure. First, in a simulation study that randomized election results for five parties, we showed how the concentration of voting power changes on average in all simulations when the electoral threshold is increased step by step. In the following two applications, real election results and coalition statements were considered. The electoral thresholds were constant in both cases. These applications test how our proposed concentration measures would have behaved in a historical context and to what extent they are thus a meaningful measure of the possibility of coalition formation in parliaments. The first application involves the analysis of seat distributions in the parliament of the Weimar Republic between 1918 and 1932. The second application analyzed the distribution of seats in the German federal parliament between 1994 and 2021.

The remainder of this article is structured as follows. Basic definitions of cooperative game theory and our definition of voting power concentration are presented in Section 2. In Section 3, we present our applications. Section 4 concludes the paper.

Cooperative Game Theory

A TU (transferable utility) game is a pair $(N, v)$, where $N = \{1, 2, ..., n\}$ is the nonempty and finite set of players. In our paper, the parties are players. The coalitional function v assigns every subset K of N a certain worth $v(K)$, which reflects the economic abilities of K (i.e., $v: 2^N \to \mathbb{R}$ such that $v(\emptyset) = 0$). The number of players in N is denoted by n or |N|. A special case of TU games are weighted voting games. In these games, a voting body is modeled. Parliaments are voting bodies. A primary task is to determine the characteristic function v. Every party $i \in N$ has a voting weight $w_i \geq 0$ representing the seats of the party. 2
The coalition function v assigns the worth one (winning coalition) to a coalition $K \subseteq N$ if more than half of all seats are owned/governed by coalition K. Hence, v is given by:

$v(K) = \begin{cases} 1, & \text{if } \sum_{i \in K} w_i > \frac{1}{2} \sum_{j \in N} w_j \\ 0, & \text{otherwise.} \end{cases}$ (1)

A power index is an operator $\varphi$ that assigns (unique) payoff vectors to all games $(N, v)$ (i.e., uniquely determines a payoff for every party in every TU game). The Shapley value is one power index. To calculate the parties' Shapley payoffs, rank orders $\rho$ on N are used. The set of these orders is denoted by $RO(N)$; n! rank orders exist. The set of parties before i in rank order $\rho$, together with party i, is called $K_i(\rho)$. The Shapley payoff of a party i is [8]:

$Sh_i(N, v) = \frac{1}{n!} \sum_{\rho \in RO(N)} \left[ v(K_i(\rho)) - v(K_i(\rho) \setminus \{i\}) \right].$ (2)

When applying this value to parliaments, the Shapley value is called the Shapley-Shubik index, and the payoff of party i is interpreted as the voting power of i [16]. Another power index was introduced by [13,14]. They utilized ideas from [17,18]. For calculating the Holler voting power, the set of minimum winning coalitions of $(N, v)$ is used. 3 These coalitions are minimal, in that any party's defection will reduce the worth of the coalition to zero. They are defined by: $M(N, v) = \{S \subseteq N \mid v(S) = 1 \text{ and } v(T) = 0 \; \forall \; T \subsetneq S\}$. The voting power of party i is:

$Ho_i(N, v) = \frac{C_i(N, v)}{\sum_{j \in N} C_j(N, v)},$ (3)

whereby $C_i(N, v)$ measures the number of times a party i is a member of a minimum winning coalition. Since it is common in German parliaments to form coalition governments that are minimum winning coalitions, the Holler power index is an appropriate power index. The Shapley index and the Holler index differ in two ways. First, voting power with respect to the Holler index considers only minimum winning coalitions. Second, the Holler index weights the marginal contributions of parties equally. Therefore, considering both indices provides coverage for a variety of approaches to determining voting power and allows us to place our results on broader footing. A more detailed survey of power indices is presented in [20][21][22][23][24].

As mentioned in the Introduction, prior to elections, parties make coalition statements and exclude coalitions with certain parties. These statements prevent coalitions and are modeled below. The set of i's excluded coalition partners is denoted by $E_i$. A party excludes only coalitions with single parties; i.e., if party A can cooperate with both party B and party C, it cannot exclude cooperation in a tripartite alliance $\{A, B, C\}$; $|K| = 1 \; \forall \; K \in E_i$. The set of coalitions that are not allowed based on $E_i$ is called $X_i$, $X_i := \{K \subseteq N \mid K \setminus \{i\} \in E_i\}$ with $|K| = 2$. If i does not cooperate with j, we have $\{i, j\} \in X_i \cap X_j$; i.e., if a party i does not cooperate with j, j cannot cooperate with i. All inadmissible coalitions are denoted by $\Gamma := \{K \subseteq N \mid \exists \; S \in X_j, j \in N, \text{ with } S \subseteq K\}$. Thus, the admissible coalitions in the game $(N, v)$ are $\Omega := \{K \subseteq N \mid K \notin \Gamma\}$. A game with excluded coalitions is a tuple $(N, v, \Gamma)$. The primary idea of the EC value is that only admissible coalitions are considered [7]. When calculating the EC value based on the Shapley value, we have: 4

$EC^{Sh}_i(N, v, \Gamma) = \frac{1}{n!} \sum_{\rho \in RO(N): \, K_i(\rho) \in \Omega} \left[ v(K_i(\rho)) - v(K_i(\rho) \setminus \{i\}) \right].$ (4)

In the case of $\Gamma = \emptyset$, we have $EC^{Sh}_i(N, v, \Gamma) = Sh_i(N, v)$. The sum of the payoffs does not have to equal 1, unlike the Shapley value. This expresses the fact that coalition exclusions make it more difficult to find a majority. For example, if two parties, each with more than 25% but less than 50% of the seats in parliament, exclude any coalition with other parties, then the total EC payoffs will be zero. Similarly, we obtain the Holler version of the EC value by modifying $C_i(N, v)$:

$C_i(N, v, \Gamma) = |\{S \in M(N, v) \cap \Omega \mid i \in S\}|.$ (5)

We obtain [11]:

$EC^{Ho}_i(N, v, \Gamma) = \frac{C_i(N, v, \Gamma)}{\sum_{j \in N} C_j(N, v)}.$ (6)

In addition, for this value, the parties' payoffs need not sum up to 1.
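To make the definitions above concrete, here is a minimal Python sketch (not the authors' code) that computes the Shapley-Shubik and Holler indices and their EC variants for a small weighted voting game. It follows Equations (1)-(6) as reconstructed above; in particular, the treatment of inadmissible coalitions in ec_shapley and the unrestricted denominator in ec_holler are assumptions based on the surrounding text, not a verified implementation of [7,11].

```python
from itertools import combinations, permutations
from math import factorial

def inadmissible(K, excluded):
    # a coalition is inadmissible if it contains some excluded pair
    return any(frozenset(pair) <= frozenset(K) for pair in excluded)

def ec_shapley(weights, quota, excluded=frozenset()):
    """EC^Sh as read from Eq. (4): marginal contributions are counted only
    while the coalition formed along a rank order stays admissible.
    With excluded == {} this reduces to the Shapley-Shubik index."""
    n = len(weights)
    pay = [0.0] * n
    for order in permutations(range(n)):
        K, prev = set(), 0
        for i in order:
            K.add(i)
            if inadmissible(K, excluded):
                break  # supersets of an inadmissible coalition stay inadmissible
            cur = 1 if sum(weights[j] for j in K) > quota else 0
            pay[i] += cur - prev
            prev = cur
    return [p / factorial(n) for p in pay]

def minimum_winning(weights, quota):
    """All minimum winning coalitions M(N, v) of the unrestricted game."""
    n = len(weights)
    win = lambda S: sum(weights[i] for i in S) > quota
    return [frozenset(S) for r in range(1, n + 1)
            for S in combinations(range(n), r)
            if win(S) and all(not win(set(S) - {i}) for i in S)]

def ec_holler(weights, quota, excluded=frozenset()):
    """EC^Ho as read from Eqs. (5)-(6): count admissible minimum winning
    coalitions per party, normalized by the unrestricted total (the
    choice of denominator is an assumption)."""
    mwc = minimum_winning(weights, quota)
    denom = sum(len(S) for S in mwc)           # sum_j C_j(N, v)
    ok = [S for S in mwc if not inadmissible(S, excluded)]
    return [sum(1 for S in ok if i in S) / denom
            for i in range(len(weights))]

def herfindahl(payoffs):
    # concentration of voting power, Eqs. (7) and (8) below
    return sum(p * p for p in payoffs)

# Three parties with 45/35/20 seats (hypothetical numbers), quota = 50:
w, q = [45, 35, 20], 50
print(ec_shapley(w, q))                        # each party gets 1/3
ex = {frozenset({0, 1})}                       # parties 0 and 1 exclude each other
print(ec_shapley(w, q, ex))                    # payoffs now sum to less than 1
print(ec_holler(w, q, ex))                     # here it coincides with EC^Sh
print(herfindahl(ec_shapley(w, q)), herfindahl(ec_shapley(w, q, ex)))
```

In this worked example, adding the exclusion between the two largest parties lowers both the total payoff and the concentration measure, matching the interpretation given in the text.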
Again, in the case of $\Gamma = \emptyset$, we have $EC^{Ho}_i(N, v, \Gamma) = Ho_i(N, v)$. Similarly to our explanation of the $EC^{Sh}$ value, this means that the exclusion of coalitions with parties holding seats will lower the concentration of voting power (see next Section). The payoffs of the two values can be interpreted as a priori voting power in parliaments without the existence of a governing coalition. Decreasing payoffs of the parties mean that their influence on voting results decreases. Due to coalition exclusions, it is possible that $v(N) = 1$ is no longer distributed to the parties. This can be interpreted as a difficulty in forming a governing coalition. The extreme case, $\sum_{i \in N} EC^{Sh}_i(N, v, \Gamma) = 0$ and $\sum_{i \in N} EC^{Ho}_i(N, v, \Gamma) = 0$, respectively, then means that it is no longer possible to form a coalition government.

Concentration of Voting Power

The idea of our article is that the concentration of the a priori voting power of parties in a parliament operationalizes the capacity of the parliament to act. Concretely, we use the Herfindahl-Hirschman index [15] to determine the concentration of voting power of political parties. If this is low, we have a parliament in which forming a government or a majority tends to be more difficult, and possibly more parties are represented in a coalition government. In such cases, the electoral threshold can serve to reduce the number of parties in parliament and increase the concentration of voting power. In other applications, the Herfindahl-Hirschman index is used, for example, in sales markets to determine market penetration. In [26], the Herfindahl-Hirschman index is used to determine the concentration of voting power of the owners of firms. The Herfindahl-Hirschman index has been applied to measure political competition or the concentration of political power in parliaments based on the seat distribution of parties in [27][28][29][30]. Le Maux [31] applied the Herfindahl-Hirschman index to measure the political power within a coalition government after an election. In our approach, we determine the concentration of voting power in a parliament, H, as:

$H^{Sh} = \sum_{i \in N} \left( EC^{Sh}_i(N, v, \Gamma) \right)^2$ (7)

respectively

$H^{Ho} = \sum_{i \in N} \left( EC^{Ho}_i(N, v, \Gamma) \right)^2.$ (8)

Based on Equations (4) and (6), we deduce some first insights on the Herfindahl-Hirschman index: if $\Gamma = \emptyset$ and a single party holds more than half of all seats, that party obtains the entire worth and we have $H^{Sh} = H^{Ho} = 1$. In these cases, finding a majority or forming a coalition government is most readily accomplished. If $\Gamma = \emptyset$ and all parties are symmetric, the worth is distributed among all parties equally; we obtain for party i $EC^{Ho}_i = EC^{Sh}_i = 1/n$ and $H^{Ho} = H^{Sh} = 1/n$, respectively. In the case of $n \to \infty$, we have $H^{Ho} \to 0$ and $H^{Sh} \to 0$.

Simulation

With our simulation study, we aim to show the effect of electoral thresholds on the two concentration measures. The exclusion of coalition parties is not considered, since options for this are very diverse. 5 Consider, for example, a five-party election with electoral threshold b. 6 For one such seat distribution, we obtain $H^{Sh} = 0.2333$. By raising the threshold to b = 0.07, the distribution of seats becomes (with rounding) $w_2 = w_3 = 0.20$, $w_4 = 0.28$ and $w_5 = 0.32$; party 1 no longer enters parliament. For this seat distribution, we have $H^{Sh} = 0.3333$: the higher threshold increases the concentration of voting power.

We simulated election results with five parties 100,000 times. For this, we drew random vote shares $s_i$, assuming a uniform distribution of votes between 0 and 1 with $\sum_{i \in N} s_i = 1$. For each voting result, we consider electoral thresholds between 0% and 20% in steps of 0.25 percentage points. For each level of the electoral threshold, we determined which parties are represented in parliament and calculated each party's share of votes in parliament. From this, parties' Shapley payoffs and Holler payoffs were computed. These were used to calculate the concentration measures $H^{Ho}$ and $H^{Sh}$. Figure 1 shows our average results for all simulations.
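The simulation loop can be sketched as follows. This is a compact illustration, not the authors' code; it reuses ec_shapley and herfindahl from the sketch above, uses far fewer draws than the 100,000 in the paper, and is slow in pure Python.

```python
import numpy as np
# reuses ec_shapley() and herfindahl() from the previous sketch

rng = np.random.default_rng(0)
thresholds = np.arange(0.0, 0.2001, 0.0025)   # 0% to 20% in 0.25-pt steps
draws = 2000                                   # the paper uses 100,000
avg_H_sh = np.zeros(len(thresholds))

for _ in range(draws):
    s = rng.dirichlet(np.ones(5))              # uniform vote shares, sum = 1
    for t, b in enumerate(thresholds):
        inside = s >= b                        # parties clearing the threshold
        if not inside.any():                   # guard; the paper assumes b <= max(s)
            continue
        # seat shares after the threshold (footnote 6): renormalize among
        # the remaining parties
        w = [x / s[inside].sum() if ok else 0.0 for x, ok in zip(s, inside)]
        avg_H_sh[t] += herfindahl(ec_shapley(w, quota=0.5))

avg_H_sh /= draws                              # average concentration per threshold
```

Averaged over draws, avg_H_sh traces the kind of increasing curve over the threshold sweep that Figure 1 reports.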
Concentration with respect to the Holler index starts slightly below that of the Shapley one; both increase exponentially with the electoral threshold. Our results imply that the higher the electoral threshold, the stronger its impact on the concentration of voting power in parliament. Therefore, in our estimation, the legislature should carefully consider the level when introducing electoral thresholds and increase the requirements for consideration as the level of the threshold increases.

Weimar Republic Parliament: 1918-1932

In this section, we apply the two concentration measures to the parliament of the Weimar Republic. We consider the seat distribution in parliament and the main excluded coalitions of the parties. As the proportion of parties uninterested in democratic cooperation increased over time, it is generally acknowledged that the difficulties of forming a coalition government increased. We aim to show that the two concentration measures reflect these increasing difficulties.

Weimar Republic Parliament

The electoral system of the Weimar Republic was a proportional representation system. In principle, a party received one seat in the Imperial Diet (German: Reichstag) for every 60,000 votes cast. 7 An electoral threshold did not exist. Reforms of the electoral system were discussed throughout the existence of the Weimar Republic. One proposal was to increase the number of votes required per seat to 75,000. In addition, the aggregation of votes at the higher level was suggested to be capped. Other approaches included the introduction of elements of majority voting and electoral thresholds. The introduction of an electoral threshold would have increased the share of seats held by parties on the fringes of the party spectrum due to the high fragmentation of parties in the middle of the party spectrum. Therefore, this idea was rejected in the run-up to the 1930 election [32][33][34][35][36][37]. Decisions in the Imperial Diet were generally made by simple majority (exceptions existed, for example, for constitutional changes). The entirety of excluded coalitions in the Weimar Republic is quite complex [38][39][40]. We concentrate on the main exclusions that demonstrate the application of our approach, i.e., we focus on the anti-republican parties on the left and right of the party spectrum, respectively, which were not seriously interested in parliamentary work.
Concretely, we assume that the KPD (Communist Party of Germany) excluded all coalitions with other parties (with the exception of a possible cooperation with the USPD), and that the NSDAP (National Socialist German Workers' Party) 8 excluded all possible coalitions. 9 The results of our calculations are shown in Table 1. Whether an electoral threshold would have alleviated the problems remains to be seen, since the parties were particularly fragmented at the center of the party spectrum.

To show the effect of coalitional exclusions, we computed a counterfactual scenario in which no coalitional exclusions existed. The results are shown in Table 2. The resulting concentration of voting power was higher than when coalition exclusions were taken into account. The strengthening of the anti-republican parties on the left and right of the party spectrum leads to an increase in the concentration of voting power in this calculation, since the fragmentation of the parliament decreases. Hence, not taking coalition exclusions into account would suggest that coalition formation was not affected toward the end of the Weimar Republic.

More fine-grained modeling of the preferences of parties in the Weimar Republic parliament was applied in [41] for the 1919 election and the already undemocratic 1933 election. In their article, the parties are positioned on a spectrum and the values of the winning coalitions are weighted by the distances of the parties. After this, the Banzhaf-Penrose index [42] and intensity function values [43] are applied to them. In principle, the Shapley-Shubik index and the Holler index could also be applied to these weighted values of coalitions. The prerequisite for applying the approach by [41] is a detailed analysis of the positions and distances of the parties in the political spectrum of the Weimar Republic. Moreover, this method assumes that parties are positioned on a linear spectrum. However, cooperation between parties may be impossible despite their political proximity, e.g., if a splinter party breaks away from a party (e.g., SPD and USPD during the German Empire, or SPD and WASG in 2004 in Germany).

German Federal Parliament: 1994-2021

In this final application, we analyze the German federal parliament from 1994 to 2021. Again, we consider the seat distribution in parliament and the main excluded coalitions of parties. As the number of parties in parliament has increased over the years and the AfD, a party with which the other parties do not want to form a coalition, has recently entered the parliament, the latest protracted negotiations to form a coalition government show that the difficulties of forming a government have increased here as well. Again, we aim to show that the two concentration measures reflect these increasing difficulties. In addition, it is generally accepted that it was easier to form coalition governments in the period from 1994 to 2021 than at the end of the Weimar Republic; this should also be reflected in the concentration measures.

The election to the Bundestag (Federal Parliament) is regulated by law in the Federal Election Act (German: Bundeswahlgesetz). It is a personalized proportional representation election. Voters elect a direct candidate for their constituency with their so-called first vote. With the second vote, the state list of a party is elected. The distribution of seats in the Bundestag is based on the share of the second vote.
If a party receives more direct candidates than the number of seats it is entitled to according to the second-vote share, these are allocated to it as overhang mandates. To compensate for the resulting shift in the proportions in the parliament, the other parties receive compensatory mandates. 10 In § 6 (3) of the Federal Election Act, an electoral threshold of 5% is defined. Only parties whose share of the second vote is higher than this threshold are taken into account in the allocation of seats via the state lists. Alternatively, this quorum can be exceeded if a party receives three direct mandates. 11 Detailed specifications for the 2021 election are described by [57], for example. The coalition statements for the parties are shown in Table 3. A historical overview and analysis of coalition statements of German parties are presented in [58][59][60][61][62]. Decisions in the Federal Parliament are generally made by simple majority (exceptions exist, for example, for constitutional changes).

The results of our calculations are shown in Table 4. The concentration of voting power for both concepts increased from 1994 to 2002. For 2005, both concepts produced their lowest values since 1994. For the elections in 2009 and 2013, the results increased. The last two elections in 2017 and 2021 resulted in low $H^{Ho}$ and low $H^{Sh}$. This was due to the higher number of parties in the Bundestag and the existence of coalition exclusions by the parties. In particular, the rather high share of the AfD, combined with the coalition exclusion by the other parties, made it more difficult to form a government and lowered the results for $H^{Ho}$ and $H^{Sh}$. The first three-party coalition in government in 2021 was a result of this development. Compared to the Weimar Republic, however, the results (with limitations in 1920) are higher. This fits with the common opinion that coalition formation was much more difficult toward the end of the Weimar Republic than in the current situation in the German federal parliament. From this perspective, the results for $H^{Ho}$ and $H^{Sh}$ seem plausible according to both the developments in the German Federal Parliament and the comparison with the Weimar Republic.

To show the effect of coalition exclusions, we also present an alternative calculation here. The exclusions are shown in Table 5. They are based on the traditional party alliances in the 1990s in Germany, in which a conservative bloc of CDU and FDP opposed the bloc of SPD and Grüne. A coalition with Linke was also ruled out by the SPD and the Grüne. Only developments in the last few years allow for fewer exclusions, as shown in Table 3. As is to be expected, the concentration of voting power decreases as more excluded coalitions are taken into account (see results in Table 6). Hence, in this example as well, the two measures of voting power concentration model the situation appropriately.

Conclusions

In this article, we presented an approach based on concepts of cooperative game theory that is able to measure the difficulties in forming a coalition government. Our simulation study and the application to two parliaments showed that the approach yields plausible results. In our simulation study with five parties, the concentration of voting power increases on average when the electoral threshold is increased step by step. For the Weimar Republic's parliament, the two concentration measures reflect the generally acknowledged increase in the difficulty of forming a coalition government.
The same result was obtained for the German federal parliament. In addition, the results for the concentration measures for the German federal parliament are higher than our results for the Weimar Republic parliament. This result fits in with the general assessment that forming a coalition government was more difficult in the Weimar Republic than in Germany between 1994 and 2021. Electoral thresholds are enacted by the legislature and reviewed by courts to reduce these difficulties in forming coalitions. Thus, our approach can be one element to operationalize these difficulties and could be a tool for courts in reviewing electoral thresholds. Of course, the methodology presented should not be used for analysis alone and should be complemented by existing analyses. These analyses include demoscopic analyses, which provide statements on election results, and politological analyses, which examine excluded coalitions between parties.

Another line of research could be a comparison of our approach to measuring the difficulties in forming a coalition government with other measures used in the literature, such as the number of parties in a coalition government [65,66], the number of seats above the majority quorum [67], the margin of victory [68,69], and the volatility of vote shares over time [70]. In addition, our approach could be applied to other analyses in which the difficulties of coalition formation or political competition in parliament are relevant, e.g., budget deficits [27,69,71-74], tax revenue [75], fiscal performance [30,76-78], and expenditure efficiency [29,79,80].

For further research, one objective could be an investigation of flexible electoral thresholds that are determined on the basis of objective criteria. It is conceivable that a certain concentration of voting power in parliament could be set as a target before the election and achieved with a flexible threshold after the election, once the election results are known. 12 This would allow electoral thresholds to be removed from the legislature as a political tool. Methodologically, our analysis was limited, for example, with respect to the modeling of minority governments. In some countries, such as Denmark and Norway, it is common that a government does not have a majority of overall seats in the legislature. This situation could not be modeled satisfactorily with cooperative game theory. Another constraint in our model is that parties can only completely exclude cooperation. In reality, however, more complex constellations exist. Possibly, the approach of overlapping coalitions can be adapted here for better modeling [82]. Additionally, our model could be enhanced by considering situations in which parties state that they will not be a junior partner in a coalition government. To model this, a decomposition of the n-person game using a $2^n \times 2^n$ matrix could be used [25,83]. In addition, voting power indices that take into account parties' preferences on coalitions in a more detailed way [41,43] could be the basis for the Herfindahl-Hirschman index.

Data

We provide a list of supplementary information in Tables 7-10 in order to document the previous contents.

Data Availability Statement: The datasets analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest: The author declares no conflict of interest.

Code Availability: The code is available from the corresponding author on reasonable request.
1 An overview of blocking clauses in other countries is given by [2].
2 As is common in the academic literature, we assume that each party votes en bloc in the parliament.
3 Riker [19] shows in his book that the smallest minimum winning coalition will be formed.
4 In the interpretation of [25], excluding cooperation between i and j reduces the value of party i to party j to zero.
5 In the applications of the following sections, involving the Weimar Republic from 1918 to 1932 and the German federal parliament from 1994 to 2021, excluded coalitions are considered.
6 Given voting shares $s_i$ of a party i in an election with $s_i \geq 0$ and $s_i \leq 1$, $\sum_{i \in N} s_i = 1$, and electoral threshold b with $b \geq 0$ and $b \leq \max\{s_i\}$, we have: $w_i = s_i / \sum_{j: s_j \geq b} s_j$ if $s_i \geq b$, and $w_i = 0$ otherwise.
7 For this purpose, the Weimar Republic was divided into 35 constituencies. Each party at the constituency level received one seat in parliament for every 60,000 votes cast. Residual votes from these constituencies were transferred to the next evaluation level (constituency associations) and assigned to parties there. If a party received at least 30,000 votes at the constituency level, it could receive a seat in the parliament for 60,000 residual votes here. Finally, residual votes were transferred to the imperial election. Each party received one seat for every 60,000 votes remaining. A small party had an advantage if its supporters lived in regional concentrations.
8 The same held for the NSFP (National Socialist Freedom Party) that existed in 1924 during the aftermath of the Beer Hall Putsch.
9 This restriction is controversial, as the NSDAP eventually entered into a coalition with the DNVP in 1933.
10 The election procedure was reformed in 2011 [44,45], 2013 [46][47][48][49][50] and 2020 [51][52][53]. This was due to two objectives. On the one hand, the growth of the Bundestag was to be limited; on the other hand, so-called negative voting weights were to be avoided [54][55][56].
11 Independently of this, each direct candidate of a constituency enters the Bundestag.
12 Analogously, [81] introduced variable qualified majority rules for decisions at shareholders' meetings.
Rapid DNA minipreps from Neurospora

Restriction analysis of pCC103 and pMF2 indicates that the pattern of digestion of DNAs from these two clones is different, and the sizes of the fragments (base pairs) produced as a result of digestion with Pst I, Hind III, Him II, Bam HI, Sst II, Sma I and Xba I are shown in Table I. The comparison of the digestion patterns of pCC103 and pMF2 (Table I) suggests the existence of one additional site for Bam HI and Hind III in pCC103. The detailed analysis of the exact locations of these restriction sites is in progress. This analysis, which is part of our ongoing work, will help to determine the initiation, termination, and processing sites in this clone. This clone should be useful in screening and identification of rDNA clones for a variety of Neurospora species.

Vollmer, S.J. and R.H. Davis. Rapid DNA minipreps from Neurospora. We have developed a procedure for small-scale DNA isolation from Neurospora for use in screening multiple isolates. It is faster than the procedure reported by Metzenberg and Baisch (1981). The cell wall of Neurospora has a polycation, polygalactosamine, that binds long-chain polyphosphate (Harold, F. M. and A. Miller, Biochim. Biophys. Acta 50: 261-270, 1961), and we reasoned that it might bind DNA in phenol extraction procedures. We therefore tried to isolate DNA from exponentially growing cells, with many nuclei and thin cell walls; losses of DNA which might bind to wall polycations might be tolerable. The method described below has worked well for us, but it is by no means perfected. It is offered as an idea for further improvement, somewhat prematurely, owing to the annual publication schedule of the Newsletter. The method yields DNA of good quality, as indicated by restriction endonuclease digestions, with an average size of 65-70 kb. The yield approaches 1 µg DNA per mg dry weight of culture. The entire procedure, from inoculation to analysis by gel electrophoresis, requires approximately 24 hr.

PROCEDURE:
1. Start 20- to 40-ml cultures, inoculated with 1 x 10^6 conidia/ml, and allow them to germinate at 25°C overnight (12 hr, to a dry weight of about 0.3 to 0.8 mg per ml, with strong shaking or aeration). Collect cells and wash on a filter funnel (we use 1-inch circles of Whatman filter paper). Disperse in 4-5 ml 1 M sorbitol in a 30-ml Corex tube. Avoid aggregated mycelial growth that might occur on the rim of the medium; evenly dispersed mycelium works well.
2. Add 1 ml Novozym reagent and incubate for 60 min at 30°C.
6. Add 100 mM spermine (dissolved in TE) to bring the extract to 3 mM spermine. Incubate on ice for 20 min. A clot should form quickly and condense (and sink) gradually. Carefully remove the hazy supernatant with a pipette, leaving the clot. Add 1 ml cold spermine wash buffer to the clot and incubate on ice for 30 min; change the solution in the same manner and incubate another 30 min. Remove the supernatant with a pipette, leaving the DNA clot. (A longer incubation in the second wash may be in order, particularly if the procedure is scaled up.)
7. Wash the clot with cold 70% ethanol, centrifuge briefly, and carefully remove the ethanol wash. Remove residual ethanol by vacuum centrifugation or heating at 65°C for 15 min. (Fully dry pellets will be hard to redissolve.) Dissolve the pellet in 80 µl 1 mM Na2EDTA, pH 8.0, then add 20 µl 5X high-salt buffer.

REAGENTS:
1. Novozym reagent: In 1 M sorbitol, disperse 2 mg/ml dry Novozym 234 powder (Novo Laboratories, Inc., 59 Danbury Rd., Wilton, CT 06897). Make fresh and keep on ice.
2. Phenol reagent: Add 150 ml of a solution which is 0.1 M Tris HCl, pH 8.0, 0.67 M NaCl and 1% sodium dodecyl sulfate to 100 g purified crystalline phenol. Heat to 65°C to allow the phases to mix fully. Divide into 50-ml aliquots and store at -20°C.
3. Spermine wash buffer: 75% ethanol, 10 mM Mg acetate, 0.3 M Na acetate, pH 6.0. Store at 4°C.
4. High-salt and 5X high-salt buffer are those of Metzenberg and Baisch (op. cit.).

COMMENTS: We have found this method to be fast and reliable. Despite repeated attempts, however, we have not been able to use this procedure to isolate DNA from late-log or stationary (mycelial) cultures. Critical steps in the procedure are the use of spermine to condense selectively large pieces of nucleic acid (Hoopes, B. C. and W. R. McClure, Nucl. Acids Res. 9: 5493-5504, 1981), and aspiration of the supernatant, rather than centrifugation, to collect precipitated material. A benefit of rapid cell growth is the many mitochondria in these cells: distinct patterns of mitochondrial DNA bands are visible, superimposed on the diffuse nuclear DNA digestion products, when restriction endonuclease reaction products are separated by agarose gel electrophoresis. These bands match exactly those seen after digestion of purified mitochondrial DNA (Taylor, J. W. and B. D. Smolich, Curr. Genetics 9: [in press]), and they may be used as internal standards. We expect that this method can be scaled up to yield milligram quantities with suitable modifications. (Supported by NSF Grant PCM82-08866 to C. Yanofsky. S.J.V. is a Fellow of a Career Investigator of the American Heart Association.) --- Department of Biological Sciences, Stanford University, Stanford, CA 94305. [R.H.D. on leave from the Department of Molecular Biology and Biochemistry, University of California, Irvine, Irvine, CA 92717.]
2018-12-08T04:32:48.605Z
1985-01-01T00:00:00.000
{ "year": 1985, "sha1": "def9f5d768207d2c0b6185d15c0cc6736510ad08", "oa_license": "CCBYSA", "oa_url": "https://newprairiepress.org/cgi/viewcontent.cgi?article=1576&context=fgr", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "def9f5d768207d2c0b6185d15c0cc6736510ad08", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
118690504
pes2o/s2orc
v3-fos-license
Cosmological Models with No Big Bang

In the late 1990s, observations of Type Ia supernovae led to the astounding discovery that the universe is expanding at an accelerating rate. The explanation of this anomalous acceleration has been one of the great problems in physics since that discovery. In this article we propose cosmological models that can explain the cosmic acceleration without introducing a cosmological constant into the standard Einstein field equation, negating the necessity for the existence of dark energy. There are four distinguishing features of these models: 1) the speed of light and the gravitational "constant" are not constant, but vary with the evolution of the universe, 2) time has no beginning and no end, 3) the spatial section of the universe is a 3-sphere, and 4) the universe experiences phases of both acceleration and deceleration. One of these models is selected and tested against current cosmological observations of Type Ia supernovae, and is found to fit the redshift-luminosity distance data quite well.

I. INTRODUCTION

In the late 1990s, observations of Type Ia supernovae made by two groups, the Supernova Cosmology Project [1] and the High-z Supernova Search Team [2], indicated that the universe appears to be expanding at an accelerating rate. The current mainstream explanation of the accelerating expansion of the universe is to introduce a mysterious form of energy, the so-called dark energy, that opposes the self-attraction of matter. Two proposed forms for dark energy are the cosmological constant, which can be viewed physically as the vacuum energy, and scalar fields, sometimes called quintessence, whose cosmic expectation values evolve with time. Currently, in the spatially flat ΛCDM model of cosmology, dark energy accounts for nearly three-quarters of the total mass-energy of the universe [3]. The introduction of dark energy raises several theoretical difficulties, and understanding the anomalous cosmic acceleration has become one of the greatest challenges of theoretical physics. There are a number of excellent review papers on this issue [4-6].

In this article we propose cosmological models that can explain the accelerating universe without introducing a cosmological constant into the standard Einstein field equation, negating the necessity for the existence of dark energy. There are four distinguishing features of these models:
- The speed of light and the gravitational "constant" are not constant, but vary with the evolution of the universe.
- Time has no beginning and no end; i.e., there is neither a big bang nor a big crunch singularity.
- The spatial section of the universe is a 3-sphere, ruling out the possibility of a flat or hyperboloid geometry.
- The universe experiences phases of both acceleration and deceleration.

One of these models is selected and tested against the current cosmological observations, and is found to fit the redshift-luminosity distance data quite well. This article has the following structure: In the next section, the cosmological models are developed, with the details of the calculations presented in the Appendix. In Sec. 3, the dynamical evolution of the universe is determined by solving the Einstein field equation under various conditions. In Sec. 4, a selected model is tested against the observations of Type Ia supernovae. Four data sets available in the literature are included in the test. Finally, the results are discussed in Sec. 5.
Throughout this article we follow the sign conventions of Wald [7]. In particular, we use metric signature $-+++$, define the Riemann and the Ricci tensors by equations (3.2.3) and (3.2.25) of Wald [7] respectively, and employ abstract index notation to denote tensors. Greek indices, running from 0 to 3, are used to denote components of tensors, while Latin indices are used to denote tensors. Einstein's summation convention is assumed.

II. COSMOLOGICAL MODELS

A cosmological model is defined by specifying: 1) the spacetime geometry determined by a metric $g_{ab}$, 2) the mass-energy distributions described in terms of a stress-energy-momentum tensor $T_{ab}$, and 3) the interaction of the geometry and the mass-energy, which is depicted through a field equation.

A. The spacetime metric

Under the assumption that on the large scale the universe is homogeneous and isotropic, and expressed in the synchronous time coordinate and co-moving spatially spherical/hyperbolic coordinates $(t, \psi, \theta, \phi)$, the line element of the spacetime metric $g_{ab}$ takes the form [7]

$$ds^2 = -c^2\,dt^2 + a^2(t)\left[d\psi^2 + \left\{\begin{matrix}\sin^2\psi\\ \psi^2\\ \sinh^2\psi\end{matrix}\right\}\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right], \qquad (2.1)$$

where c is the speed of light and the three options listed inside the braces correspond to the three possible spatial geometries: a 3-sphere, 3-dimensional flat space, and a 3-dimensional hyperboloid, respectively. The metric of form (2.1) is called the Friedmann-Robertson-Walker (FRW) metric.

We view the speed of light as simply a conversion factor between time and space in spacetime. It is simply one of the properties of the spacetime geometry. Since the universe is expanding, we speculate that the conversion factor somehow varies in accordance with the evolution of the universe, hence the speed of light varies with cosmic time. Denoting the speed of light as a function of cosmic time by $c(t)$, we modify the FRW metric as

$$ds^2 = -c^2(t)\,dt^2 + a^2(t)\left[d\psi^2 + \left\{\begin{matrix}\sin^2\psi\\ \psi^2\\ \sinh^2\psi\end{matrix}\right\}\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right]. \qquad (2.2)$$

B. The stress-energy-momentum tensor

The universe is assumed to contain both matter and radiation. The content of the universe is described in terms of a stress-energy-momentum tensor $T_{ab}$. We shall take $T_{ab}$ to have the general perfect fluid form

$$T_{ab} = \rho\,u_a u_b + \frac{P}{c^2}\left(g_{ab} + u_a u_b\right),$$

where $u^a$, $\rho$ and $P$ are, respectively, a time-like vector field representing the 4-velocity, the proper average mass density, and the pressure as measured in the instantaneous rest frame of the cosmological fluid.

C. The field equation

In a cosmology where the speed of light is assumed constant, the interaction between the curvature of spacetime at any event and the matter content at that event is depicted through Einstein's field equation

$$R_{ab} - \frac{1}{2}R\,g_{ab} = \frac{8\pi G}{c^4}\,T_{ab},$$

where $R_{ab}$ is the Ricci tensor, $R$ is the curvature scalar, and $G$ is the Newtonian gravitational constant. In a cosmology with a varying c and varying G, one needs a new field equation for attaining consistency [8]. Noting that $G/c^2$ is the conversion factor that translates a unit of mass into a unit of length, we postulate that c and G vary in such a way that $G(t)/c^2(t)$ must be absolutely constant with respect to the cosmic time t. We can make $G(t)/c^2(t) = 1$ by choosing proper units of mass and length. Accordingly, we speculate that in a cosmology with a varying c and varying G, the field equation describing the interaction between the spacetime geometry and the distribution of mass-energy is given as

$$R_{ab} - \frac{1}{2}R\,g_{ab} = 8\pi\,T^{*}_{ab},$$

where $T^{*}_{ab}$ denotes the stress-energy-momentum tensor in the units fixed by $G(t)/c^2(t) = 1$.

III. DYNAMICS OF THE UNIVERSE

Applied to the metric (2.2), the field equation yields the evolution equation (3.1), where $k = 1$ for the 3-sphere, $k = 0$ for flat space, and $k = -1$ for the hyperboloid.
Using equation (2.5) or, respectively, equation (2.6), we rewrite equation (3.1), for a universe composed of dust only, as equations (3.2) and (3.3). The cosmological density plays the role of the ultimate clock in a homogeneous universe. Accordingly, when being converted into an increment in length, the magnitude of an increment in time, dt, is normalized with the density. The conversion between time and length can then be expressed with a factor $\kappa_0$ that is constant with respect to the cosmic time t. Since the speed of light in a vacuum, $c(t)$, is viewed as simply a conversion factor between time and length in spacetime, we also have the corresponding conversion through $c(t)$. By comparing the right-hand sides of these two conversions, we conclude, and accordingly speculate, that $c(t)$ is tied to the density through a factor $\kappa$ that is constant with respect to the cosmic time t (equation (3.4)).

We are now ready to solve equations (3.2) and (3.3) for $a(t)$ and $c(t)$. Given equations (2.5) and (3.4), equation (3.3) is all we need to arrive at a solution. We will solve equation (3.3) for the universe composed of pressure-free dust and with spatially 3-sphere geometry ($k = 1$) explicitly, and discuss the other cases briefly. Simplifying and preparing (3.5) for integration results in (3.6); carrying out the integration leads to the solution (3.7). We have chosen the time origin ($t = 0$) to be that when a achieves its maximum value 2M. From equations (3.4) and (3.7), the speed of light in a vacuum, as a function of cosmic time t, can be calculated as expression (3.8).

Since the speed of light c, wavelength λ, and frequency ν are related by $c = \lambda\nu$, a varying c could be interpreted in different ways. We assume that a varying c arises from a varying λ with ν kept constant. We further assume that the relation between the energy E of a photon and the wavelength λ of its associated electromagnetic wave is given by $E = \eta/\lambda$, where η is a constant that does not vary over cosmic time. Consequently, from $E = h\nu$ and $c = \lambda\nu$ we obtain $h = E/\nu = \eta/(\lambda\nu) = \eta/c$. Therefore, the so-called Planck's constant h actually varies with the evolution of the universe.

Following the same procedure as above, the solutions for the other five cases are given as follows:
- For a universe composed of pressure-free dust and with spatially flat geometry, we have chosen the time origin ($t = 0$) to be that when a reaches the value 2M. In this case $a(t)$ will blow up at a finite future time $t = \sigma$.
- For a universe composed of pressure-free dust and with spatially hyperboloid geometry, the solution behaves similarly.
- For a universe composed of dust and radiation, and with spatially 3-sphere geometry, we have chosen the time origin ($t = 0$) to be that when a achieves its maximum.

From these results, we see that a spatially flat or spatially hyperboloid geometry is not feasible to describe our universe, since in each case $a(t)$ will blow up at a finite future time.

IV. THE COSMOLOGICAL REDSHIFT AND DATA FITTING

In this section we test the model for the universe composed of pressure-free dust and with spatially 3-sphere geometry against cosmological observations. Theoretical predictions of luminosity distance as a function of redshift will be compared with cosmological observations of Type Ia supernovae. Four data sets available in the literature are included in the test. In these data sets, the luminosity distance is given as the stretch-luminosity corrected effective B-band peak magnitude [1].
V. DISCUSSION

In the Friedmann cosmology [12], a homogeneous and isotropic universe must have begun in a singular state. Hawking and Penrose [13] proved that singularities are generic features of cosmological solutions if general relativity is correct and the universe is filled with as much matter and radiation as we observe. The prediction of singularities represents a breakdown of general relativity. Many authors felt that the idea of singularities was repulsive and spoiled the beauty of Einstein's theory. There were therefore a number of attempts [14-17] to avoid the conclusion that there had been a big bang, but they were all abandoned eventually. Negating the existence of singularities restores beauty to Einstein's theory of general relativity.

The cosmological constant Λ was introduced into the field equation of gravity by Einstein as a modification of his original theory to ensure a static universe. After Hubble's redshift observations [18] indicated that the universe is not static, the original motivation for the introduction of Λ was lost. However, Λ has been reintroduced on numerous occasions when it might be needed to reconcile theory and observations, in particular with the discovery of cosmic acceleration in the 1990s. With our models successfully explaining the accelerating universe without the introduction of Λ, the concept of a cosmological constant shall be discarded again from the point of view of logical economy, as suggested by Einstein [19].

Beginning with Dirac [20] in 1937, some physicists have speculated that several so-called physical constants may actually vary. Theories for a varying speed of light (VSL) have been proposed independently by Petit [21-23] from 1988, Moffat [24] in 1993, and then Barrow [25] and Albrecht and Magueijo [26] in 1999, as an alternative to cosmic inflation [27-29] for solving several cosmological puzzles such as the flatness and horizon problems (for a detailed discussion of these problems, see Weinberg [30], section 4.1). For reviews of VSL, see Magueijo [31]. In the standard big bang cosmological models, the flatness problem arises from the observation that the initial condition of the density of matter and energy in the universe is required to be fine-tuned to a very specific critical value for a flat universe. With our models asserting that the spatial section of the universe is a 3-sphere, the flatness problem disappears automatically.

By comparing cosmological models, we refute the claim [32] that the time variation of a dimensional quantity such as the speed of light has no intrinsic physical significance. We illustrate our point as follows: In Friedmann's closed universe, which resulted from the constancy of the speed of light, the time span is a closed interval, from big bang to big crunch, while in ours the time span is an open interval, with neither beginning nor end. The two models can be discriminated by the topological structures of their time spans: the former is compact, whereas the latter is not [33].
To solve equations (3.2) and (3.3) we need a further postulate on the relationship between $c(t)$ and $a(t)$. For this we argue as follows: When converting the magnitude of an increment in time, dt, into an increment in length, Nature needs a universal standard to refer to. The concept of time arises from the observation that the distribution of mass-energy contained in the universe is dynamic; without this rate of change, the concept of time would have no meaning.

FIG. 1. The evolution of the universe composed of pressure-free dust and with spatially 3-sphere geometry. The hyper-radius of the universe, $a(t)$, can never reach zero. The universe is accelerating in the epoch when γ < 7/8 and is decelerating when γ > 7/8.

The graph of $a(t)$ versus $t/\sigma$ for the remaining dust-only cases is displayed in Figure 2.

FIG. 2. The dynamics of two versions of the universe composed of pressure-free dust: with spatially flat geometry, and with spatially hyperboloid geometry. In both universes $a(t)$ can never reach zero, and $a(t)$ will blow up at a finite future time.

For a universe composed of dust and radiation with spatially flat geometry, and for a universe composed of dust and radiation with spatially hyperboloid geometry, the solutions follow from the same procedure.

The fit involves a parameter believed to be constant for all supernovae of Type Ia [9-11]. From equations (4.3) and (4.4), our model predicts the theoretical value of γ; the quantity $\gamma_e(z)$ in (4.5), as a function of the redshift factor z, is defined implicitly by equation (4.2). The best-fit parameters are determined by minimizing a goodness-of-fit quantity.

FIG. 3. Figure 3 shows the Hubble diagram of corrected effective rest-frame B magnitude as a function of redshift.

The horizon problem of the standard cosmology is a consequence of the existence of the big bang origin and the deceleration in the expansion of the universe. Without the big bang origin, and with the universe accelerating in the epoch when $\gamma(t) < 7/8$, our models may thus provide a solution to the horizon problem. Essentially, this work is a novel theory about how the magnitudes of the three basic physical dimensions, mass, time, and length, are converted into each other, or equivalently, a novel theory about how the geometry of spacetime and the distribution of mass-energy interact. The theory resolves problems in cosmology, such as those of the big bang, dark energy, and flatness, in one fell stroke. For converting among the three basic physical dimensions, any cosmological model requires two constants. Einstein took c and G as the two constants, whereas we assert that the two constants are κ, the factor relating to the conversion between time and length, and τ, the conversion factor between mass and length. These two constants, κ and τ, together with η, the constant relating the energy of a photon and the wavelength of its associated electromagnetic wave, can be used to define the natural units of measurement for the three basic physical dimensions: the natural unit of length, the natural unit of time, and the natural unit of mass, each obtained from κ, τ and η by dimensional analysis.

From expression (3.8), the speed of light in a vacuum becomes infinite, hence a singularity, at cosmic time t = 0. In this sense, we may call it a pseudo singularity.
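To make the fitting step of Sec. IV concrete, the Python sketch below carries out such a minimization, assuming the goodness-of-fit quantity is the standard χ² between observed and model magnitudes. The model function m_model and the data arrays are placeholders standing in for the paper's equations (4.2)-(4.5) and for the published supernova samples, not the actual ones:

# Minimal chi-square fit of model magnitudes to supernova data.
# m_model() is a hypothetical stand-in for the magnitude-redshift relation;
# replace it and the placeholder data with the real model and samples.
import numpy as np
from scipy.optimize import minimize

z_obs = np.array([0.02, 0.10, 0.35, 0.50, 0.80])   # placeholder redshifts
m_obs = np.array([14.5, 18.1, 21.0, 21.9, 23.1])   # placeholder magnitudes
sigma = np.array([0.15, 0.20, 0.25, 0.25, 0.30])   # placeholder errors

def m_model(z, params):
    """Toy magnitude-redshift relation: m = M + 5*log10(d_L/Mpc) + 25."""
    M, h = params
    d_l = (299792.458 / (100.0 * h)) * z * (1.0 + 0.5 * z)  # low-z expansion, Mpc
    return M + 5.0 * np.log10(d_l) + 25.0

def chi2(params):
    r = (m_obs - m_model(z_obs, params)) / sigma
    return np.sum(r**2)

fit = minimize(chi2, x0=[-19.0, 0.7], method="Nelder-Mead")
print("best-fit parameters:", fit.x, " chi2_min:", fit.fun)

Nelder-Mead is used here because it needs no derivatives of the model; any minimizer would serve once the real magnitude-redshift relation is substituted.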
APPENDIX 1. Calculations for the Components of $\Gamma^c_{ab}$ and $R_{ab}$

For the case of 3-sphere geometry, in the synchronous time coordinate and co-moving spatial spherical coordinates $(t, \psi, \theta, \phi)$, the covariant components of the metric $g_{ab}$ are read off from the line element (2.2); in the resulting expressions the dots denote derivatives with respect to t. Substituting these expressions for the components of $\Gamma^c_{ab}$ into those for the components of the Ricci tensor $R_{ab}$ yields $R_{ab}$ and the curvature scalar R. Plugging the expression for R and those for the components of $R_{ab}$, $g_{ab}$ and $T^{*}_{ab}$ into the field equation gives the evolution equations for the universe. From the field equation and the Bianchi identity, we see that $T^{*}_{ab}$ satisfies the conservation equation $\nabla^a T^{*}_{ab} = 0$.

Here $a(t)$ is the hyper-radius of the universe at cosmic time t. The radius will get smaller and smaller as t approaches ±∞; however, it can never reach zero, and therefore time has no beginning and no end, and there is neither a big bang nor a big crunch. The graph of a versus $t/\sigma$ is displayed in Figure 1.
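The appendix computation of the Christoffel symbols $\Gamma^c_{ab}$ and the Ricci tensor $R_{ab}$ is mechanical and easy to cross-check by computer algebra. Below is a minimal sympy sketch, under the assumption that the line element is the modified FRW form of Sec. II.A with k = 1 (3-sphere); the variable and function names are ours, and the formulas are the usual coordinate-basis expressions rather than anything specific to this paper:

# Symbolic check of the Christoffel symbols and Ricci tensor for the
# 3-sphere FRW metric with a time-varying speed of light c(t).
import sympy as sp

t, psi, th, ph = sp.symbols('t psi theta phi')
x = [t, psi, th, ph]
a = sp.Function('a')(t)
c = sp.Function('c')(t)

# ds^2 = -c(t)^2 dt^2 + a(t)^2 [dpsi^2 + sin^2(psi)(dtheta^2 + sin^2(theta) dphi^2)]
g = sp.diag(-c**2, a**2, a**2*sp.sin(psi)**2, a**2*sp.sin(psi)**2*sp.sin(th)**2)
ginv = g.inv()

def christoffel(s, m, n):
    """Gamma^s_{mn} = (1/2) g^{sl} (d_m g_{ln} + d_n g_{lm} - d_l g_{mn})."""
    return sp.simplify(sum(ginv[s, l]*(sp.diff(g[l, n], x[m])
                                       + sp.diff(g[l, m], x[n])
                                       - sp.diff(g[m, n], x[l]))/2
                           for l in range(4)))

Gamma = [[[christoffel(s, m, n) for n in range(4)] for m in range(4)] for s in range(4)]

def ricci(m, n):
    """R_{mn} = d_s Gamma^s_{mn} - d_n Gamma^s_{ms} + Gamma^s_{sl} Gamma^l_{mn} - Gamma^s_{nl} Gamma^l_{ms}."""
    expr = sum(sp.diff(Gamma[s][m][n], x[s]) - sp.diff(Gamma[s][m][s], x[n])
               + sum(Gamma[s][s][l]*Gamma[l][m][n] - Gamma[s][n][l]*Gamma[l][m][s]
                     for l in range(4))
               for s in range(4))
    return sp.simplify(expr)

print("Gamma^t_{psi psi} =", Gamma[0][1][1])   # expect a*adot/c^2
print("R_tt =", ricci(0, 0))

Running this confirms, for instance, that $\Gamma^t_{\psi\psi} = a\dot a/c^2$, so every occurrence of $\dot a$ in the evolution equations is accompanied by the varying conversion factor $c(t)$.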
2019-04-22T05:26:26.020Z
2010-07-13T00:00:00.000
{ "year": 2010, "sha1": "b12a09f03321260443a34feceaffd19fbd23ab86", "oa_license": "CCBY", "oa_url": "http://www.hrpub.org/download/20150831/MS1-13404172.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "b12a09f03321260443a34feceaffd19fbd23ab86", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
30390795
pes2o/s2orc
v3-fos-license
Expression of HBsAg and HBcAg in the ovaries and ova of patients with chronic hepatitis B

AIM: To investigate the expression and distribution of HBV in the ovaries and ova. METHODS: The immunohistochemistry method was used to detect HBsAg and HBcAg in the ovaries of patients with chronic hepatitis B. RESULTS: Expression of HBsAg in the ova, granular and interstitial cells of the ovaries was located in the cytomembrane and cytoplasm. Expression of HBcAg in the ova, granular, interstitial and endothelial cells of interstitial blood vessels of the ovaries was found in the cytomembrane, cytoplasm, and nuclei. CONCLUSION: HBV can infect the ova at different stages of development and replicate in them.

INTRODUCTION

In Asian countries such as China, vertical infection plays a major role in transmitting HBV. The infected babies will eventually develop liver cirrhosis or hepatocellular carcinoma in their adulthood, and female babies will continue the cycle of vertical transmission to their offspring [1-3]. Recent data indicate that intrauterine infection is one of the important routes of HBV vertical transmission, and the rate of intrauterine HBV infection is 10-44.4% [3,4]. Furthermore, studies have proved that most intrauterine infections can be prevented by inoculation with HBVac and HBIG in the perinatal stage, but 5-10% of the infections cannot be prevented by the combination of HBIG and HBVac [5,6]. The mechanism underlying this failure is controversial. Some researchers suspected that the infection of ova with HBV may be a factor in this mechanism [7]. But this was just a supposition, and little data exist to prove it.

To explore whether HBV could infect the ovaries and ova, we detected the expression of HBsAg and HBcAg in the ovaries and ova of patients with chronic hepatitis B (CHB) by the immunohistochemistry method.

MATERIALS AND METHODS

Ovary tissues were obtained from 18 patients with CHB who received surgery for ovary disease at the First Hospital of Xi'an Jiaotong University in China. The specimens were collected after surgery. Serum samples from these patients were tested for different HBV markers, including HBsAg, HBeAg, anti-HBc and anti-HBe (HBVM), using a commercially available ELISA kit (Sino-America Biological Technology, Inc., Beijing, China) prior to surgery. HBsAg was positive, while markers of HCV, HIV, HAV, and HEV were negative, in all 18 patients. All reactions were performed at the Immunohistochemistry Laboratory in Xi'an Jiaotong University. Ovary tissues were fixed in 40 g/L formaldehyde, embedded in paraffin and cut into 4 µm thick sections. Mouse monoclonal antibody (McAb) against HBsAg (Dako, Denmark) and rabbit polyclonal antibodies against HBcAg (Baoxin, China) were used for the immunohistochemical test. DAB staining kits were purchased from Wuhan BoShiDe Biological Engineering Ltd. Company.
HBsAg and HBcAg were detected by immunohistochemical staining using the avidin-biotin complex (ABC) method on sections of ovarian tissue. In brief, formalin-fixed, paraffin-embedded sections were deparaffinized in xylene and passed through an ethanol series. After the endogenous peroxidase activity was blocked, the sections were rinsed in 0.01 mol/L PBS. Non-specific binding was blocked by treatment with 5% normal serum for 30 min. Primary antibody was applied to the sections, which were incubated in a moist chamber overnight at 4°C. After the sections were washed in 0.01 mol/L PBS, second antibody was applied and the sections were incubated for 30 min at 37°C in a moist chamber. After being washed, the sections were incubated with avidin-biotin-peroxidase complex for 30 min at 37°C in a moist chamber and washed again. The chromogen 3,3'-diaminobenzidine (DAB) was added to the sections for 10 min.

Sections of HBsAg- and HBcAg-positive livers from autopsy were used as positive controls, and sections of ovaries from HBVM-negative women served as negative controls. At the same time, PBS was used as a blank control instead of the first antibody in the immunohistochemical test. Dark brown-yellow staining in the cytoplasm, cytomembrane or nuclei was regarded as strongly positive, brown-yellow as positive, and light brown-yellow as weakly positive.

RESULTS

Expression of HBsAg in the ova and granular cells of the ovaries was located in the cytomembrane and cytoplasm. The positive rate was 11% (2/18). The negative control and blank control were negative (Figures 1A and B). Expression of HBcAg in the ova, granular, interstitial and endothelial cells of interstitial blood vessels of the ovaries was found in the cytomembrane, cytoplasm, and nuclei. HBsAg and HBcAg in the ovaries were strongly positive. The positive rate was 45% (8/18). The negative control and blank control were negative (Figures 1C-H).

DISCUSSION

Much attention has been paid to the effective prevention of vertical HBV transmission [1-4]. It was reported that intrauterine HBV is transmitted through the HBV-infected placenta and that HBV could enter the blood of the infant through placental leakage [8]. But this cannot explain how HBV can infect the early embryo (46 d). Chen et al [7] showed that HBV cannot infect the embryo through the placenta. They believe there is another mechanism by which HBV infects the embryo.

Yu et al [9] proved that hemorrhagic fever virus exists in the ova of rodents. Bovine viral diarrhea virus can infect the ova of bovines [10]. Tagawa et al [11] found that duck HBV (DHBV) is expressed in the yolk of ducks with hepatitis B, and that DHBV DNA is present in the liver of the embryo after 6 d of incubation, suggesting DHBV could infect the egg of the duck and replicate in it. Zhao et al [12] reported that DNA of TTV is found in the ovaries of CHB patients by in situ hybridization. Zhou et al [13] reported that HBV DNA is present in the plasma of ova and in interstitial cells of the ovaries of a patient who died of serious hepatitis B. Taylor et al [14] have detected HBsAg in the follicular fluid of CHB patients by the immunohistochemistry method.
In this study, HBsAg and HBcAg were expressed in ova and granular cells of the ovaries. HBcAg was expressed in ova at different stages of development, suggesting that HBV can infect the ovaries and ova, and replicate in them. This fact suggests that HBV-infected ova may cause the vertical transmission of HBV, which cannot be prevented by inoculation of HBV vaccine and HBIG because the embryo is infected as the zygote is formed.

In this study, the positive rate of HBsAg (11%) was lower than that of HBcAg (45%). We consider that, because the primary antibody against HBsAg is a monoclonal antibody (McAb) while the primary antibody against HBcAg is polyclonal, and a McAb recognizes fewer binding epitopes than a polyclonal antibody, the positive rate of HBsAg detection is lower. On the other hand, in the ova, the synthesis and expression of HBcAg by HBV may be higher than that of HBsAg in the ovaries, even though the amount of HBsAg exceeds that of HBcAg in liver cells [15,16]. Of course, the small sample size may also be a reason.

In conclusion, HBV can infect the ovaries and ova and replicate in them. But how HBV infects the ovaries and ova, and whether HBV-infected ova result in HBV vertical transmission, remain unknown.

Figure 1. Expression of HBsAg and HBcAg in the ovaries and ova. A: HBsAg negative control (original magnification: 10×40); B: expression of HBsAg in plasma and membrane of ova and granular cells (original magnification: 10×40); C: expression of HBcAg in plasma, membrane and nuclei of sinus ova, granular and interstitial cells (original magnification: 10×100); D and E: expression of HBcAg in plasma, membrane and nuclei of primary ova, granular and interstitial cells.
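The two positive rates discussed above (2/18 for HBsAg, 8/18 for HBcAg) invite a small-sample significance check. The study itself reports no such test, so the following Fisher's exact test in Python is purely illustrative of how the comparison could be made:

# Fisher's exact test comparing the HBsAg (2/18) and HBcAg (8/18)
# positive rates reported above. Illustrative only; the original
# study did not report this comparison.
from scipy.stats import fisher_exact

table = [[2, 16],   # HBsAg: positive, negative
         [8, 10]]   # HBcAg: positive, negative
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")

Fisher's exact test is the natural choice here because several expected cell counts fall below five, where a χ² approximation would be unreliable.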
2017-09-14T23:40:57.455Z
2005-09-28T00:00:00.000
{ "year": 2005, "sha1": "da579cea72e143254cc63b0641fb77e855b60cae", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3748/wjg.v11.i36.5718", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "da579cea72e143254cc63b0641fb77e855b60cae", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257844525
pes2o/s2orc
v3-fos-license
Restless Legs Syndrome in Chronic Kidney Disease - a Systematic Review

Objectives: The objective of this review is to provide updated information on the epidemiology, correlating factors and treatment of chronic kidney disease associated restless legs syndrome (CKD-A-RLS) in both the adult and pediatric populations. Materials and Methods: We reviewed Medline and Google Scholar searches up to May 2022, using the key words restless legs syndrome, chronic kidney disease, hemodialysis and kidney transplant. The reviewed articles were studied for epidemiology, correlating factors, as well as pharmacologic and non-pharmacologic treatment options. Results: Our search revealed 175 articles; 111 were clinical trials or cross-sectional studies and 64 were review articles. All 111 articles were retrieved and studied in detail. Of these, 105 focused on adults and 6 on children. A majority of studies on dialysis patients reported a prevalence between 15-30%, which is notably higher than the prevalence of RLS in the general population (5-10%). Correlations of CKD-A-RLS with age, gender, abnormalities of the hemogram, iron, ferritin, serum lipids, electrolytes and parathyroid hormones were also reviewed. The results were inconsistent and controversial. Limited studies have reported on the treatment of CKD-A-RLS. Non-pharmacological treatment reports focused on the effect(s) of exercise, acupuncture, massage with different oils and infra-red light, whereas pharmacologic treatment options include dopaminergic drugs, Alpha2-Delta ligands (gabapentin and pregabalin), vitamins E and C, and intravenous iron infusion. Conclusion: This updated review showed that RLS is two to three times more common in patients with CKD compared to the general population. More patients with CKD-A-RLS demonstrated increased mortality, increased incidence of cerebrovascular accidents, depression, insomnia and impaired quality of life than those with CKD without RLS. Dopaminergic drugs such as levodopa, ropinirole, pramipexole and rotigotine, as well as calcium channel blockers (gabapentin and pregabalin), are helpful for treatment of RLS. High-quality studies with these agents are currently underway and will hopefully confirm the efficacy and practicality of using these drugs in CKD-A-RLS. Some studies have shown that aerobic exercise and massage with lavender oil can improve symptoms of CKD-A-RLS, suggesting that these measures can be useful as adjunct therapy.

INTRODUCTION

The definition and classification of chronic kidney disease (CKD) have evolved over time. The current international guidelines define this condition as decreased kidney function characterized by a glomerular filtration rate (GFR) of less than 60 ml/min per 1·73 m², or markers of kidney damage, or both, of at least 3 months duration, regardless of the underlying cause. CKD is very prevalent in the general adult population. Data from the United States estimate a prevalence of 13.1% among adults, which has increased over time. The burden of CKD is substantial. According to WHO global health estimates, 864,226 deaths (or 1·5% of deaths worldwide) were attributable to this condition in 2012. Ranked fourteenth in the list of leading causes of death, CKD accounted for 12·2 deaths per 100,000 people. Projections from the Global Health Observatory suggest that the death rate from CKD will continue to increase to reach 14 per 100,000 people by 2030. CKD is also associated with substantial morbidity.
Worldwide, CKD accounted for 2,968,600 (1·1%) disability-adjusted life-years and 2,546,700 (1·3%) life-years lost in 2012. Patients with CKD require monitoring for complications such as metabolic abnormalities, anemia, CKD-associated mineral bone disease, and cardiovascular diseases [1,2].

Patients with chronic kidney disease are commonly affected by various types of sleep disorders. Sleep disorders have been associated with increased cardiovascular risk and may contribute to the morbidity and mortality of people with advanced (stages 4 to 5) CKD and those treated with dialysis [3]. Within the spectrum of sleep disorders, restless legs syndrome (RLS) causes a disturbance in sleep through an irresistible desire to move one's legs. Symptoms of RLS are more common in patients with CKD than in the general population [4,5].

Restless legs syndrome, also known as Willis-Ekbom disease (WED), is a sensorimotor disorder characterized by an irresistible urge to move the legs. The urge is usually accompanied by an uncomfortable sensation in the legs that occurs in the evening or night and is partially or totally relieved by movement [6]. Brain iron deficiency and dopaminergic neurotransmission abnormalities play a central role in the pathogenesis of RLS, along with other nondopaminergic systems, although the exact mechanisms are still unclear. The cause of most cases of RLS is unknown, and hence it is called primary (idiopathic) RLS. Secondary RLS occurs in association with a variety of systemic disorders, especially iron deficiency and chronic renal insufficiency [7].

The initial management approach to essential RLS should include measuring serum ferritin and transferrin-percent saturation, with iron-replacement therapy indicated when these measures are below the low-to-normal range. There is limited evidence for nonpharmacologic treatment in primary RLS. In moderate to severe RLS, pharmacologic treatment may be considered. There is strong evidence for the efficacy of both Alpha2-Delta ligands (gabapentin and pregabalin) and dopamine agonists in the therapy for RLS. Unfortunately, a growing body of evidence over the last decade has indicated disturbing side effects associated with dopaminergic therapies. Most significantly, a large proportion of RLS patients treated with dopaminergic drugs (direct agonists such as ropinirole) develop augmentation syndrome. Augmentation is characterized by earlier appearance of the symptoms during the day, often associated with greater intensity. Prevalence rates for dopamine agonist-related augmentation vary from less than 10% in the short term to 42% to 68% after approximately 10 years of treatment. In addition, excessive daytime sleepiness with sleep attacks (particularly in patients with comorbid parkinsonism), impulse control disorder symptoms, as well as dose-related adverse effects, such as dizziness and drowsiness, may develop in patients on dopamine agonist medications. Second-line therapies include intravenous iron infusion in those who are intolerant of oral iron intake and/or those having augmentation with intense, severe RLS symptoms, and opioids including tramadol, oxycodone, and methadone [8,9,10]. There is evidence that chronic RLS makes patients prone to cardiac and cerebrovascular accidents, although there is a need for more careful studies in this area [11].
RESEARCH DESIGN

A Medline search and a Google Scholar search were conducted up to May 1st, 2022, crossing the term restless legs syndrome with chronic kidney disease (CKD) and, additionally, with hemodialysis. The search included only articles published in the English language. The numbers of relevant articles in the adult and pediatric literature are presented in a PRISMA flow diagram. The prevalence of RLS was investigated in CKD patients on dialysis and off dialysis, in hemodialysis versus peritoneal dialysis, and after kidney transplantation.

The search included data on the presence or lack of correlation between CKD-associated RLS (CKD-A-RLS) and gender, age, basic metabolic index, serum albumin, serum lipid profile and the presence of comorbidities (such as diabetes and hypertension). Additionally, correlations were searched for and recorded between CKD-A-RLS and a large number of metabolic and hormonal factors, including serum electrolytes (in particular calcium and phosphorus), serum iron, hemoglobin, ferritin, transferrin saturation and parathyroid hormone level, as well as stages of kidney dysfunction and markers of kidney dysfunction (glomerular filtration rate, serum creatinine and BUN). The search also included data supporting or refuting correlation between CKD-A-RLS and dialysis parameters such as frequency and duration of dialysis, as well as the type of dialysate used for treatment of renal failure. The issue of mortality in CKD-A-RLS was searched for and analyzed.

Data from blinded and open-label studies on treatment of CKD-A-RLS were compiled and analyzed. The search included data on commonly used agents such as dopaminergic and antiepileptic drugs and certain opioids, as well as newer and experimental drugs. Information from case reports was excluded. Important treatment protocols in progress are mentioned and briefly discussed. Data from non-pharmacological clinical trials, such as those related to the use of different exercise modalities, acupuncture and herbal treatments, were noted and recorded. Fisher's exact test was used to detect statistical significance between small data values.

RESULTS

The search, performed up to May 1st, 2022, disclosed 510 articles. Two authors reviewed the literature on this subject independently. After exclusion of duplications (231 duplicates), 279 articles remained (Figure 1, PRISMA). Of these, 104 articles were excluded (not relevant to the topic, in languages other than English, or case reports). Of the remaining 175 articles, 111 were clinical trials or cross-sectional studies and 64 were review articles. All 111 articles were retrieved and studied in detail. Of these, 105 focused on adults and 6 on children.

Several of the published manuscripts attempted to correlate the presence of CKD-A-RLS with age, gender, abnormalities of the hemogram, iron, ferritin, serum lipids, electrolytes (particularly calcium and phosphorus) and parathyroid hormones. The results were inconsistent and controversial. Although also controversial, more studies have found a predominance of women and an increased incidence of sleep apnea, cerebrovascular accidents, depression and poor quality of life among patients affected by CKD-A-RLS.
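Because the screening counts quoted above chain together deterministically, they can be sanity-checked in a few lines of code; a minimal sketch, with stage names of our choosing and all numbers taken from the text:

# Sanity check of the PRISMA screening counts quoted in the Results.
records_found     = 510
duplicates        = 231
after_dedup       = records_found - duplicates            # 279
excluded          = 104   # off-topic, non-English, case reports
included          = after_dedup - excluded                 # 175
trials_or_cross   = 111
reviews           = 64
adult_studies     = 105
pediatric_studies = 6

assert after_dedup == 279
assert included == 175
assert trials_or_cross + reviews == included               # 111 + 64 = 175
assert adult_studies + pediatric_studies == trials_or_cross  # 105 + 6 = 111
print("PRISMA counts are internally consistent")

All four assertions pass, so the flow-diagram totals are internally consistent.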
The correlation between CKD-A-RLS and hemodialysis parameters was also controversial, but CKD-A-RLS was observed in more patients with advanced kidney failure (beyond stage 3). Two studies compared the prevalence of CKD-A-RLS in peritoneal dialysis with that in hemodialysis [34,35]. In both studies, the prevalence of CKD-A-RLS was substantially higher in peritoneal dialysis, 50% and 33% versus 23% in hemodialysis. In one study, using a cool dialysate improved the symptoms of RLS in CKD [36].

TREATMENT OF CKD-A-RLS

Limited studies have reported on the treatment of CKD-A-RLS; the data involve both non-pharmacological and pharmacological treatment. For non-pharmacological treatments, reports include the effect(s) of exercise, acupuncture, massage with different oils and infra-red light (Table 1). Pharmacological treatments consist mainly of treatment with dopaminergic drugs (levodopa, ropinirole and pramipexole), as well as treatment with calcium channel blockers structurally similar to gamma-aminobutyric acid (gabapentin and pregabalin) (Table 2). Additional therapeutic approaches for CKD-A-RLS include treatment with vitamins E and C, for which alleviation of symptoms without side effects has been claimed (Table 3).

SIDE EFFECTS

Limited studies have been published on the therapeutic role of intravenous iron in dialysis patients with CKD-A-RLS. Among narcotic medications, oxycodone (two blinded studies [61,62]) (Table 5) and tramadol (two open-label studies) have been reported to alleviate the symptoms of severe restless legs syndrome. No information on the use of oxycodone in CKD-A-RLS is available. Oxycodone needs to be used with caution in patients with kidney failure, as the main mode of its elimination is renal [63].

Six publications provided data on CKD-A-RLS in the pediatric population, with ages of 5 to 17 years [64-69]. In five of these studies [64,65,67,68,69], the prevalence of RLS was higher in CKD (15.3% to 35%) compared to the normal population. All studies were open label. Three out of six were prospective. Information was taken in the clinic or via telephone contact, often through the parents. In general, the symptoms were mild; only in two studies were poor quality of sleep and impaired quality of life mentioned. No other treatments were reported.

The diversity of data found in this review is not surprising considering the fact that different investigators used different criteria for the diagnosis of RLS (particularly before 2003). Furthermore, the studies represented findings in different stages of chronic kidney disease, and hence are not quite comparable. On the clinical side, though the data are contradictory, considerably more studies reported positive correlations with increased mortality, increased cardiovascular complications, insomnia, and depression. Non-pharmacological treatment, especially aerobic exercise and massage with lavender oil, seems helpful in the treatment of CKD-A-RLS. The Guideline Development Subcommittee of the American Academy of Neurology [70,71] recommends treating CKD-A-RLS with vitamins C and E based on one published class I study [55]. As a reducing agent, vitamin C plays an important role in iron metabolism.
It increases absorption of iron from the gastrointestinal tract and enhances the bioavailability of iron after intravenous iron injection. It also can mobilize iron from the reticuloendothelial system to transferrin [72,73].

[Diagnostic criteria, continued from the table: 4) Symptoms only occur, or are worse, in the evening or night than during the day. 5) The occurrence of the described features is not solely accounted for as symptoms primary to another medical or a behavioral condition (myalgia, venous stasis, leg edema, arthritis, leg cramps, positional discomfort, habitual foot tapping). The criteria published earlier, in 2003, lack the 5th criterion.]

Currently, dopaminergic medications (levodopa, ropinirole, pramipexole and rotigotine) and gabapentinoids (gabapentin and pregabalin) are recommended as the first line of drugs for treatment of essential RLS [74]. However, in the case of CKD-A-RLS, more robust investigations of ropinirole and pramipexole are needed. Development of augmentation remains a worrisome issue with the use of dopaminergic drugs, especially if long-term therapy is contemplated. Currently, more practitioners prefer the use of direct dopamine agonists (ropinirole, pramipexole, rotigotine) over levodopa for treatment of CKD-A-RLS. One randomized controlled study has shown the superiority of ropinirole over levodopa for treatment of CKD-A-RLS (Table 2) [54]. The same preference applies to gabapentin over levodopa, based on three small comparative studies (Table 2).

Due to renal clearance, dose adjustment is necessary when oral dopaminergic drugs or gabapentinoid medications (gabapentin and pregabalin) are going to be used for treatment of CKD-A-RLS. In the case of gabapentin and pregabalin, the dosing schedule recommended by Chincholkar et al [75] should be followed (Table 7).

Ropinirole was approved by the FDA for treatment of restless legs syndrome in 2005 and pramipexole in 2006. In an open-label study of 10 patients affected by advanced kidney disease and on dialysis, Miranda et al [76] reported a significant improvement of RLS severity scores (using the criteria set by the International RLS Study Group) after treatment with pramipexole (mean dose of 0.25 mg/day). The mean time of follow-up was 8 months. Currently, two randomized, double-blind studies are ongoing with the aim of assessing the efficacy of ropinirole and pramipexole in CKD-A-RLS [77,78], the results of which will hopefully be available soon. Rotigotine as a skin patch has the advantage of bypassing drug absorption through the GI tract, which is often affected in patients with chronic kidney disease. Renal clearance of rotigotine is also not influenced by kidney disease. Even in advanced kidney failure the level of unconjugated rotigotine does not change, indicating no need for dose adjustment [79]. For these advantages, treatment with rotigotine deserves further investigation.

Intravenous iron, using iron dextran and iron sucrose, has been helpful in reducing the intensity of RLS in chronic kidney disease, especially in the case of iron deficiency [58-60]. Up to 65% of patients with CKD demonstrate evidence of peripheral neuropathy on clinical examination [80]. Peritoneal dialysis and hemodialysis can improve mild peripheral neuropathy, but their effect on severe peripheral neuropathy is not adequately studied [81]. A clinical trial with two years of follow-up demonstrated failure of renal transplantation to improve CKD-associated peripheral neuropathy [82].
CONCLUSION

This updated review showed that restless legs syndrome is two to three times more common in chronic kidney disease than in the general population. When assessed, more patients with CKD-A-RLS demonstrated increased mortality, increased incidence of cerebrovascular accidents, depression, insomnia and impaired quality of life compared to CKD patients without RLS. Dopaminergic drugs such as levodopa, ropinirole, pramipexole and rotigotine, as well as calcium channel blockers (gabapentin and pregabalin), are helpful for treatment of RLS. High-quality studies with these agents are currently underway and will hopefully confirm the efficacy and practicality of using these drugs in CKD-A-RLS. Treatment with vitamins C and E is recommended for CKD-A-RLS. Some studies have shown that aerobic exercise and massage with lavender oil can improve symptoms of CKD-A-RLS, suggesting that these measures can be useful as adjunct therapy.
2023-03-31T15:21:01.924Z
2023-03-29T00:00:00.000
{ "year": 2023, "sha1": "76c2a973ce6418249838ffc44838a8edbdcb7680", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "4ddeacf13c7b1f4452a6da3cfd71847d02441ffa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16080896
pes2o/s2orc
v3-fos-license
Upregulation of SIRT6 predicts poor prognosis and promotes metastasis of non-small cell lung cancer via the ERK1/2/MMP9 pathway

Sirtuin 6 (SIRT6), a member of the sirtuin protein family, plays multiple complex roles in cancer. Here, we report that elevated SIRT6 expression was correlated with clinicopathological parameters such as T and N classification in non-small cell lung cancer (NSCLC) patient tumors. SIRT6 overexpression in NSCLC cell lines increased extracellular signal-regulated kinase (ERK) 1/2 phosphorylation, activated matrix metalloproteinase 9 (MMP9) and promoted tumor cell migration and invasion. Upon treatment with a specific MEK1/2 (MAPK/ERK kinase) inhibitor, these effects were abolished. Our results demonstrate SIRT6 upregulation in NSCLC for the first time and suggest a functional role for SIRT6 in promoting migration and invasion through ERK1/2/MMP9 signaling. SIRT6 may serve as a potential therapeutic target in NSCLC and its utility as a prognostic indicator warrants further study.

INTRODUCTION

Lung cancer is the most commonly diagnosed cancer and is a leading cause of cancer-related morbidity worldwide, with 1.6 million new cases and 1.4 million deaths annually [1,2]. Of the two major types of lung cancer, small cell lung carcinoma (SCLC) and non-small cell lung carcinoma (NSCLC), NSCLC accounts for 80-90% of cases and has a 5-year survival rate of less than 15% [3,4]. NSCLC is relatively refractory to both therapeutic modalities commonly used in lung cancer treatment, chemotherapy and radiation [5,6]. Moreover, most patients are diagnosed with highly invasive, unresectable NSCLC, associated with poor outcome [7]. Metastasis resulting from later-stage disease is thought to be the major cause of death in lung cancer. Therefore, identification of novel targets to combat metastasis is critical and urgent.

Sirtuins (SIRTs) are a family of NAD+-dependent deacetylases that are highly conserved from lower organisms to humans. In mammals, seven different SIRTs (SIRT1-7) are linked to the regulation of critical biological processes, including metabolism, genomic stability, cell division, differentiation, survival, senescence and organismal lifespan [8,9]. In addition, SIRT family members are thought to play roles in cancer development [10]. SIRT6 is located at a chromosomal locus (19p13.3) that is a frequent breakage site in human acute myeloid leukemia [11]. SIRT6 is overexpressed in several cancers, including prostate and endometrioid carcinomas, and keratinocyte-derived skin squamous cell carcinomas [12-14]. In contrast, SIRT6 is downregulated in pancreatic cancer, head and neck squamous cell carcinomas, human hepatocellular carcinoma (HCC) and colorectal carcinoma [15-17]. In human HCC, SIRT6 may act as a tumor suppressor, given that ectopic SIRT6 overexpression inhibits HCC cell growth [17,18]. In breast cancer, high nuclear expression of SIRT6 is predictive of poor prognosis [8]. SIRT6 regulates Ca2+ responses to promote pancreatic cancer cell migration [15]. In contrast, many SIRT6 biological functions are associated with anticancer effects, including its role in cancer cell apoptosis, enhancing sensitivity to radiation damage and reducing cell viability [12]. Therefore, the role of SIRT6 in cancer is complex, with some studies supporting a tumor-suppressive role, and others a cancer-promoting role. In particular, the molecular mechanism(s) of SIRT6 activity in NSCLC are largely unknown.
In the present study, we found that SIRT6 is upregulated in NSCLC cell lines and patient-derived tumor tissues. SIRT6 overexpression was associated with clinicopathological features and prognosis in NSCLC. Silencing of endogenous SIRT6 reduced NSCLC cell migration and invasion, whereas ectopic SIRT6 overexpression promoted migration and invasion. Moreover, we demonstrated that SIRT6 promotes NSCLC cell invasion through the ERK1/2/MMP9 pathway.

RESULTS

SIRT6 is upregulated in NSCLC cell lines and tumor tissues

SIRT6 protein levels in both human NSCLC cell lines (A549, SPC-A1, GLC82, PC9 and L78) and a human lung fibroblast (HLF) cell line were assessed using western blotting analysis. All NSCLC cell lines expressed higher SIRT6 levels than did HLF cells (Figure 1A). Western blotting and immunohistochemical (IHC) staining demonstrated that SIRT6 was upregulated in 12 patient-derived NSCLC tissue samples as compared with paired adjacent noncancerous tissues (Figure 1B and 1C).

Correlations between SIRT6 expression and NSCLC clinical features

We analyzed SIRT6 expression in 174 paraffin-embedded archived NSCLC tissue samples using IHC. Mean patient age was 59.3 years (range, 31-84 years), and the median follow-up period was 30 months (range, 0-120 months). A total of 128 deaths were reported during the follow-up period. SIRT6 was highly expressed in 128 of 174 (73.6%) human NSCLC samples. Spearman's correlation analysis indicated an association between high SIRT6 expression and clinical stage (P = 0.026), T classification (P = 0.016) and N classification (P = 0.019). However, SIRT6 overexpression was not associated with other clinicopathological parameters, including gender, age, M classification, histology subtype and differentiation status (Table 1).

SIRT6 prognostic significance in NSCLC patients

Statistical analyses showed that high SIRT6-expressing NSCLC patients had a lower cumulative survival rate as compared with low SIRT6-expression patients (P = 0.034; Figure 2B).

SIRT6 promotes NSCLC cell migration and invasion

We investigated the effects of SIRT6 overexpression on NSCLC cell invasion. NSCLC cells were engineered to stably overexpress or silence SIRT6 (Figures 3A and 4A). Wound-healing assays showed that ectopic SIRT6 expression accelerated NSCLC cell migration (Figure 4B). Transwell assays (with or without Matrigel) revealed that SIRT6 overexpression increased the migration and invasion rates of A549 and L78 cells (Figure 4C), whereas SIRT6 silencing reduced migration and invasion (Figure 3C).

SIRT6 promotes migration and invasion via ERK1/2/MMP9

We examined the effects of SIRT6 expression on the ERK1/2/MMP9 pathway, which is involved in lung cancer metastasis and invasion. ERK is a member of the mitogen-activated protein kinase (MAPK) signaling pathway, which positively regulates activator protein 1 (AP-1). AP-1 acts as a master regulator of tumor cell migration and invasion by targeting genes such as MMP9. In A549 and L78 cells stably overexpressing SIRT6, MMP9 levels and activity and ERK1/2 phosphorylation were elevated compared to control cells (Figure 5A, 5B and 5D). Treatment of cells with the specific MEK1/2 inhibitor U0126 abrogated SIRT6 overexpression-mediated invasion and migration and MMP9 expression/activity (Figure 5C-5F). These results demonstrated that SIRT6 promotes invasion and migration through the ERK1/2/MMP9 pathway.

[Figure 4 legend, fragment: wound healing assays (B) in A549-vector (vector), A549-SIRT6 (SIRT6), L78-vector (vector) and L78-SIRT6 (SIRT6) cells; β-actin was used as the loading control for western blotting. Representative micrographs and quantification of cell migration and invasion from the transwell migration assay, with and without Matrigel (C). Images represent data from three independent trials with two technical replicates per trial.]
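A survival comparison of the kind reported above (high versus low SIRT6 expression, log-rank P = 0.034; Figure 2B) is straightforward to reproduce when per-patient follow-up times are available. The sketch below uses the Python lifelines package with synthetic placeholder data, not the actual cohort; any survival package with Kaplan-Meier and log-rank routines would serve equally well:

# Kaplan-Meier curves and a log-rank test comparing two expression groups,
# as in Figure 2B. The durations and event flags below are synthetic
# placeholders, not the actual 174-patient cohort.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
t_high = rng.exponential(30, 60)    # follow-up months, high-SIRT6 group
t_low  = rng.exponential(55, 40)    # low-SIRT6 group survives longer
e_high = rng.random(60) < 0.8       # True = death observed, False = censored
e_low  = rng.random(40) < 0.7

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="SIRT6 high")
print("median survival (high):", kmf.median_survival_time_)

result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p-value: {result.p_value:.4f}")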
DISCUSSION

To the best of our knowledge, this is the first report correlating SIRT6 overexpression with clinicopathologic NSCLC characteristics, such as tumor stage. In addition, we found that SIRT6 overexpression predicts poor NSCLC patient prognosis.

SIRT6 is highly expressed in thymus, skeletal muscle and brain tissues [24,25]. SIRT6 has two major biochemical activities, functioning as (1) a deacetylase and (2) a mono-ADP ribosyltransferase [26,27]. SIRT6 participates in numerous biological processes, including maintaining genomic stability, modulating senescence and the development of age-related diseases [8,9]. Recently, SIRT6 was implicated in cancers [10], although its role in various cell types differs. In human HCC, SIRT6 inhibits survivin to control cancer initiation via an AP-1-dependent regulatory network [28]. SIRT6 overexpression induced apoptosis in HT1080 (fibrosarcoma), MEF (mouse embryonic fibroblast), HeLa (human cervical cancer) and HCA2 (human sigmoid colon carcinoma) cells, but not in their non-cancerous/normal counterparts [29]. These data support a tumor suppressor role for SIRT6 in cancer. On the other hand, SIRT6 inhibition reduced prostate cancer cell viability and increased apoptosis, suggesting that SIRT6 may promote tumorigenesis [12]. SIRT6 promoted pancreatic cancer cell migration by inducing cytokines such as interleukin-8 and tumor necrosis factor in a Ca2+-dependent manner [15]. There is thus evidence to support conflicting functions for SIRT6 in cancer, either as a tumor suppressor or as a cancer-promoting factor under different circumstances.

However, the role of SIRT6 in NSCLC is debated in the literature. Han et al. [30] showed that SIRT6 suppresses NSCLC cell proliferation through Twist1 inhibition. Cai et al. [31] showed that SIRT6 overexpression can reduce cell proliferation, change the distribution of the cell cycle and induce apoptosis via Bcl-2 downregulation and Bax and cleaved caspase-3 upregulation. However, Yoko et al. [32] reported that SIRT6 knockdown did not affect A549 cell proliferation or Bcl-2 expression in this cell line. Similarly, Kim et al. [33] found that cAMP reduces SIRT6 expression to enhance apoptosis via inhibition of the Raf-MEK-ERK pathway.

In this study, we found a positive correlation between SIRT6 overexpression and TNM stage N classification in NSCLC. Our results also showed that stable SIRT6 knockdown in NSCLC cells reduced migration and invasion, whereas ectopic SIRT6 overexpression increased migration and invasion. We speculate that SIRT6 promotes NSCLC metastasis. Consistent with this hypothesis, a previous study suggested that elevated SIRT6 promotes pancreatic cancer cell migration and invasion and may play a vital role in disease progression [15].

Tumor metastasis is a multistep process, and considerable evidence shows that MMPs drive metastasis by degrading the extracellular matrix (ECM) [34]. Type I and IV collagens are the major ECM components. Under certain conditions, MMP9 degrades these collagens, thereby expediting and facilitating cancer cell invasion and metastasis [7,35].
MMP9 has been found in various cancer types, including glioma, lung cancer, pancreatic cancer and osteosarcoma [34]. MMP9 upregulation was predictive of poor prognosis in patients with lung cancer, glioma or colorectal cancer [43]. The MMP9 gene promoter region contains cis-elements for the Sp1 transcription factor, and ERK activation is crucial for Sp1-mediated MMP9 expression [36,37]. MMP9 reportedly also contains a highly conserved proximal AP-1 binding site. ERK belongs to the MAPK family of kinases, which transduce a wide variety of extracellular stimuli into intracellular cascades and regulate a number of transcription factors, including AP-1 [38,39]. MAPKs also participate in many cancer processes, such as cell proliferation, angiogenesis, migration and invasion [34,40]. In addition, the ERK1/2/MMP9 pathway also reportedly modulates migration and invasion in colorectal cancer, prostate cancer and NSCLC by targeting various genes [41-44].

In this study, we demonstrated a link between SIRT6 overexpression and increased ERK activation, as indicated by increased ERK1/2 phosphorylation without changes in total ERK1/2 levels. We observed subsequent MMP9 upregulation, ultimately leading to enhanced NSCLC cell migration and invasion. These SIRT6-mediated effects were MEK1/2-dependent, since concomitant treatment with a MEK1/2 inhibitor abolished the above effects. This report demonstrates SIRT6 upregulation in NSCLC for the first time, and suggests a functional role for SIRT6 in promoting migration and invasion through ERK1/2/MMP9 signaling.

In conclusion, our study demonstrated that SIRT6 upregulation was associated with an invasive NSCLC phenotype in patients and may promote NSCLC development and progression. Furthermore, we demonstrated that SIRT6 promoted metastasis through the ERK1/2/MMP9 pathway. SIRT6 may serve as a potential therapeutic target in NSCLC and its utility as a prognostic indicator warrants further study.

MATERIALS AND METHODS

Cell lines and cultures

The A549, SPC-A1, GLC-82 and PC-9 human lung adenocarcinoma cell lines, the L78 human squamous lung cancer cell line and the HLF human lung fibroblast cell line were used in this study. A549, PC9 and HLF cells were obtained from the Cell Bank, Chinese Academy of Sciences (Shanghai, China), and were maintained in our laboratory. SPC-A1, GLC82 and L78 cells were kind gifts from Prof. Liantang Wang at the Department of Pathology in the First Affiliated Hospital of Sun Yat-Sen University. Cells were grown in Roswell Park Memorial Institute (RPMI) 1640 medium (Gibco, BRL, Carlsbad, CA) supplemented with 10% fetal bovine serum (FBS; HyClone, Logan, UT) at 37°C in a humidified incubator with 5% CO2.

Plasmid construction and retroviral infection

The pBABE-puromycin plasmid was used to generate pBABE-puromycin-SIRT6, the SIRT6 expression vector. The pSUPER-retro-puro vector was used to generate pSUPER-SIRT6-shRNA, the plasmid expressing SIRT6-specific shRNA. A549 and L78 cells were infected with the retrovirus expressing SIRT6 or with the pBABE-puromycin plasmid alone (empty vector). SIRT6 was also knocked down in A549 and L78 cells by transduction with control or SIRT6-specific shRNA-harboring retroviruses. Stable cell lines overexpressing SIRT6 or silenced for SIRT6, and the corresponding control cell lines, were selected with puromycin for 10-14 days beginning 48 h after infection. Cell lysates prepared in the sampling buffer were resolved by SDS-PAGE and SIRT6 levels were assessed by western blotting.
Ectopic SIRT6 coding sequence was amplified by polymerase chain reaction (PCR). The primer sequences were: forward: 5′-TTCTTCGAAATGTCGGTGAATTACGCGGC-3′; reverse: 5′-CTAGCTAGCTCAGCTGGGGACCGCCTTGG-3′. The following SIRT6 sequences were targeted by shRNA: SIRT6-SH1, CCGGGAAGAATGTGCCAAGTGTAAGCTCGAGCTTACACTTGGCACATTCTTCTTTTTG and SIRT6-SH2, CCGGCAAGTTCGACACCACCTTTGACTCGAGTCAAAGGTGGTGTCGAACTTGTTTTTG.

Patients and tissue specimens

Fresh tumor tissue samples, along with paired noncancerous lung tissue samples, were obtained during surgery from 12 NSCLC patients treated at the First Affiliated Hospital of Sun Yat-sen University. Paraffin-embedded, archived NSCLC samples were obtained from 174 patients diagnosed with NSCLC between January 2004 and December 2009 at the Department of Pathology in the First Affiliated Hospital of Sun Yat-sen University. Histologic characterization and clinicopathological staging of the samples were determined according to WHO criteria [19] and the current Union for International Cancer Control tumor-node-metastasis (TNM) classification [20,21]. Patient clinical information is summarized in Table 1. Pertinent follow-up information was available for all patients. Written informed patient consent and study approval from the Institutional Research Ethics Committee were obtained.

Immunohistochemistry

IHC analysis was performed to study altered protein expression in 174 human NSCLC tissues using previously described methods [22]. Slides (paraffin-embedded sections) were incubated with polyclonal rabbit anti-human SIRT6 (Abcam, 1:400) overnight at 4°C. Immunohistostaining was scored separately by two independent investigators (Ran Wang and Minghui Zhang) blinded to histopathological features and patient data, and the average of these two scores was calculated for each sample. Scores were determined by assessing both staining intensity and the proportion of positively stained tumor cells. The proportion of positively stained tumor cells was graded as follows: 1, ≤ 25% positive tumor cells; 2, > 25% to ≤ 50% positive tumor cells; 3, > 50% to ≤ 75% positive tumor cells; and 4, > 75% positive tumor cells. Staining intensity was recorded on a scale of 0-3 with: 0, no staining, negative; 1, weak staining, light yellow; 2, moderate staining, yellowish brown; and 3, strong staining, brown. The SIRT6 staining index (SI) was calculated (values 0-12) as follows: SI = staining intensity × proportion of positively stained tumor cells. An SI score > 6 was used to define tumors with high SIRT6 expression and ≤ 6 represented tumors with low expression.
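Because the SI is a simple product of two ordinal grades, the scoring rule above is easy to reproduce programmatically. The following Python sketch implements it; the function and variable names are ours, chosen only for illustration:

def staining_index(intensity, positive_pct):
    """Compute the SIRT6 staining index (SI) described above.

    intensity: staining intensity grade, 0-3.
    positive_pct: percentage of positively stained tumor cells, 0-100.
    Returns the SI (0-12) and the expression category.
    """
    if positive_pct <= 25:
        proportion_grade = 1
    elif positive_pct <= 50:
        proportion_grade = 2
    elif positive_pct <= 75:
        proportion_grade = 3
    else:
        proportion_grade = 4
    si = intensity * proportion_grade
    return si, ("high" if si > 6 else "low")

# Example: moderate staining (2) in 60% of tumor cells -> SI = 6, "low".
print(staining_index(2, 60))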
Wound-healing assay

A549 and L78 cells were grown to confluence in cell culture dishes. A wound was inflicted on the monolayer using a 100 μL pipette tip. Cells were maintained in serum-free medium and allowed to migrate for 24 h before images of cells that had migrated into the wound area were taken.

Migration and invasion assays

Migration and invasion assays were performed as described previously [22]. Briefly, cells were plated onto cell culture inserts with 8 µm microporous filters (Corning) coated with (invasion) or without (migration) 40 μL Matrigel (1:8 dilution; BD Biosciences, Bedford, MA) and incubated for 24 h. Cells in the upper filters (inside the inserts) were removed, and cells that had migrated into or invaded the lower filters were fixed in 4% paraformaldehyde, stained with crystal violet and counted under a microscope. The number of migrated or invaded cells was counted in five random optical fields for each filter (100× magnification). For A549 and L78 cells, 2 × 10⁴ and 4 × 10⁴ cells were plated onto each insert, respectively. To test the effect of the specific MEK1/2 inhibitor U0126 (Sigma) on the migratory and invasive abilities of cells, A549 and L78 cells were pretreated with 10 µM U0126 for 30 min before migration and invasion assays were performed. Experiments were performed in triplicate.

Zymography

Cells plated in 6-well plates were cultured in fresh RPMI 1640 medium for another 48 h after a particular treatment. Culture medium was then collected and centrifuged, and the supernatant was preserved. Supernatants were concentrated using ultrafiltration (Millipore, Billerica, MA) and protein concentration was measured using a BCA kit (CWBiotech, Beijing, China). Protein (40 μg) was loaded and separated on a 10% SDS polyacrylamide gel containing 1% gelatin by electrophoresis at 27 mA/gel and 4°C. Gels were then processed according to the gelatin zymography kit instructions (Applygen, Beijing, China).

Statistical analysis

Data are expressed as mean ± standard deviation (SD) of values from three independent trials. Groups were compared using Student's t-test or one-way analysis of variance (ANOVA). The χ² test and Spearman's correlation analysis were used to analyze relationships between SIRT6 expression and clinicopathological characteristics. The Kaplan-Meier method was used to plot survival curves and the log-rank test was used to compare survival curves. P < 0.05 was considered statistically significant. Statistical analyses were performed using SPSS 13.0 software.
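The study ran these analyses in SPSS 13.0; for readers who prefer an open-source route, an equivalent workflow can be sketched in Python with scipy and lifelines. This is only a sketch of the same tests, and the arrays below are placeholders, not study data:

import numpy as np
from scipy.stats import ttest_ind, f_oneway, chi2_contingency, spearmanr
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Two-group comparison (e.g., migrated cells, vector vs. SIRT6).
t_stat, p_t = ttest_ind([120, 135, 128], [240, 225, 251])
# One-way ANOVA across three or more groups.
f_stat, p_f = f_oneway([1.1, 1.3], [2.0, 2.2], [3.1, 2.9])

# Association between SIRT6 level and a clinicopathological feature.
contingency = np.array([[30, 45], [60, 39]])  # rows: low/high SIRT6
chi2, p_chi, dof, _ = chi2_contingency(contingency)
rho, p_rho = spearmanr([1, 1, 2, 2, 3], [0, 1, 1, 1, 1])

# Kaplan-Meier curves and log-rank test for high vs. low SIRT6 expression.
t_high, e_high = [12, 30, 45, 50], [1, 1, 1, 0]  # months; 1 = event observed
t_low, e_low = [40, 55, 61, 72], [0, 1, 0, 0]
KaplanMeierFitter().fit(t_high, event_observed=e_high, label="SIRT6 high")
result = logrank_test(t_high, t_low,
                      event_observed_A=e_high, event_observed_B=e_low)
print(p_t, p_f, p_chi, p_rho, result.p_value)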
IHDS: Intelligent Harvesting Decision System for Date Fruit Based on Maturity Stage Using Deep Learning and Computer Vision

Date is the main fruit crop of the Kingdom of Saudi Arabia (KSA), approximately covering 72% of the total area under permanent crops. The Food and Agriculture Organization states that date production worldwide was 3,430,883 tons in 1990, which increases yearly, reaching 8,526,218 tons in 2018. Date production in KSA was around 527,881 tons in 1990, approximately reaching 1,302,859 tons in 2018. Harvesting date fruits at an appropriate time according to a specific maturity stage or level is a critical decision that significantly affects profit. In the present study, we proposed an intelligent harvesting decision system (IHDS) based on date fruit maturity level. The proposed decision system used computer vision and deep learning (DL) techniques to detect seven different maturity stages/levels of date fruit (Immature stage 1, Immature stage 2, Pre-Khalal, Khalal, Khalal with Rutab, Pre-Tamar, and Tamar). In the IHDS, we developed six different DL systems, and each one produced different accuracy levels in terms of the seven aforementioned maturity stages. The IHDS used datasets that have been collected by the Center of Smart Robotics Research. The maximum performance metrics of the proposed IHDS were 99.4%, 99.4%, 99.7%, and 99.7% for accuracy, F1 score, sensitivity (recall), and precision, respectively.

I. INTRODUCTION

According to the Ministry of Agriculture in Saudi Arabia, an estimated 24-25 million palm trees approximately produce a million tons of dates yearly, accounting for an estimated 15% of the global date production [1], [2]. The estimated average annual yield of dates per palm tree in Saudi Arabia is 48.0 kg, with a selling price estimated at SR 4.00/kg. Several Saudi farmers are suffering from a lack of skilled labor; hence, around 23.00% of the farmers sell their produce from the farm itself to foreign labor for a cheap price [1]. According to the Food and Agriculture Organization of the United Nations, global date production is increasing annually. Date production in Saudi Arabia was around 527,881 tons in 1990, approximately reaching 1,302,859 tons in 2018. However, despite the increase in cultivated areas, productivity per hectare has declined in recent years. This may be due to the lack of skilled labor. Saudi Arabia was the second largest date-producing country in 2018 and the third largest in 1990, with a cultivated area of about 1,116,125 hectares in 2018. According to this statistical information, the palm productivity of Saudi Arabia is relatively low based on the number of palms. This may be attributed to several reasons, including the inability to estimate the weight of dates per palm and the maturity level before harvesting, while the "dates" crop is still on the trees (the farmer therefore sells the crop on the palm trees without knowing the weight and degree of maturation); weak pre-harvesting maintenance; and the lack of skilled laborers.

A. DATE HARVESTING

Date harvesting involves several tasks before, during, and after harvesting for better yield and tree maintenance. Below, we give a brief description of these tasks.
B. PRE-HARVESTING TASKS

In this stage, many pre-harvesting care tasks are performed, including dethorning, thinning the palm date tree, aligning the bunch, bunch attaching, removing dust, exterminating date spiders, bagging, and estimating the weight and yield. Pre-harvesting tasks are done to ensure the quality of the date fruit, making the fruits ready for the next stage, which is the harvesting stage.

C. HARVESTING DATE PALM BUNCHES

There are different types of harvesting: either picking the date fruits one by one, shaking the bunch so that most of the dates fall down, or cutting down the bunch at a certain time. In this proposal, we focus on date palm trees requiring full bunch cutting.

D. POST-HARVESTING

Post-harvesting consists of many operations that happen after the dates are removed. In this step, the palm trees do not contain date fruits anymore. The remaining brown dead leaves are cut using a circular saw at a very precise angle (avoiding sharp cuttings) for the safety of the manual workers. In KSA, the traditional way is to avoid cutting many leaves (an average of six leaves per tree) to avoid the vertical growth of the palm trees, keeping them as short as possible to make the next harvest easy for the manual workers. Various cleaning operations can be automated, such as brown-leaf cutting and trunk cleaning. These operations require less effort and precision than those in the harvesting process.

To solve the inability to estimate the maturity level of dates per palm before harvesting, we have developed a smart system using DL techniques to predict the maturity level of the dates before harvesting. Furthermore, we proposed an intelligent harvesting decision system (IHDS) based on the maturity level detection of date fruits. The proposed decision system uses computer vision and DL techniques to detect seven different maturity stages/levels of date fruit (Immature stage 1, Immature stage 2, Pre-Khalal, Khalal, Khalal with Rutab, Pre-Tamar, and Tamar) before harvesting.

This paper is organized as follows: in Section 2, a literature review is presented. The methodology and dataset are explained in Section 3. The proposed system is presented in Section 4, and Section 5 explains the training and testing parameters. The experimental results are illustrated in Section 6, Section 7 compares the proposed system with other systems, and the conclusion is given in Section 8.

II. LITERATURE REVIEW

Many studies have been conducted to classify fruit maturity levels using image processing technologies. In 2014, Zhang et al. [4] used a color-grading method to determine the quality and maturity of date fruits. It used 2-D histograms with a color-grading category to define the co-occurrence frequency. In 2015, Gokul et al. [5] used image processing to estimate the maturity of sweet lime. They classified maturity through RGB color coding based on the RG ratio. In 2013, Prabha and Kumar introduced a maturity classification system for banana fruit using an image processing technique in terms of the color and size values of their images [6]. They classified the maturity of banana into three different stages, namely, under-mature, mature, and over-mature. The mean color intensity from the histogram, and the area, perimeter, and major and minor axis lengths from the size values, were extracted from calibration images to classify the maturity stage. However, most of these techniques use thresholds for features, such as color, shape, and size.
In 2014, Yamamoto et al. [7] used machine learning (ML) approaches to detect tomato fruit maturity stages without adjusting the threshold values for fruit. They proposed a method containing three steps: pixel-based segmentation, blob-based segmentation, and X-means clustering. They achieved precision levels of 1.00 and 0.80 for mature and immature fruits, respectively. Several other studies used robotics technology and machine vision in agricultural applications, calling the result harvesting robots. These harvesting robots can be used for fruit picking [8] and for detecting fruit-bearing branches [9]. Another study [10] developed a detection algorithm based on color, depth, and shape information. Chen et al. [11] introduced a multi-camera scheme for agricultural applications to increase the perception range of vision systems.

Several studies have been done to classify date fruits. Nasiri et al. [12] used computer vision and ML techniques to classify three maturity stages (Khalal, Rutab, and Tamar) and one defective stage. The dataset was built using single dates with a uniform background. This study used the VGG-16 architecture model with max pooling, dropout, batch normalization, and dense layers. They collected the dataset through a smartphone, and their system achieved an overall accuracy of 96.98%. Another study has been done by Altaheri et al. [13], who proposed a framework using a vision system to classify date fruits in an orchard environment. They used the proposed framework to classify date fruit images based on type and maturity. This study used the VGG-16 and AlexNet architecture models, and achieved accuracy levels of 99.01% for type classification and 97.25% for a five-level maturity classification system.

Several other studies have been done to classify fruits other than dates. In 2020, Behera et al. [14] introduced two methods based on ML techniques to classify papaya fruit maturity stages. They used a very small dataset with 300 papaya fruit images, consisting of 100 images of each of the three maturity stages. They used seven pretrained architectures: VGG-19, VGG-16, ResNet101, ResNet50, ResNet18, AlexNet, and GoogleNet. Another study was done in 2019 [15] by Pacheco and López to classify the maturity of Milano and Chonto varieties of tomatoes using ML techniques. In 2020, Caladcad et al. introduced a system to classify the maturity of Philippine coconut using ML techniques [16]. They classified the Philippine coconut into three different maturity levels (pre-mature, mature, and over-mature) using random forest and support vector machine (SVM) classification systems. Another study was done in 2020 by de Luna et al. [17] to monitor the growth stage of tomatoes using SVM, ANN, and KNN, which achieved maximum accuracy levels of 99.81% for SVM, 99.32% for KNN, and 99.32% for ANN. Another study using ML was introduced in 2020 by Chen et al. [11] to classify the maturity levels of sweet red and yellow peppers. They achieved 98.2% and 97.3% accuracy levels for red and yellow pepper maturity classification, respectively, for two maturity stages; and 89.5% and 97.3% for red and yellow pepper maturity classification, respectively, for four maturity stages.

III. METHODOLOGY

In general, DL works better with huge datasets than with smaller ones. For applications with a small dataset, the transfer learning concept is used to enhance the efficiency and outcomes of the system.
In the proposed IHDS system, we started by building the dataset named ''DATE FRUIT DATASET FOR AUTOMATED HARVESTING AND VISUAL YIELD ESTIMATION'' [18]. Then, we used this dataset to train and evaluate the proposed IHDS system, which uses three types of CNN: VGG-19 [19], Inception-v3 [20], and NASNet [21]. The IHDS takes live videos from video sources, extracts and manipulates the images, and then the manipulated images are entered into the maturity level detection system (MLDS) to identify the date fruit maturity level (Immature stage 1, Immature stage 2, Pre-Khalal, Khalal, Khalal with Rutab, Pre-Tamar, and Tamar).

Selected CNN Architecture

In this work, instead of using traditional image processing techniques, we used CNNs to detect the maturity stages/levels of date fruit from the images because of their high accuracy. To save time, obtain better accuracy, and detect high-level features such as edges and patterns, we used pretrained CNN models instead of an ad hoc network, and then added more layers to the pretrained CNN models, as illustrated in the succeeding part of this section. In the proposed system, we use three models, namely, VGG-19 [19], Inception-v3 [20], and NASNet [21].

The VGG model was developed to learn graphic patterns from pixel images with minimal pre-processing, and was configured within the ImageNet project for applications in visual object detection research. The VGG network is characterized by its simplicity, using only 3 × 3 convolutional layers stacked on top of each other in increasing depths. Volume size reduction is handled by max pooling. Two fully connected layers, each with 4,096 nodes, are then followed by a Softmax classifier. In the proposed system, we froze all layers from 1 to 15 of the VGG-19 architecture. Then, we added five more layers (Global average pooling, Dropout (0.3), Dense (128), Dense (64), and Softmax (2/3/4/5/6/7 classes)) before the last layer. In the end, the VGG-19 architecture has a total of 20,098,759 parameters, 7,153,799 trainable parameters, and 12,944,960 non-trainable parameters for the seven-stage MLDS (TABLE 2).

In the beginning, the Inception CNN architecture was introduced as GoogleNet and called Inception-v1. Then, Ioffe and Szegedy enhanced the Inception architecture by introducing batch normalization and called it Inception-v2 [22]. Later, Szegedy et al. (2015) enhanced the Inception-v2 CNN by adding factorization and called it Inception-v3 [20]. The main idea of the Inception architecture was to find the optimal local construction of the convolutional network and spatially repeat it [20]. In general, Inception was introduced based on the idea that several connections between layers are ineffective and have redundant information due to the correlation between them. Therefore, the Inception architecture used 22 layers in a parallel manner (Figure 3), which benefited from the several auxiliary classifiers within the intermediate layers, thereby improving the discrimination capacity in the lower layers [23]. For Inception-v3, we added five more layers (Global average pooling, Dense (1,024), Batch normalization, Dense (1,024), and Softmax (2/3/4/5/6/7 classes)) before the last layer. In the end, the Inception-v3 architecture had a total of 23,916,327 parameters, 23,877,799 trainable parameters, and 38,528 non-trainable parameters for the seven-stage MLDS.

NASNet is a Google DL model introduced in May 2017. It produces a small network architecture. Google introduced NASNet mainly for image classification applications. For NASNet, we added five more layers (Global average pooling, Dense (1,024), Batch normalization, Dense (1,024), and Softmax (2/3/4/5/6/7 classes)) before the last layer. In the end, the NASNet architecture had a total of 6,417,051 parameters, 6,376,217 trainable parameters, and 40,834 non-trainable parameters for the seven-stage MLDS.
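To make the head-attachment pattern concrete, the following Keras sketch freezes the first 15 layers of VGG-19 and appends the layers listed above for the seven-stage system. It is a minimal sketch, not the paper's exact script, and the ReLU activations on the intermediate Dense layers are our assumption, since only the layer sizes are reported:

from tensorflow.keras.applications import VGG19
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model

NUM_CLASSES = 7  # seven-stage maturity level detection system

# Pretrained convolutional base without its original classifier head.
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers[:15]:      # freeze layers 1 to 15, as described above
    layer.trainable = False

# Custom head: Global average pooling -> Dropout(0.3) -> Dense(128)
# -> Dense(64) -> Softmax over the maturity classes.
x = GlobalAveragePooling2D()(base.output)
x = Dropout(0.3)(x)
x = Dense(128, activation="relu")(x)  # activation assumed
x = Dense(64, activation="relu")(x)   # activation assumed
outputs = Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs=base.input, outputs=outputs)
model.summary()  # trainable/non-trainable counts can be checked against TABLE 2

The Inception-v3 and NASNet variants follow the same pattern, swapping the base constructor and the head layers for those listed above.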
Dataset

We use a dataset named ''DATE FRUIT DATASET FOR AUTOMATED HARVESTING AND VISUAL YIELD ESTIMATION'' [18] that was built by the Center of Smart Robotics Research (www.CS2R.ksu.edu.sa). The date fruit dataset was introduced for use in the pre-harvesting and harvesting stages. It consists of two different datasets, namely, Dataset-1 and Dataset-2. Dataset-1 contains about 8,079 pictures captured from 350 bunches that belong to 29 palms, using two Canon cameras (EOS-1100D and EOS-600D), with resolutions of 4,272 × 2,848 and 5,184 × 3,456, respectively. The images were taken under different natural daylight conditions: in the morning (9:00-11:00) or afternoon (3:00-5:00). Dataset-1 covers all the maturity levels of date fruits: Immature stage 1, Immature stage 2, Pre-Khalal, Khalal, Khalal with Rutab, Pre-Tamar, and Tamar (Figure 5 and Figure 6). Dataset-1 was labeled according to type and maturity. Dataset-1 and its annotation files are available at https://ieee-dataport.org/open-access/date-fruit-dataset-automated-harvesting-and-visual-yield-estimation. Dataset-2 was built for weight estimation, and consists of 152 date bunches from 13 palms. These bunches were weighed after harvesting, and their images were captured with a white background.

A. PROPOSED SYSTEM

In this paper, we are proposing an IHDS based on maturity level detection of date fruits. As shown in Figure 7, the IHDS takes live videos from video sources (unmanned aerial vehicles or any other source), then extracts the images from the live video stream. After that, image manipulation is performed on the extracted images. Then, the manipulated images are entered into the MLDS, which identifies the date fruit maturity level (Immature stage 1, Immature stage 2, Pre-Khalal, Khalal, Khalal with Rutab, Pre-Tamar, and Tamar), as shown in Figure 8.

B. THE MATURITY LEVEL DETECTION SYSTEM (MLDS)

The MLDS was designed to detect seven different maturity types or levels of date fruits (Figure 8) (Immature stage 1, Immature stage 2, Pre-Khalal, Khalal, Khalal with Rutab, Pre-Tamar, and Tamar) based on DL techniques. In the MLDS, we developed six different DL systems with different accuracy levels, as follows: a two-stage maturity detection system to determine two maturity stages (Immature and Tamar); a three-stage maturity detection system to determine three maturity stages (Immature, Khalal, and Tamar); a four-stage maturity detection system to determine four maturity stages (Immature, Khalal, Khalal with Rutab, and Tamar); a five-stage maturity detection system to determine five maturity stages (Immature, Khalal, Khalal with Rutab, Pre-Tamar, and Tamar); a six-stage maturity detection system to determine six maturity stages (Immature, Pre-Khalal, Khalal, Khalal with Rutab, Pre-Tamar, and Tamar); and a seven-stage maturity detection system to determine seven maturity stages (Immature stage 1, Immature stage 2, Pre-Khalal, Khalal, Khalal with Rutab, Pre-Tamar, and Tamar). These stage groupings are summarized in the sketch below. In the IHDS, we used a seven-stage MLDS to determine seven maturity stages. All maturity level systems used an end-to-end DL framework in detecting the date fruit maturity level from the gathered images.
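The six granularities above can be captured in a small configuration table. A minimal Python sketch follows (the dictionary and variable names are ours; the class lists follow the paper's description):

# Class lists for the six maturity level detection systems described above.
MATURITY_SYSTEMS = {
    2: ["Immature", "Tamar"],
    3: ["Immature", "Khalal", "Tamar"],
    4: ["Immature", "Khalal", "Khalal with Rutab", "Tamar"],
    5: ["Immature", "Khalal", "Khalal with Rutab", "Pre-Tamar", "Tamar"],
    6: ["Immature", "Pre-Khalal", "Khalal", "Khalal with Rutab",
        "Pre-Tamar", "Tamar"],
    7: ["Immature stage 1", "Immature stage 2", "Pre-Khalal", "Khalal",
        "Khalal with Rutab", "Pre-Tamar", "Tamar"],
}

num_stages = 7  # the IHDS uses the seven-stage MLDS
class_names = MATURITY_SYSTEMS[num_stages]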
We have developed an ML system that explicitly detects the date fruit maturity level from raw images without requiring feature extraction. As illustrated in Figure 8, we started by collecting dataset images (thousands of date fruit images) at different maturity levels (Immature stage 1, Immature stage 2, Pre-Khalal, Khalal, Khalal with Rutab, Pre-Tamar, and Tamar). Then, we augmented the images by resizing them based on the standard size of their respective CNN models. After that, we divided the dataset into a training dataset and a testing dataset, and then applied the retrained CNN models (VGG-19, Inception-V3, and NASNet) to determine date fruit maturity levels.

C. TRAINING AND TESTING PARAMETERS

In the proposed MLDS, three well-known pretrained deep learning CNNs (NASNet, Inception-V3, and VGG-19) were trained, evaluated, and tested using the KERAS framework to detect the date fruit maturity level from the gathered images. The training of the different models was conducted on a computer with an Intel Core i9-9880H processor @ 2.3 GHz, 32 GB RAM, and an 8 GB graphics processing unit (GPU) card, on 64-bit Windows 10. In the present study, we used the ImageDataGenerator for augmentation with the following parameters: rotation range = 40, width shift range = 0.2, height shift range = 0.2, shear range = 0.2, and zoom range = 0.2. Also, we resized all images to 224 × 224 to fulfill the requirement of the pretrained models. We used the Anaconda 4.8.3 environment, the Spyder 3.7 development environment, and Keras 2.2.4 with a Tensorflow 2.1.0 backend. We used the following training parameters: batch size = 16, number of epochs = 30, and the ADAM optimizer with learning rate = 0.0001. For training and testing, we used a five-fold cross-validation method. We also benefited from the Python implementation by Talha Anwar [24].
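The following sketch ties the augmentation and training parameters above to a five-fold cross-validation loop with per-fold metrics. It is a minimal illustration, not the authors' script (which builds on [24]); the stand-in arrays replace the actual Dataset-1 loading code:

import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score)
from sklearn.model_selection import KFold
from tensorflow.keras.applications import VGG19
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def build_model():
    # Reuse the head-attachment pattern from the earlier VGG-19 sketch.
    base = VGG19(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3))
    for layer in base.layers[:15]:
        layer.trainable = False
    x = GlobalAveragePooling2D()(base.output)
    x = Dropout(0.3)(x)
    x = Dense(128, activation="relu")(x)
    x = Dense(64, activation="relu")(x)
    return Model(base.input, Dense(7, activation="softmax")(x))

# Stand-in data; in practice X holds Dataset-1 images resized to 224 x 224
# and y holds the integer maturity-stage labels.
X = np.random.rand(50, 224, 224, 3).astype("float32")
y = np.random.randint(0, 7, size=50)

# Augmentation parameters as listed above.
augmenter = ImageDataGenerator(rotation_range=40, width_shift_range=0.2,
                               height_shift_range=0.2, shear_range=0.2,
                               zoom_range=0.2)

for fold, (tr, te) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(X)):
    model = build_model()
    model.compile(optimizer=Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(augmenter.flow(X[tr], y[tr], batch_size=16),
              epochs=30, verbose=0)
    pred = np.argmax(model.predict(X[te]), axis=1)
    print(f"fold {fold}: "
          f"acc={accuracy_score(y[te], pred):.3f} "
          f"f1={f1_score(y[te], pred, average='macro'):.3f} "
          f"recall={recall_score(y[te], pred, average='macro'):.3f} "
          f"precision={precision_score(y[te], pred, average='macro'):.3f}")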
IV. RESULTS

The evaluation of the proposed IHDS is based on Dataset-1 (https://ieee-dataport.org/open-access/date-fruit-dataset-automated-harvesting-and-visual-yield-estimation). For each MLDS, we tested the VGG-19, Inception-V3, and NASNet models for the two-stage, three-stage, four-stage, five-stage, six-stage, and seven-stage maturity detection systems. Well-known performance metrics (TABLE 4), such as F1 score, accuracy, recall, precision, and the confusion matrix, were used to evaluate the models and compare the obtained results. The VGG-19, Inception-V3, and NASNet architecture models were trained using Dataset-1. For the two-stage maturity detection, we used 1,302 images, with 661 images. We performed a five-fold cross-validation with 50 epochs for each process for all maturity level detection systems, for all VGG-19, Inception-v3, and NASNet models, and took the overall average of all the results. Figure 9 illustrates the learning performance accuracy of VGG-19 in a single-fold cross-validation, with 50 epochs, for all stages of the maturity level detection systems. As shown in Figure 9, the VGG-19 model has a good fit and stable performance. The training and validation loss decreased to a point of stability with a minimal gap between the two final loss values. Figure 10 shows the confusion matrix for VGG-19, for one random fold, for all maturity stage detection systems.

V. DISCUSSION

In this section, we compare the proposed system with reference studies using the same dataset (Dataset-1), as well as other datasets. The comparison is based on well-known performance metrics (F1 score, accuracy, sensitivity (recall), and precision). Our study and the reference study by Altaheri et al. [13] used the same dataset, captured in a farm environment with the date fruit bunches in an orchard, whereas other studies used different datasets of single dates with a uniform background. TABLE 7 illustrates a comparison of the evaluation parameters of the proposed system and the reference studies of Altaheri et al. [13] and Nasiri et al. [12]. In the proposed system, VGG-19 outperformed the other models and showed outstanding results for all performance metrics for all maturity detection systems. As shown in TABLE 7, our proposed system using VGG-19 outperformed the other systems. The reference study [13] had values of 97.25%, 89.56%, 96.1%, and 97.2% for accuracy, F1 score, sensitivity (recall), and precision, respectively, for five maturity levels using VGG-16, whereas our proposed system gave 98.3%, 98.6%, 98.9%, and 98.24% for accuracy, F1 score, sensitivity (recall), and precision, respectively, for five maturity levels with the same dataset. The reference study [13] achieved 92.3%, 96.71%, 86.98%, and 92.3% for accuracy, F1 score, sensitivity (recall), and precision, respectively, using VGG-16 for seven maturity levels, whereas our proposed system gave 97%, 97.6%, 98%, and 96.9% for accuracy, F1 score, sensitivity (recall), and precision, respectively, for seven maturity levels with the same dataset. With a comparably outstanding performance, our proposed system also outperformed the reference study [12] with a four-stage maturity detection system. The reference study [12] achieved 98.49%, 97.33%, and 97.33% for accuracy, sensitivity (recall), and precision, respectively, using VGG-16 for four maturity levels, whereas our proposed system achieved 98.5%, 98.6%, 98.5%, and 98.5% for accuracy, F1 score, sensitivity (recall), and precision, respectively, for four maturity levels.

VI. CONCLUSION

The present study proposed an intelligent harvesting decision system called IHDS to harvest date fruits at an appropriate time based on a specific maturity stage, using DL and computer vision. In fact, harvesting date fruits at the proper time is a critical decision that significantly affects profit. In the present study, we were able to classify all maturity stages of date fruit (Immature stage 1, Immature stage 2, Pre-Khalal, Khalal, Khalal with Rutab, Pre-Tamar, and Tamar). We used the pretrained VGG-19, Inception-V3, and NASNet architecture models. The maximum performance metrics of the proposed IHDS were 99.4%, 99.4%, 99.7%, and 99.7% for accuracy, F1 score, sensitivity (recall), and precision, respectively. The proposed IHDS was compared with two other studies from the literature, and it comparably outperformed them. In the future, we are planning to enhance the system to estimate the date fruit type, the maturity level, and the weight of date fruits per palm in the pre-harvesting phase.
MOHAMMED ARAFAH received the Ph.D. degree in computer engineering from the University of Southern California, Los Angeles, USA. He is currently an Associate Professor with the Department of Computer Engineering, King Saud University, Riyadh, Saudi Arabia. He has published in the areas of multistage interconnection networks, MPLS networks, and LTE networks. His current research interests include robotics, cooperative communication, 5G mobile communications, software defined radios, and multiple antenna systems.

MOHAMED AMINE MEKHTICHE (Member, IEEE) was born in Medea, Algeria, in 1987. He received the B.S. and M.S. degrees in electronics engineering from the University of Blida, in 2010 and 2012, respectively. Since 2014, he has been a Researcher with the Center of Smart Robotics Research, King Saud University, Saudi Arabia. His current research interests include image processing and stereo vision.
Dashboard for Evaluating the Quality of Open Learning Courses

Universities are developing a large number of Open Learning projects that must be subject to quality evaluation. However, these projects have some special characteristics that mean the usual quality models do not respond to all their requirements. A fundamental part of a quality model is a visual representation of the results (a dashboard) that can facilitate decision making. In this paper, we propose a complete model for evaluating the quality of Open Learning courses and the design of a dashboard to represent its results. The quality model is hierarchical, with four levels of abstraction: components, elements, attributes and indicators. An interesting contribution is the definition of the standards in the form of fulfillment levels, which are easier to interpret and allow using a color code to build a heat map that serves as a dashboard. It is a regular nonagon, divided into sectors and concentric rings, in which each color intensity represents the fulfillment level reached by each abstraction level. The resulting diagram is a compact and visually powerful representation, which allows the identification of the strengths and weaknesses of the Open Learning course. A case study of an Ecuadorian university is also presented to complete the description and draw new conclusions.

Introduction

According to the Cambridge dictionary [1], Open Learning (OL) is a way of studying that allows people to learn where and when they want, and to receive and send their materials through electronic means. For Caliskan [2], the term Open Learning is used to describe learning situations in which learners have the flexibility to choose from a variety of options in relation to the time, place, instructional methods, modes of access, and other factors related to their learning processes. Although they are not exactly the same [3], the term Open Learning is closely related to other terms such as e-Learning, Online Learning, Technology-enhanced Learning, Flexible Learning, or Distance Learning [2]. All in all, universities are developing a large number of Open Learning projects based on Information and Communication Technologies (ICT), mainly e-Learning courses (as it has been agreed to call Electronic or Online Learning), to support their students within the teaching and learning process.

As with any process developed in the university environment, OL or e-Learning must be subject to parameters that allow the evaluation of its quality. However, these systems have some special characteristics that mean the usual quality systems do not respond to all their requirements. For example, the high dependence on technology, which also entails the need to train teachers and students in that technology, or the need for different teaching and instructional methodologies, are differentiating features of OL systems. Although several proposals have been developed to evaluate the quality of OL, many of them are not transferable, are unstructured, are incomplete or do not present a formal description, as will be explained in the background section.

On the other hand, a fundamental part of quality models is the visual representation of the results, since the main objective of these models is decision making based on the diagnosis established by the quality model. The usual form of information representation for decision making is the dashboard, consisting of a graphic representation of a set of indicators and other relevant information for the user who makes the decisions [4].
The general purpose of this research is to provide a compact and easily interpretable visualization tool for decision making in the context of an OL course quality assessment model. The proposed tool has the desirable features of a useful dashboard. The document is organized as follows: in Section 2, we review some previous concepts and work related to Open Learning, quality models for OL, and dashboard design and indicator selection. Section 3 is devoted to presenting the methodology used in this research. In Section 4 we present our proposal, explaining the quality model and its principles. The research instruments for data collection, including their design and validation, are presented in Section 5, while Section 6 is devoted to the design and construction of the dashboard. A case study of the application of the model to an Ecuadorian university is presented in Section 7, to illustrate the interpretation of the results. Finally, the conclusions are set out in Section 8.

Background

In this section we present the background of the research, focusing on three main issues that support the proposal: the concepts of Open Learning, e-Learning, b-Learning and other related ones; the existing models for quality assessment of OL systems; and the visualization of quality assessment results through dashboards, as well as their desirable features.

Open Learning, e-Learning, b-Learning and Other Related Concepts

Open Learning is a term used to describe flexible learning experiences in which the time, place, instructional methods, modes of access, and other factors related to their learning can be chosen by the learners [2]. Bates [3] considers that Open Learning is primarily a goal, or an educational policy, whose essential characteristic is the removal of barriers to learning. The concepts of Open Learning, Distance Learning, Flexible Learning and e-Learning (and other terms) are related and frequently considered as equivalent, though some different nuances are reported. For instance, Bates [3] states that Distance Learning is less a philosophy and more a method, so that students can study in their own time, at the place of their choice (home, work or learning center), and without face-to-face contact with a teacher. About Flexible Learning, the same author also considers it more of a method than a philosophy, but he reports a nuance: the flexibility concerns aspects such as the geographical, social and time constraints of individual learners, rather than those of the institution. Flexible Learning may include distance education, but it may also include delivering face-to-face training in the workplace, opening the campus longer hours, or organizing weekend or summer schools. Although Open Learning, Distance Learning and Flexible Learning can mean different things, they all have one feature in common: they provide alternative means of high quality education for those who either cannot take conventional, campus-based programs, or choose not to [3].

The term e-Learning is much more modern, born as a result of the emergence, explosion and generalization of information technologies and the Internet in particular, with its associated tools (e-mail, World Wide Web, videoconferencing, apps) and devices (computers, tablets, smartphones). It dates back to the late 1980s and was consolidated during the 1990s [5], up to its current omnipresence.
Although there is no consensus on the definition of e-Learning, we have chosen that of Koper [6]: e-Learning can be defined as the use of ICT to facilitate and improve learning and teaching. The term e-Learning has given rise to other related terms: mobile learning or m-Learning, ubiquitous learning or u-Learning, and blended learning or b-Learning. B-Learning is the mode of learning that combines classroom teaching with non-classroom technology [7]. In a b-Learning course, the methods and resources of both face-to-face and distance learning are mixed, giving students more responsibility in their individual study by providing them with skills for such studies. Moreover, b-Learning is an option for introducing information technologies among a reluctant teaching staff, and it fosters innovation processes and improvement of teaching quality [7].

The philosophy of Open Learning has given rise to a set of derivatives with a slightly different nuance. The term open has come to be used in recent times as a synonym for freely accessible, public domain or open license. This is the sense in initiatives such as Open Educational Resources (OER), OpenCourseWare (OCW) or Massive Open Online Courses (MOOC). Despite the diversity of these concepts and tools, and the arguments for or against each one, they all have in common one objective: to improve the way the contents are made available to the learners [8].

In this paper, it was decided to use the terms Open Learning, e-Learning and b-Learning, since the proposed model can be applied to all these cases. The former is used because of its tradition and because it is a particularly broad concept that includes all the others. The second because of its wide use, having become almost the standard term. The third because our model takes into account classroom teaching in addition to virtual teaching.

Quality in Open Learning Systems

It is not possible to find a consensus on the concept of quality of education in a university, the definition of which varies greatly since quality has different perspectives. In this section, we are going to mention some contributions. One of the consequences of Open Learning is the self-organization of learning by the students, i.e., the student can lead his or her own learning, which implies a radical change in the roles assumed by the instructors and the students themselves. The instructor becomes a learning guide or facilitator, while the learners abandon their passive role and become the main protagonists. Therefore, if teaching and learning are changing, we cannot expect that the definition of quality and the method used to assess it will not also change [9].

The culture of quality is already well established in the universities. However, when it comes to Open Learning, quality assessment is addressed in the literature from very different points of view and for very specific cases. As a result, proposals are often not very transferable and it is difficult to find standards that allow us to undertake the task of assessing the quality of an Open Learning system in a structured and formal way. However, there are some authors that are looking for alternatives to the definition of quality in the field of Open Learning, as explained next. Ehlers [9] considers that with technology transforming the higher education institutions, the concept of quality must be redefined. Quality is no longer an add-on to teaching and to learning, but quality is the constituting issue.
Therefore, the question is not how quality can be assured for technology-enhanced learning systems, but rather how technology-enhanced learning can be provided in a way so that high quality learning scenarios unfold. Vagarinho and Llamas-Nistal [10] establish that the quality of e-Learning is understood as the adequate fulfillment of the objectives and needs of the people involved, as a result of a transparent and participatory negotiation process within an organizational framework. Furthermore, in the field of e-Learning, quality is related to processes, products and services for learning, education and training, supported by the use of information and communication technologies. Martínez-Caro, Cegarra-Navarro and Cepeda-Carrión [11] give some clues about the main factors that affect the quality of e-Learning: the design and management of the learning environment, and interaction. Peer interaction, assessment and cooperation, and student-teacher interactions contribute to establishing an environment that encourages students to better understand the content. There have been efforts to evaluate the potential of other OL environments such as m-learning, through the evaluation of learning activities [12] and through more complete analyses that try to develop an authentic learning-based evaluation method and design approach for m-learning activities [13].

The ESVI-AL project [14] is about accessibility in e-Learning, but it makes an interesting analysis of the areas that must be studied to guarantee the quality of the e-Learning process:

• Quality of the technology, from the technical point of view: availability, accessibility, security, etc.
• Quality of the learning resources included in the platform: content and learning activities.
• Quality of the instructional design of the learning experience: design of learning objectives, activities, timing, evaluation, etc.
• Quality of the teacher and student training in the e-Learning system.
• Quality of the services and support, help and technical and academic support offered to the users of the system.

A more exhaustive review of the literature (a systematic review of the literature) on quality models for e-Learning/b-Learning can be found in a previous work by the authors [15]. It can be seen that the focus of a large number of publications is to address the technical quality of the technology that supports the e-Learning process [10,14,16-25]. The quality of services and support associated with e-Learning systems [11,26-32], learning resources [33,34] and the instructional design of online courses [23,35-37] are also topics of interest, although there is less consensus among researchers, as studies are case-focused and results are not generalizable. As for the training of students and teachers in the skills of using the e-Learning system [31], this seems to be an interesting issue but few authors have addressed it. On the other hand, an important symptom of the weak formalization of quality assessment models in Open Learning is the lack of references to more formal and widespread quality models [11,16,22]. As a result of this systematic study, we detected that more effort is needed in empirical research on this topic and that current research seems to focus on five aspects: technology, instructional design, learning resources, training, and services and support. However, there is no consensus on the characteristics that make a quality Open Learning course.
Furthermore, no single comprehensive quality scheme has been found that contains the five areas and defines meaningful and measurable indicators. There are also some transversal aspects that a quality evaluation system should consider: communication, personalization, teaching innovation, entrepreneurship, linkage with society and collaboration, among others.

Dashboards

A dashboard is a business tool that displays a set of indicators and other relevant information to a business user. The information is usually represented graphically and must include the indicators involved in achieving the business objectives. All organizations need an information system that enables communication of key strategies and objectives and decision making. This is what Eckerson [38] calls the "organizational magnifying glass". This author considers that the dashboard is the organizational magnifying glass that translates the organization's strategy into objectives, metrics, initiatives and tasks. Few [4] considers that "a dashboard is a visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so the information can be monitored at a glance". In short, a dashboard should contain limited, understandable, visual, important and goal-oriented information. The main objective of a dashboard is transforming data into information and turning this into knowledge for the business. More precisely, for Eckerson [38], the goals of a dashboard are:

• Monitor critical business processes and activities using metrics of business performance that trigger alerts when potential problems arise.
• Analyze the root cause of problems by exploring relevant and timely information from multiple perspectives and at various levels of detail.
• Manage people and processes to improve decisions, optimize performance, and steer the organization in the right direction.

Few [4] describes an interesting set of characteristics for dashboards:

• A dashboard is a visual presentation, a combination of text and graphics (diagrams, grids, indicators, maps...), but with an emphasis on graphics. An efficient and attractive graphical presentation can communicate more efficiently and meaningfully than just text.
• A dashboard shows the information needed to achieve specific objectives, so its design requires complex, unstructured and tacit information from various sources. Information is often a set of Key Performance Indicators (KPI), but other information may also be needed.
• A dashboard should fit on a single computer screen, so that everything can be seen at a glance. Scrolling or multiple screens should not be allowed.
• A dashboard presents up-to-date information, so some indicators may require real-time updating, but others may need to be updated with other frequencies.
• In a dashboard, data is abbreviated in the form of summaries or exceptions.
• A dashboard has simple, concise, clear, and intuitive visualization mechanisms with a minimum of unnecessary distractions.
• A dashboard must be customized, so that its information is adapted to different needs.

The information in a dashboard is a set of Key Performance Indicators. These indicators are usually high-level measurements of how well an organization is doing in achieving critical success factors. To determine what KPIs take part in the dashboard, the designer must consider the audience level in the management structure.
Therefore, the details should be removed as the target audience moves up in the management structure, to avoid information overload. Moreover, a typical dashboard is usually made of a few KPIs, including only those that are strictly necessary (typically between 7 and 10). Selecting the KPIs is not an easy task. In most cases this task is manual and specific to a particular case. However, there are some research projects that seek to formalize and automate all or part of the dashboard design process, from selecting KPIs to defining visualization tools. Chowdhary, Palpanas, Pinel, Chen and Wu [39] propose an efficient and effective model-driven dashboard design technique. A model is a formal specification of the function, structure and behavior of a system from a specific point of view, represented by a combination of drawings and text. The models are used as the primary source for selecting the KPIs and designing, constructing and deploying the dashboards. Kintz [40] presented a semantic dashboard description language used in a process-oriented dashboard design methodology, to help overcome known challenges of business process monitoring, such as the difficulty of building appropriate dashboards from complex data sources to best monitor given goals. As regards methodologies for building dashboards, Brath and Peters [41] advise following an iterative model of creating dashboards for better designs. The design iteration and the use of sketches and prototypes help identify the needs and requirements and refine vague design ideas into the best possible solution.

The selection of the KPIs must meet a number of constraints that we have already discussed: they must be directly related to the organization's goals, they must focus on a few key metrics, and they must consider the state of the organization and be adapted to the business model and features. An interesting work is that of Keck and Ross [42], who have investigated solutions to the selection of KPIs through the use of machine learning techniques in the particular case of a call center. In this context of dynamism, they have considered the problem as one of multi-label classification, where the most relevant KPIs are labeled and then selected. Molina-Carmona, Llorens-Largo and Fernández-Martínez [43] propose the use of the values of the model's own indicators to classify them and determine which of them are most suitable to be part of the dashboard, in what is called data-driven indicator classification and selection. This way, decision making takes place at two levels: on the one hand, the values of indicators and their evolution help the dashboard designer to classify and select the KPIs that will be part of this dashboard, and on the other hand, the KPIs themselves will help top management to make their business decisions. Moreover, there is a second derivative of this proposal: the data themselves will report how the different indicators evolve and, therefore, when the KPIs of the dashboard are likely to be replaced by more significant ones. In short, more dynamic dashboards are obtained, which can be adapted to the changing environment conditions. The need to reduce information means that many data and indicators collected at universities do not end up as part of the dashboard. The selection of these KPIs has been the subject of numerous studies that highlight the complexity of this selection.
Therefore, a good option is to try to represent the indicators at various levels, so as not to have to give up any of them in the final visualization, but to obviate them in the overall view. One example of that are the Technological Ecosystem Maps (TEmaps) [44]. A TEmap is a polygonal representation of the main elements of a technological ecosystem [8]. It is divided into levels (levels of abstraction from which to study the ecosystem), facets (basic principles that guide the organization and are transferred to the technological ecosystem) and components (specific aspects that are affected by the technological ecosystem). Each component, at each facet, studied from each level, is evaluated according to its maturity level. To do so, a maturity model is required so that it can be measured how good the element of the ecosystem is to fulfil the required objectives. Each maturity level is represented by a color, so that the TEmap finally takes the form of a heat map.

Methodology

For this research, we propose a methodology based on five major stages and a total of ten intermediate steps (Figure 1). The stages and steps proposed are:

(1) Review: in this stage we have made a deep and systematic study of the literature, which has been presented in the previous section. This stage consists of two steps:
(1) Literature review on the quality of Open Learning courses (Section 2.2)
(2) Literature review on dashboard design (Section 2.3)

(2) Model: The formulation of the model is key in our proposal. This formulation is presented in Section 4 and it is based on the literature review of the quality of the courses of the previous stage. The design of the model has been structured in three steps:
(3) Components design: The first step has been the structuring of the model in components, elements and attributes, inspired by the models found in the literature (Section 4.1).
(4) Indicators design: The design of the indicators is presented in Section 4.2, which establishes the specific indicators that we have considered in our proposal.
(5) Fulfillment levels: The last step of this stage is to establish the fulfillment levels we propose for the aggregation and comparison of the indicators (Section 4.3).

(3) Instruments: Data collection is the objective of this stage (Section 5). It involves three steps:
(6) Collection instruments: in this step the data collection instruments have been designed, based on the indicator design of the previous stage (Section 5.1).
(7) Instruments validation: to ensure that the data collection instruments are useful and valid, a validation by experts has been carried out and is reported in Section 5.2.
(8) Data collection: In this step the data of the specific institution are collected thanks to the application of the data collection instruments (Section 5.3).

(4) Dashboard: the design and construction of the dashboard takes up the fourth stage (Section 6), divided into two steps:
(9) Dashboard design: the dashboard is designed on the basis of the desirable characteristics established in the dashboard literature review, and on the basis of the model design, particularly the fulfillment levels (Section 6.1).
(10) Dashboard construction: finally, thanks to the data collected, it is possible to build the dashboard designed in the previous step (Section 6.2).

(5) Case study: the last step is the analysis and interpretation of the results of the case study (Section 7).
Model

The proposal is a complete model for evaluating the quality of Open Learning courses based on the principles of quality, and it is supported by different theoretical frameworks that allow it to be given a formal structure: process management and the principle of continuous improvement. As a starting point, we defined the following principles for our model:

• It must be supported by previous studies, which is why it is based on a systematic review of the literature.
• It must be integral, trying to include all aspects.
• It must be open, which is why we use an iterative methodology that allows us to include new aspects in the future.
• It must be adaptable, being able to be applied in any e-Learning course with few adaptations.
• It must have a solid theoretical base, such as instructional design theories and process management.

From the systematic review of the literature, we obtain four key aspects for the definition of the model:

• The literature describes five areas on which to study quality and which should appear in the model: learning resources, instructional design, user training and education, service and technology support, and the learning management system (LMS).
• Kirkpatrick's model [23,30], which proposes the evaluation of training through four levels (reaction, learning, transfer and impact), should guide the design of our model.
• The ADDIE instructional design model [36] (Analysis, Design, Development, Implementation and Evaluation), on the use of technology, should be taken into account.
• We must take into account the generic quality models, among them the Total Quality Management (TQM) model [11,37], the Sustainable Environment for the Evaluation of Quality in e-Learning (SEEQUEL) [22] and Benchmarking [16].

Learning as a PROCESS: Components and Elements

A process is a set of mutually related and interacting activities that uses inputs to provide an output [45]. The teaching-learning process supported by technology is a dynamic system that fits this definition, in which the input to the system (the student, with his or her previous knowledge and skills) undergoes a transformation involving different resources (human, technological and methodological) until an output is obtained (the student with new knowledge and skills). It is possible, therefore, to see learning as a process.

This view of learning as a process has some background that is worth noting. For example, Biggs [46] established the so-called 3Ps Model to explain the teaching-learning process, especially from the student's point of view. To this end, he established three components that correspond to three moments in the process: (1) Presage, which characterizes the student and the context in which he or she is learning; (2) Process, which refers to the way in which learning tasks are carried out; and (3) Product, which focuses on learning outcomes. Biggs' proposal has many points in common with ours, although in our case the point of view is not restricted to the student, so the elements involved in the process are extended.

The process (Figure 2), generated through the interaction between the student and the different resources, makes possible the transfer of knowledge from the teachers and the resources to the students. In the output, the student is transformed through a process of knowledge acquisition, where, according to Kirkpatrick, we have four levels of evaluation that we can measure: reaction, learning, knowledge transfer and impact.
Finally, there is the feedback to the process, which includes the results, the levels of satisfaction, the errors, the possible improvements, etc. These improvement options should be included in the next version of the course, with the corresponding modifications.

As a result, we propose a hierarchical quality model, obtained from the previously described principles, key aspects and process. It is based on four levels of abstraction, so that the upper level represents the three major components of the model, and the lower level the indicators. As the levels of abstraction are lowered, the information becomes more concrete and detailed. These levels of abstraction are the following.

The model is divided into three components, according to the nature of the participants in the process in Figure 2. These components are the human agents involved in the process, the resources they use in its development, and the dynamic part of the process, which includes the interactions that occur and the result itself.

The elements make up the second level. They are the concrete elements in the process in Figure 2. Each component in the previous level is divided into three elements. However, the teaching-learning process is itself divided into six sub-elements, which represent the interaction of students and teachers with the elements of the resource component (instructional design, LMS and helpdesk). A summary of the components and elements is presented in Table 1.

The attributes and indicators make up the third and fourth levels, and they should depend on the features of the particular Open Learning courses that are being assessed. They are analyzed in the next section.

Table 1. Components and elements of the model.

Human (Red):
• Student: learning recipient and input to the system.
• Teacher: guides and creates the learning atmosphere using different methods and techniques.
• LMS manager: provides the management and administration services of the learning platform.

Methodology and Technology (Green):
• Instructional design: academic activity devoted to designing and planning resources and learning activities. The instructional design corresponds to the ADDIE model.
• LMS: software platform that manages learning, where resources and activities are located.
• Helpdesk: institutional service offered to students and teachers for the use, management and training of the LMS.

Process (Blue):
• Process: interaction process of students, teachers and managers with each resource. It has 6 sub-processes, the result of combining each element of the human component with each element of the methodological and technological component.
• Result: output of the teaching-learning process, in which a student with knowledge i ends up with knowledge j, where j > i. For evaluation, Kirkpatrick's 4 levels of assessment are taken: reaction, learning, knowledge transfer and impact.
• Feedback: improvement actions that imply a feedback to the system.

Attributes and Indicators

The elements and sub-elements are divided, in turn, into attributes. The attributes represent characteristics of each element and are measurable by means of indicators. The indicators represent specific variables that can be evaluated in terms of reference levels or evaluation standards. A set of 38 attributes and 99 indicators is proposed [15], which is adapted to most of the situations that an Open Learning course can present.
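To make the four levels of abstraction concrete, the sketch below shows one possible in-memory representation of the hierarchy. It is an illustration on our part (the model itself does not prescribe a data structure), and the sample names are taken from Table A1 rather than covering all 38 attributes and 99 indicators.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    fulfillment_level: int = 0   # 1..5, assigned after data collection

@dataclass
class Attribute:
    name: str
    indicators: list = field(default_factory=list)

@dataclass
class Element:
    name: str
    attributes: list = field(default_factory=list)

@dataclass
class Component:
    name: str    # "Human", "Methodology and Technology" or "Process"
    color: str   # red, green or blue, as in Table 1
    elements: list = field(default_factory=list)

# Illustrative fragment of the hierarchy for element X1, Student (see Table A1)
human = Component("Human", "red", elements=[
    Element("Student", attributes=[
        Attribute("Digital skills",
                  indicators=[Indicator("Use of computer tools")]),
        Attribute("LMS training",
                  indicators=[Indicator("Training")]),
    ]),
])
```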
However, following the same methodology, it is possible to adapt the proposal to other cases and create other attributes and indicators more in line with the situation of each institution. For each indicator, the following information should be considered:

• Name, clearly identifying the meaning of the indicator.
• Type, indicating whether its value is quantitative or qualitative.

Fulfillment Levels

An interesting contribution of our model is the definition of the standards in the form of fulfillment levels. We propose five fulfillment levels for each indicator (from level 1 to level 5), regardless of the type of indicator. The levels of the indicator are established as follows:

• If there are associated regulations, the regulations are used to establish the levels. For example, Ecuadorian regulations establish that universities must aspire to have at least 70% of their teaching staff hold a doctorate, so the indicator "% of teaching staff with a doctorate" reaches level 5 if it is greater than 70%, and the rest of the levels are established by dividing the range from 0% to 70% into 4 intervals.
• If there is no regulation that allows the establishment of reference points, the levels are defined by dividing the whole range into 5 parts, when the indicator is quantitative, or they correspond to the 5 levels of a Likert scale, when it is qualitative. For example, for the indicator "% of teachers using the virtual classroom as a means of communication with students", the 5 fulfillment levels are established homogeneously, in steps of 20%. Another example is the qualitative indicator "level of satisfaction of students with the learning experience", for which a Likert scale with 5 values is used, equivalent to the fulfillment levels.

Normalizing the value of the indicators through the fulfillment levels allows for the comparison of indicators and provides a homogeneous scale that has two fundamental advantages:

• The model is hierarchical, so that attributes are evaluated according to their indicators, elements according to attributes and components according to elements. The fulfillment level of a hierarchical layer is the average of the lower layers. However, it is possible to establish a weighted average, so that the weight of each part is different. The determination of the weights is very dependent on each particular case and could be a powerful strategic tool for the institution. In the standard model, though, we have decided to use a uniform weighting.
• The simplification of the scale to 5 values makes it easier to interpret and allows us to establish a color code that will facilitate the graphic representation we are looking for, as we will see in Section 6.
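As an illustration, the two quantitative cases described above can be coded as a single mapping function. This is a sketch on our part (the function name and signature are not part of the model), and qualitative indicators simply take their Likert value as the fulfillment level.

```python
def fulfillment_level(value, vmin=0.0, vmax=100.0, target=None):
    """Map a quantitative indicator value to a fulfillment level (1-5).

    With a regulatory target (e.g. 70% of teaching staff holding a
    doctorate), level 5 is reached at or above the target and the range
    [vmin, target) is split into 4 equal intervals; without a target,
    [vmin, vmax] is split into 5 equal intervals (steps of 20% for a
    percentage indicator).
    """
    if target is not None:
        if value >= target:
            return 5
        step = (target - vmin) / 4.0
        return min(4, int((value - vmin) // step) + 1)
    step = (vmax - vmin) / 5.0
    return min(5, int((value - vmin) // step) + 1)

print(fulfillment_level(75, target=70))  # -> 5 (regulatory target met)
print(fulfillment_level(55, target=70))  # -> 4
print(fulfillment_level(35))             # -> 2 (uniform 20% steps)
```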
Research Instruments

The instruments for collecting the data are mainly two: surveys for students and teachers, and interviews with the directors and managers of the units responsible for the management of the LMS and other technologies supporting the teaching-learning process. In this section we present the design of these instruments, their validation and the data collection carried out with them.

Collection Instruments

The purpose of the data collection instruments is to collect from the different stakeholders the data that will allow the calculation of the indicators. We have considered two types of instruments: surveys and interviews.

In the case of the surveys, we have followed the methodology proposed by Kitchenham and Pfleeger [47], which indicates that these instruments are complex and that a series of well-defined activities must be carried out: establishing the objectives of the survey, designing the survey, developing the questionnaire, evaluating and validating the questionnaire, carrying out the survey by collecting the data, analyzing the data obtained and reporting the results. When designing the instruments, a preliminary version was first made and then validated (Section 5.2). The resulting questionnaires are presented here.

The student survey aims to evaluate the quality of the virtual classroom and the LMS from the students' point of view. The first part contains the informed consent and survey instructions. Participants are informed of the purpose of the research and the voluntary and anonymous nature of the survey. The survey consists of 12 parts and 61 questions, including:

3. Use of the learning platform (1 question)
4. Use of the virtual classroom (9 questions)
5. Use of resources, learning activities, evaluation and collaborative work (6 questions)
6. Quality of the virtual classroom (11 questions)
9. Instructional design (1 question)
10. Teacher training and updating (3 questions)
11. Teaching and learning process supported by the virtual classroom (2 questions)
12. Technological services provided by the institution for the operation of the virtual classroom (5 questions)

As for the teacher survey, it aims to evaluate the quality of the virtual classroom and the LMS, but in this case from the teachers' point of view. Like the other survey, it contains at the beginning the informed consent and the survey instructions. In this case the survey consists of 7 parts and 33 questions, including:

1. Socio-demographic data (2 questions)
2. Technological services provided by the institution for the operation of the virtual classroom (1 question)
8. Recommendations (1 question)

In addition to the surveys, an interview was arranged with the Information Technology Directorate and the Institutional Development Directorate to obtain some data. The interview consisted of 5 parts, in which the following data were collected:

1. Training of LMS managers, collecting data on training hours.
2. Characteristics of the technological infrastructure, in order to know, among other issues, the availability of the platform, the bandwidth, the security policies, the accessibility of the platform, the software update policies and the contingency plans.
3. Training of teachers and students, to know the percentage of teachers and students trained in the use of learning support technologies and training programs.
4. User support, collecting data on resolved incidents and response time.
5. Use of the virtual classroom, to know which teachers and students really use the LMS.

The complete data collection instruments can be consulted in the work of Mejía-Madrid [15].

Instruments Validation and Redesign

In this section we present how the initial data collection instruments were validated to give rise to the final instruments. This validation consisted of an initial pilot test, both for students and teachers, plus a validation questionnaire by experts, in this case only for the teachers' instrument. In the case of the interviews for the directors and managers of the units, no explicit validation was made, since the questions were obtained directly from the model and reviewed by the authors of this article.
As for the student data collection instrument, its validation was carried out with a pilot test with students in the Information Systems subject. The aim of the pilot test was to find possible shortcomings in language, writing, relevance or technical quality. In this way, a validation by students for students is carried out, which we consider indispensable because quality is focused on learning and, therefore, on the student as the main actor of the process. The students made some suggestions regarding form; these were incorporated to improve the research instrument.

The validation of the teacher instrument was carried out in two ways: with a pre-test and with expert validation. The pre-test validation consisted of a pilot study with 17 teachers from the Central University of Ecuador, whose observations were incorporated. Then, the resulting questionnaire was validated by seven experts, among whom were four researchers in educational innovation and technologies for learning and three university managers in the field of educational technologies and academic management. The expert validation was conducted using an instrument provided by the Directorate of Academic Development of the Central University of Ecuador, in which the following aspects were evaluated for each question:

• Relevance: the correspondence between the objectives and the items in the instrument.
• Technical quality and representativeness: the adequacy of the questions to the cultural, social and educational level of the population to which the instrument is directed.
• Language quality and writing: use of appropriate language, writing and spelling, and use of terms known to the respondent.

This instrument was passed on to each expert, who made their respective observations, and subsequently the final version of the survey was constructed. Compiling the results issued by the experts, only a few changes in form, language and wording were requested. In addition, the experts asked for the unification of questions because the survey was extensive. These observations were taken into account for the development of the final version of the survey.

Data Collection

Once the data collection instruments have been defined, it is necessary to carry out this collection. The following are some of the aspects that have been taken into account for this process. With regard to the student and teacher surveys, three key aspects need to be defined for their implementation:

• The population, i.e., the recipients of the survey. In this case the population is made up of the students and teachers of the university analysed, for each of the surveys carried out.
• The chosen sample, that is, who from the entire population will answer the questionnaires. In our case it is a voluntary survey, so it is not possible to define a sample size a priori. This introduces the problem of the possible non-representativeness of the sample, either because of an insufficient size or because it is not a random sample.
• The way to send them the questionnaire: on paper, by e-mail, by means of an online form... All questionnaires are accompanied by instructions indicating the purpose, who is sending the questionnaire, why the recipient of the survey was selected and whether and how the results will be shared.

As for the interview with those responsible for the learning technology system, the selection of the participants is crucial.
This is a key informant interview, that is, the individuals selected are considered unique because of their position or experience. To get the best results from the interview and to collect all the expected data, it is essential to have a well-prepared interview guide.

Dashboard

A dashboard, as already noted, is a business tool that displays a set of indicators and other information needed to make decisions. It is important that it presents in a visual way, at a single glance, the most important data needed to achieve the business objectives. In this section we present our proposal for a dashboard.

Dashboard Design

Decision-oriented representation of results is one of the objectives of this research. To this end, we propose a heat map as a dashboard, in the form of a regular nonagon (because of the nine elements), divided into sectors and concentric rings, in which each color intensity represents the fulfillment level reached by each indicator, each attribute (as the average of the fulfillment levels of its indicators) and each element (as the average of the fulfillment levels of its attributes). An example of this type of display is shown in Figure 3.

The three components (each with an associated color) are represented and their quality expressed as a fulfillment level (with a different intensity of the chosen color). The first, human component (in red) includes students, teachers and LMS managers. The second, methodological and technological resources (in green), includes the instructional design, the LMS and the helpdesk. Finally, the third component is the dynamics of the process (in blue), which includes the process itself, the result and the feedback arising from the interaction between the elements.

In each component, the elements (Xi), attributes (Aj) and indicators (ak) are distributed in three concentric rings. The concentric rings give us information at different levels of abstraction. The closer they are to the center, the more general the information is, and as the rings move away from the center the information becomes more specific. The resulting diagram is a compact and visually very powerful representation, which allows us to easily identify the strengths and weaknesses of the Open Learning course analyzed.

The proposed representation in the form of a heat map can be used as a dashboard since it mostly fulfills the characteristics of Few [4] for a dashboard:

• It is an efficient and attractive visual presentation, combining text and graphics.
• It shows information needed to achieve a specific objective (evaluating the quality of an Open Learning course), and condenses complex, unstructured and tacit information from various sources (the data collection tools). It shows a set of KPIs (the 9 elements), but also other additional information (the attributes and indicators).
• It fits on a single computer screen.
• It allows for updated information if required.
• The information can be considered as an aggregated summary of the whole Open Learning quality assessment.
• A heat map is a simple, concise, clear and intuitive display mechanism.
• It can be customized, showing more or fewer rings depending on the needs.

Dashboard Construction

The construction of the dashboard is a direct process once the data has been collected and the indicators calculated. Specifically, the steps that have been followed for the construction are:

1. Collection of indicator data from surveys and interviews.
2. Calculation of the fulfillment levels of the indicators, based on the value of the indicators and the established standards, as indicated in Section 4.3.
3. Calculation of the fulfillment levels of the attributes, as an average of the fulfillment levels of the indicators of that attribute, with rounding to the nearest integer value.
4. Calculation of the fulfillment levels of the elements (sub-elements), as an average of the fulfillment levels of the attributes of that element (sub-element), with rounding to the nearest integer value.
5. Calculation of the fulfillment levels of the components, as an average of the fulfillment levels of the elements of that component, with rounding to the nearest integer value.
6. Assignment of a color level according to the fulfillment level. To do this:
  • The hue depends on the component to which each element, attribute or indicator belongs: red for the human component, green for the methodological and technological component, blue for the process component.
  • The saturation depends on the fulfillment level. Five levels of saturation are established, distributed in intervals of 20%, between 0% (white, minimum saturation) and 100% (maximum saturation).

The dashboard has been built manually, but it can be easily automated since the calculation procedure is perfectly defined.
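As a hint of what such automation could look like, the sketch below implements steps 3-6 under the uniform weighting of the standard model (step 2 corresponds to the fulfillment-level mapping sketched in Section 4.3). The RGB values chosen for each hue, the half-up rounding, and the mapping of each level to a 20% saturation step are our assumptions for illustration.

```python
BASE_RGB = {"human": (200, 30, 30),       # red hue
            "resources": (30, 150, 30),   # green hue
            "process": (30, 60, 200)}     # blue hue (illustrative picks)

def roll_up(levels, weights=None):
    """Fulfillment level of a layer as the (optionally weighted) average
    of the levels of the layer below, rounded half-up to an integer."""
    if weights is None:
        weights = [1.0] * len(levels)
    avg = sum(w * l for w, l in zip(weights, levels)) / sum(weights)
    return int(avg + 0.5)

def cell_color(component, level):
    """Blend the component hue from white: level 1 -> 20% saturation,
    level 5 -> 100% saturation."""
    r, g, b = BASE_RGB[component]
    s = 0.2 * level
    return tuple(round(255 - s * (255 - c)) for c in (r, g, b))

indicator_levels = [3, 5, 4]                 # one attribute's indicators
attribute_level = roll_up(indicator_levels)  # -> 4
print(cell_color("human", attribute_level))  # RGB for that nonagon sector
```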
Case Study

The application of the model to a case study allows the description to be completed and new conclusions to be drawn. The model has been applied at the Central University of Ecuador (UCE-Universidad Central del Ecuador). In this case, the 3 components and 9 elements of the model have been divided into the 38 attributes and 99 indicators proposed in Section 4.2 and presented in the work of Mejía-Madrid [15]. In order to collect data to calculate the indicators, two surveys have been designed for teachers and students, and a series of interviews have been carried out with those responsible for the LMS. The surveys were conducted with 111 teachers (out of a total of approximately 2300) from the different faculties and 677 students (out of a total of approximately 40,000), and the heads of the university's information technology department were interviewed. Although the voluntary nature of the questionnaire does not make it possible to ensure the absence of bias, the sample can be considered sufficiently large for the population under study.

Based on the data collected, each indicator is assigned its fulfillment level, with values from 1 to 5, and the results are incorporated into the heat map that constitutes the dashboard (Figure 3). In this graphic representation, the elements, attributes and indicators can be seen with different color intensities, depending on the fulfillment level reached. The ultimate goal of the model is to provide a complete picture of the state of the institution and determine what improvement actions can be taken to increase the fulfillment levels for each element. The heat map can be used as a dashboard for the quality of the institution's Open Learning courses.

Without being exhaustive, we present some interesting results of the application in the UCE. In the central ring we have the high-level information concerning the nine elements, grouped in the three main components. In the example of the UCE, the "LMS manager" element within the human component stands out as the element with the highest fulfillment level (level 5), although the "Student" element of the same component also has a notable fulfillment level (level 4).
However, the "Helpdesk" element within the methodology and technology component is the one with the lowest fulfillment level (level 1). Looking only at the innermost ring of the heat map gives us an overview, but can lead to confusion if the attributes and corresponding indicators are not analyzed in detail, especially when the fulfillment level is intermediate. Calculating the fulfillment levels of a ring as an average of those of the immediately outer ring may mean that the intermediate levels of fulfillment are due to very different values that contribute to that calculation. A clear case is that of the "Teacher" element of the human component. Its average fulfillment level is intermediate (level 3), but if we look at the outermost rings we can see that at the level of attributes (central ring) and indicators (outer ring) there are very different values. Thus, attributes A 2.1 and A 2.4 have a minimum level of fulfillment (level 1), while attribute A 2.3 has a maximum level (level 5). It is interesting to note that the attributes with the lowest values are A 2.1 : "Educational level" (due to the low number of PhDs on the staff of the UCE) and A 2.4 : "Teacher contribution to transparency" (since very few teachers publish data on their subjects on the institutional website, one of the indicators for this attribute). However, "Teacher training" is very good, according to attribute A 2.3 , with the highest level. The description of all the attributes and indicators can be consulted in the work of Mejía-Madrid [15] Conclusions In this paper, we presented two main contributions: a model for the evaluation of the quality of Open Learning courses and a visual tool for the representation of the results of the application of the model, useful as a dashboard for decision making. The quality assessment model has a solid theoretical basis from a systematic review of the literature, is comprehensive, open and adaptable. It is made up of 3 components and 9 elements, that are divided into 38 attributes and 99 indicators in the case of the Central University of Ecuador (consult the work of Mejía-Madrid [15] for the complete description of the model), organized in a hierarchical way and whose data are obtained through various data collection tools validated by experts. The greatest contribution is to formalize the data of different types on a single scale formed by five fulfillment levels, which allows for easy comparison. A dashboard, in the form of a heat map is proposed, which can be constructed thanks to the scale of fulfillment levels, is a compact and intuitive representation of the situation of the university with respect to the quality of Open Learning courses. It has several levels of abstraction represented in the rings of the diagram, which allows the information to be analyzed in different detail. Each color value represents a level of fulfillment (between 1 and 5), so that the representation is aggregated, homogeneous and comparable. The tool has an important potential for decision making, so we propose to continue advancing in this line of research in the future. Specifically, we are considering automating the process of obtaining the dashboard, in order to keep it updated in a simpler way. We also propose to develop a systematic diagnostic methodology based on the dashboard, to achieve the automatic definition of improvement actions in the weakest areas aligned with the institution's strategy. Finally, the proposal must be validated by university policy makers. 
The authors have experience in positions of responsibility in university governance and management, but we consider it essential to gather the opinion of other university leaders to understand the usefulness of the proposed model and dashboard.

Table A1. Attributes and indicators for element X1, Student.

• A1,1 Digital skills
  - a1,1,1 Use of computer tools: % of students who regularly use computer tools in their learning activities.
• A1,2 LMS training
  - a1,2,1 Training: % of students who have been trained in the use of the LMS.
Real-time relative permeability prediction using deep learning

A review of the existing two- and three-phase relative permeability correlations shows many pitfalls and restrictions imposed by (a) their assumptions, (b) their limited generalization ability and (c) the difficulty of updating them in real time for different reservoir systems. These increase the uncertainty in its prediction, which is crucial owing to the fact that relative permeability is useful for predicting future reservoir performance, effective mobility, ultimate recovery and injectivity, among others. Laboratory experiments can be time-consuming, complex and expensive, and are done with core samples which in some circumstances may be difficult or impossible to obtain. Deep Neural Networks (DNNs), with their special capability to regularize, generalize and update easily with new data, have been used to predict oil-water relative permeability. The details are presented in this paper. In addition to common parameters influencing relative permeability, Baker and Wyllie parameter combinations were used as input to the network after comparing with other models such as Stone's, Corey, Parker and Honarpour using the Corey and Leverett-Lewis experimental data. The DNN automatically used the best cross validation result (in a five-fold cross validation) for its training until convergence by means of Nesterov-accelerated gradient descent, which also minimizes the cost function. Predictions of non-wetting and wetting-phase relative permeability gave a good match with field data obtained for both the validation and test sets. This technique could be integrated into reservoir simulation studies, save cost, optimize the number of laboratory experiments and further demonstrate machine learning as a promising technique for real-time prediction of reservoir parameters.

Introduction

Relative permeability is the most important property of porous media for carrying out reservoir prognosis in a multiphase situation (Delshad and Pope 1989; Yuqi and Dacun 2004) and therefore needs to be as accurate and readily accessible as possible. Theoretically, it is the ratio of effective to absolute permeability. It is useful for the determination of reservoir productivity, effective mobility, wettability, fluid injection for EOR, late-life depressurization, gas condensate depletion with aquifer influx, injectivity, gas trapping, free water surface, residual fluid saturations and temporary gas storage, among others (Fig. 1, a schematic of the oil-water relative permeability curve). It is well known that a significant variation in relative permeability data can have a huge impact on a macroscopic scale. The oil and gas industries have a need for easily available and reliable relative permeability data, expense reduction on experiments and a more general model for the parameter, judging by the pitfalls pointed out by several researchers (Table 1) after testing the existing two- and three-phase relative permeability models. Workers like Fayers and Matthews (1984) and Juanes et al. (2006), after testing non-wetting relative permeability interpolation models such as Baker and Stone's I and II against the Saraf et al. (1982), Schneider and Owens (1970), Saraf and Fatt (1967) and Corey et al. (1956) experimental data, presented the same conclusion: they give similar results for high oil saturations but diverge as saturation tends towards residual oil saturation. Manjnath and Honarpour (1984) concluded that Corey gives higher values for non-wetting phase relative permeability after comparing against the Donaldson and Dean data.
Based on the assumption that water and gas relative permeability depend only on their own saturation and not on that of other phases, Delshad and Pope (1989) concluded, after a comparative study of seven relative permeability models, that Baker and Pope performed better, but also stated the need for better models. Siddiqui et al. (1999) found Wyllie-Gardner and Honarpour to yield consistently better results at experimental conditions after testing ten relative permeability models. Al-Fattah and Al-Naim (2009) found the Honarpour regression model to be the best after comparing with five other models, and also developed their own regression model. Since the coefficients of these regression models are not generalized, they are not suitable for real-time applications. Furthermore, for wetting phase relative permeability in consolidated media, Li and Horne (2006) showed that the Purcell model best fits the experimental data in the cases they studied, provided the measured capillary pressure curve had the same residual saturation as the relative permeability curve, which is sometimes not the case. Saraf and McCaffery (1985) could not recommend a best model due to the scarcity of three-phase relative permeability data. The different relative permeability correlations have limitations and assumptions which no doubt have implications, thus increasing the uncertainty in reservoir simulation studies; hence the need for a more generalized model. Therefore, the purpose of this study is to implement a Deep Neural Networks model for the prediction of relative permeability, accounting for reservoir depletion, saturation and phase changes with time.

Guler et al. (1999) developed several neural network models for relative permeability considering different parameters that affect the property and selected the best model to make predictions for the test set, while Al-Fattah (2013) used a generalized regression neural network to predict relative permeability. Such networks struggle to predict out-of-sample datasets (generalization), their performance flattens out with a certain amount of data (scalability), and they require far more neurons (and hence an increased computational time) to achieve results comparable to deep learning models. Again, most of the reviewed empirical models can hardly generalize (Du Yuqi et al. 2004) and are static, but deep neural networks (with their advanced features), if appropriately tuned, can capture the transients faster and more accurately throughout the reservoir life, while also getting better as more data become available with time. Training can be done offline, and the trained networks are suitable for on-board generation of decent relative permeability profiles as their computation requires a modest CPU effort, hence not a concern for real-time application.

Methodology

The most commonly available factors influencing relative permeability, such as porosity φ, viscosity µ, permeability k and saturation s, together with Baker and Wyllie parameter combinations, were used as inputs for the network. Baker gave correlation coefficients of 0.96 and 0.86 while Wyllie gave correlation coefficients of 0.91 and 0.89 for the Corey and Leverett-Lewis datasets, respectively (Table 2). There were a total of 12 input parameters fed into the network, as shown in Table 3, after testing the sensitivity of several parameter combinations.
Ten (10) sets of water-oil relative permeability data with 132 data points from a North Sea field were used, with four-fifths as the training set and one-fifth as the validation set. Another set of water-oil relative permeability data from a separate field was used as the testing set after data wrangling and normalization. A seed value was set to ensure the repeatability of the model. An optimised number of hidden layers was used to reduce the need for feature engineering. The best cross validation result in a fivefold arrangement was automatically used to train the DNN models until convergence using Nesterov-accelerated gradient descent (which minimizes their cost function). Rectified Linear Units (ReLUs) were used in the DNN modelling to increase the nonlinearity of the model, significantly reduce the difficulty in learning, improve accuracy and accept noise (Eq. 1):

f(x) = max(0, x + Y)    (1)

where Y ~ N(0, σ(x)) is the Gaussian noise applied to the ReLUs. This allows for effective training of the network on large and complex datasets, making it helpful for real-time applications compared to the commonly used sigmoid function, which becomes difficult to train at some point.

Separate models were constructed for the wetting and non-wetting phases, as this has also been found to improve predictions (Guler et al. 1999). They were then validated and tested to check the generalization and stability of the models for out-of-training-sample applications. The developed Deep Neural Networks model could further be applied to predict other experimental data obtained based on the Buckley and Leverett (1942) frontal advance theory (Fig. 2) and the Welge (1952) method for average water saturation behind the water front, using the saturation history to make predictions of relative permeability as a function of time.

Deep neural networks

A deep neural network (sometimes referred to as a stacked neural network) is a feed-forward, artificial neural network with several layers of hidden units between its inputs and outputs. One hundred hidden layers with twelve neurons each (100, 12) were used in this work. The ability of the model to transfer to a new context and not over-fit to a specific context (generalization) was addressed using cross validation, which is described in detail below. All networks were trained until convergence with Nesterov-accelerated gradient descent, which also minimizes the cost function. In addition, both ℓ1 and ℓ2 regularization were used to add stability and improve the generalization of the model (Eq. 2):

L(θ) = E[loss(f(x; θ), y)] + λ1‖θ‖1 + λ2‖θ‖2²    (2)

where x are the inputs, θ are the parameters, and the penalty terms introduce a measure of complexity for complicated and large parameters through the ℓ1 or ℓ2 norms (preferred to ℓ0 for convexity reasons). This regularization ability was further improved by implementing dropout. A copy of the global model's parameters is trained on its local data at each compute node, asynchronously and with multi-threading, and periodically contributes to the global model through averaging across the network. Deep neural networks are well suited for modelling systems with complex relationships between input and output, which is what is obtainable in natural earth systems. In such cases, with no prior knowledge of the nature of the non-linearity, traditional regression analysis is not adequate (Gardner and Dorling 1998). Deep learning has been successfully applied to real-time speech recognition, computer vision, optimal spacecraft landing, etc.
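As an illustration, the architecture just described can be sketched as follows. The (100, 12) layout, ReLU activations, ℓ1/ℓ2 penalties, dropout and Nesterov-accelerated gradient descent follow the text, while the framework (Keras), learning rate, momentum, dropout rate, penalty strengths and the sigmoid output (chosen because relative permeability lies in [0, 1]) are our assumptions.

```python
import tensorflow as tf

def build_dnn(n_layers=100, n_units=12):
    """Sketch of the (100, 12) fully connected network described above;
    the input dimension (12 parameters) is inferred on the first fit."""
    reg = tf.keras.regularizers.L1L2(l1=1e-5, l2=1e-5)  # l1/l2 penalties
    model = tf.keras.Sequential()
    for _ in range(n_layers):
        model.add(tf.keras.layers.Dense(n_units, activation="relu",
                                        kernel_regularizer=reg))
        model.add(tf.keras.layers.Dropout(0.1))  # dropout regularization
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))  # kr in [0, 1]
    # Nesterov-accelerated gradient descent minimizing the (mse) cost
    opt = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9,
                                  nesterov=True)
    model.compile(optimizer=opt, loss="mse")
    return model
```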
Cross validation

Overfitting, which is the single major problem of prediction when independent datasets are used, was reduced through cross validation by estimating the out-of-sample error rate for the predictive functions built, to ensure generalization. Other issues, like variable selection, the choice of prediction function and parameters, and the comparison of different predictors, were also addressed. A fivefold cross validation technique was used to split the data set into a training and a test set, build a model on the training set, evaluate it on the test set, and then repeat and average the estimated errors. A weight decay was chosen to improve the generalization of the model by suppressing any irrelevant component of the weight vector while solving the learning problem with the smallest vector. This also suppresses some of the effects of static noise on the target if chosen correctly and increases the level of confidence in the prediction (Fig. 3).
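A minimal sketch of the fivefold procedure is given below, assuming the build_dnn() helper from the previous section and using scikit-learn's KFold for the splits; keeping the best-scoring fold model mirrors the "best cross validation result" used for the final training, and the epoch count and seed are illustrative.

```python
import numpy as np
from sklearn.model_selection import KFold

def best_of_five_folds(X, y, build_model, epochs=200, seed=42):
    """Train one model per fold and keep the one with the lowest
    validation error (averaging the five errors gives the usual
    cross validation estimate of the out-of-sample error)."""
    best_model, best_err, errors = None, np.inf, []
    for train_idx, val_idx in KFold(n_splits=5, shuffle=True,
                                    random_state=seed).split(X):
        model = build_model()
        model.fit(X[train_idx], y[train_idx], epochs=epochs, verbose=0)
        err = model.evaluate(X[val_idx], y[val_idx], verbose=0)
        errors.append(err)
        if err < best_err:
            best_model, best_err = model, err
    return best_model, best_err, float(np.mean(errors))
```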
Results and discussion

The deep neural networks model has been validated using separate out-of-sample datasets not used for the training. The good agreement between experimental data and the DNN model's predictions indicates that the complex, transient, non-linear behaviour of reservoir fluids can be effectively modelled as their saturation and phase change with time. Figures 4, 5 and 6 give a comparison between actual experimental values and model predictions using neural networks without cross validation, neural networks with cross validation, and the deep neural networks. The objective here was to see how deep learning outperforms ordinary networks on new data. These cross plots show the extent of agreement between the laboratory and predicted values. For the testing set, drawn from a different field than the training set, the deep neural networks for both the wetting and non-wetting phase relative permeability (Fig. 6b and d) give values very close to the perfect correlation line at all data points compared to the other models. Figure 4a and c, representing neural networks without cross validation, gave RMS values of 0.2484 and 0.0767, while the neural net with cross validation gave RMS values of 0.0624 and 0.0765 (Fig. 5a and c). The deep neural net gave RMS values of 0.2517 and 0.065 (Fig. 6a and c) for the wetting and non-wetting relative permeability, respectively. It is clear that all the models did well for the validation set, although the deep neural networks performed better than the other two models. The different models were then shown new data from a separate field to see how they performed.

For the test set (which is an out-of-sample dataset) obtained from a different field, the RMS for the neural network without cross validation is 0.9996 and 0.8483 (Fig. 4b and d), and 0.2295 and 0.8022 with cross validation (Fig. 5b and d), while the DNNs gave 0.0759 and 0.15 (Fig. 6b and d) for the wetting and non-wetting relative permeability, respectively. The deep learning model used the fourth cross validation model, which happened to be the best for the wetting phase, with a correlation coefficient of about 97% (Table 3) and the lowest training error of 0.0014, while the second cross validation model was used for the non-wetting phase relative permeability, having a 96% correlation coefficient and the lowest training error value of 0.030 (Table 4).

Figures 7 and 8 display the trend comparing the different models using the standard relationship between saturation and relative permeability. The deep learning model clearly outperforms the other models, giving better predictions for both the wetting and non-wetting phases. Measurement error, which causes input values to differ if the same example is presented to the network more than once, is evident in the data. This limits the accuracy of generalization irrespective of the volume of the training set. The deep neural networks model captures the fundamental pattern of the data and is thus able to give more reasonable predictions than ordinary networks and empirical models (Figs. 9, 10). The curves show that significant changes in the saturation of the other phases have a large effect on the wetting phase's ability to flow, as observed from the lesser flattening of the water relative permeability curve, and vice versa for the flattened curve. This flattening behaviour is usual in the secondary drainage and imbibition cycles, but mainly in the wetting phase when flow is mainly through small pore networks. Again, the curve flattening of the oil relative permeability curve could, from experience, be due to brine sensitivity and high rates causing particle movements resulting in formation damage.

Figures 9 and 10 compare the deep neural network model with commonly used empirical relative permeability models like Baker, Wyllie, Honarpour, Stone's, Corey and Parker. Despite the fact that some of these models were developed using far more data than the amount used for training the deep neural networks, it still outperformed them, showing that it is better able to capture the transients and eddies in real-time scenarios due to its ability to regularize and generalize using its robust parameters, as discussed earlier.

Fig. 4 Actual vs predicted values for neural networks without cross validation (cross validation not considered as part of the model formulation): (a) wetting phase relative permeability for the validation set, (b) wetting phase relative permeability for the test set, (c) non-wetting relative permeability for the validation set, (d) non-wetting relative permeability for the test set.
Figures 11 and 12 corroborate the earlier observation that the deep learning model predicts better than most of the relative permeability models used in reservoir modelling software. It is important to note here that the empirical models (Figs. 9, 10) have a problem of generalization, especially as every reservoir is unique. Again, the assumptions associated with their formulation might not be practically true in all cases, but this reservoir uniqueness, or generalization, is captured by the deep learning model, bearing in mind that it will perform even better as more real-time data are added to the training set.

Figures 13 and 14 describe the relative importance (sensitivity) of the variables used for the wetting and non-wetting deep learning relative permeability models. The wetting phase model was more sensitive to its own saturation and relatively less sensitive to that of the non-wetting phase, while the non-wetting phase model was very sensitive to both its own saturation and that of the wetting phase. Both models were also more sensitive to their own viscosities than to the other's. These models seem to obey the basic physics underlying relative permeability modelling. The least important variable still contributed above the median mark, although, in general, all variables show greater sensitivity in the non-wetting model than in the wetting relative permeability model. Table 5 shows the performance of the different variable combinations for both the wetting and non-wetting phase models.

Fig. 5 Actual vs predicted values for neural networks with cross validation (the cross validation technique used for the model formulation improved the prediction ability of the network): (a) wetting phase relative permeability for the validation set, (b) wetting phase relative permeability for the test set, (c) non-wetting relative permeability for the validation set, (d) non-wetting relative permeability for the test set.

Fig. 6 Actual vs predicted values for the deep neural networks model: (a) wetting phase relative permeability for the validation set, (b) wetting phase relative permeability for the test set, (c) non-wetting relative permeability for the validation set, (d) non-wetting relative permeability for the test set.

Conclusion

A deep neural network methodology has been formulated for wetting and non-wetting phase relative permeability predictions, taking into account phase and saturation changes, hence its capability for real-time applications. This work has the following conclusions:

1. The deep neural network has been shown to be a better predictive and prescriptive tool for relative permeability than ordinary networks. Its ability to generalize and regularize helped to stabilize the model and reduce the main problem of all predictive tools, which is overfitting.
2. Different results were obtained from different relative permeability models for the same reservoir, with some of the models giving better predictions at lower saturations but performing poorly at higher saturations, and vice versa; hence, a lot of uncertainty. Therefore, practitioners need to know the limitations of any correlation used for the prediction of wetting and non-wetting phase relative permeability.
3. In an industry where big data is now available, deep learning can provide the platform to systematically forecast reservoir fluid and rock properties and to drastically optimize the cost and time needed for laboratory experiments.
Even with the amount of data used, the power of the deep neural networks is evident in that they gave reasonable predictions, which will dramatically improve if more data become available.
Squalenoyl siRNA PMP22 nanoparticles are effective in treating mouse models of Charcot-Marie-Tooth disease type 1A

Charcot-Marie-Tooth disease type 1A (CMT1A) lacks an effective treatment. We provide a therapy for CMT1A, based on siRNA conjugated to squalene nanoparticles (siRNA PMP22-SQ NPs). Their administration resulted in normalization of Pmp22 protein levels and restored locomotor activity and electrophysiological parameters in two transgenic CMT1A mouse models with different severity of the disease. Pathological studies demonstrated the regeneration of myelinated axons and myelin compaction, one major step in restoring the function of myelin sheaths. The normalization of sciatic nerve Krox20, Sox10 and neurofilament levels reflected the regeneration of both myelin and axons. Importantly, the positive effects of siRNA PMP22-SQ NPs lasted for three weeks, and their renewed administration resulted in full functional recovery. Beyond CMT1A, our findings can be considered as a potent therapeutic strategy for inherited peripheral neuropathies. They provide the proof of concept for a new precision medicine based on the normalization of disease gene expression by siRNA.

Charcot-Marie-Tooth (CMT) diseases are the group of inherited neuropathies caused by chromosomal rearrangements and mutations 1,2. Demyelinating CMT1A occurs in the first and second decades of life and represents around 40-60% of all CMT cases 3,4. It is caused by a duplication in chromosome 17p11.2, leading to the overexpression of Pmp22, a 22-kDa hydrophobic transmembrane protein produced by Schwann cells and representing 2-5% of peripheral myelin proteins 1. Electrophysiological studies demonstrate reduced nerve conduction velocity (NCV) (below 38 m/s) and decreased compound muscle action potential (CMAP) 5. Histopathological studies usually show onion bulbs, resulting from repetitive episodes of axon demyelination followed by remyelination 6. To date, there has been no effective treatment for CMT1A 7. Existing therapies aim to reduce its progression through rehabilitation and surgical corrections. Most of the therapeutic interventions include modulators of adenylate cyclase activity, such as ascorbic acid 8 and the combination therapy PXT 3003 9, neurotrophin-3 10, an antagonist of the progesterone receptor (Onapristone) 11 and pain-modulating drugs (ADX71441 12 and FLX-787 13). However, these molecules did not reach clinics due to their inefficiency or toxicity in clinical trials 7. More recently, genetic therapy using antisense oligonucleotides (ASO) was introduced by Zhao et al. in CMT1A animal models and showed promising results 14.

Here, we took on the challenge of using siRNA to reverse CMT1A disease phenotypes in two transgenic mouse models. Due to their mechanism of action, siRNA offer important advantages, in particular their high degree of safety, as they inhibit gene expression at a post-transcriptional level and do not directly interact with DNA, their high efficacy in suppressing gene expression, and their specificity determined by complementary base pairing 15. However, although inhibiting the expression of a culprit disease gene by siRNA has been successfully demonstrated, the normalization of an overexpressed dosage-sensitive gene, such as PMP22 in CMT1A, has never been considered. Such a therapeutic strategy faces multiple challenges, in particular the requirement for reaching normal levels of Pmp22, as either too high or too low levels of the protein result in peripheral neuropathy.
Whereas overexpression of Pmp22 causes CMT1A, its inhibition below normal levels results in hereditary neuropathy with liability to pressure palsies (HNPP) 16. Due to their hydrophilicity and short plasma half-life, a major obstacle for siRNA therapy is their delivery to target cells. To overcome these limitations, several viral and non-viral vectors were developed. Viral vectors may show cyto- and geno-toxic adverse effects and are rather difficult to obtain, which limits their clinical applicability 17. Concerning non-viral vectors, encapsulation using cationic lipids and polymers has been successful in increasing the efficacy and safety of siRNA therapeutics 18. This was highlighted by the FDA approval of the first siRNA, Patisiran, for the treatment of transthyretin-mediated amyloidosis. Currently, the tendency is oriented toward the chemical conjugation of siRNA to different molecules 15. A major advance has been the discovery that bioconjugates resulting from the chemical linkage of small molecules to squalene (SQ), a natural and biocompatible lipid, could self-assemble as nanoparticles (NPs), offering protection and improving the pharmacological efficacy of drugs for the treatment of a variety of diseases, including cancer, neurological disorders and pain alleviation [19][20][21]. Noteworthy, the synthesis and preparation of drug-SQ NPs is easy and represents a flexible platform for drug delivery (for detailed information, see review 15). This technology has already been successfully applied to siRNA for silencing oncogenes in prostate cancer and thyroid papillary carcinoma [22][23][24]. Here, we show that in preclinical models of CMT1A, the dosed administration of siRNA PMP22-SQ NPs normalizes Pmp22 levels, improves motor and neuromuscular activities, restores electrophysiological endpoints and triggers the remyelination and regeneration of axons. These results open a new avenue for the use of siRNA in the treatment of CMT1A and other diseases caused by unbalanced chromosomal rearrangements and gene copy-number variations.

Results

Construction of siRNAs PMP22 and efficacy testing in MSC80 cells. To initiate this study, the common PMP22 mRNA sequences of Homo sapiens (Supplementary Fig. 1a) and Mus musculus (Supplementary Fig. 1b) were determined by BLASTN (Supplementary Table 1a). Eight different siRNAs against PMP22 were designed according to the Tafer and Reynolds method (Supplementary Table 1b) 25. To investigate their inhibitory effect on PMP22, the siRNAs PMP22 (named siPMP1, siPMP2, siPMP3, siPMP4, siPMP5, siPMP6, siPMP7 and siPMP8) and a siRNA control (siRNA Ct), a commercial scramble sequence presenting no homology with any known eukaryotic gene, were transfected into MSC80 mouse Schwann cells. An optimal siRNA candidate should inhibit between 30 and 50% of PMP22 expression and restore normal levels of Pmp22 protein, which are increased by 1.5- to 2-fold in CMT1A patients, as mentioned by Svaren et al. 26. In addition, the selected siRNA should exert long-lasting effects without affecting the expression of myelin protein zero (P0), which ensures cohesion between the spiral turns of the Schwann cell plasma membrane during myelin formation 27. Notably, an inhibition of PMP22 greater than 70% could be expected to result in the development of HNPP 28. Concerning P0, a modification of its expression has been described to trigger the development of CMT1B 27,29.
Of the eight siRNAs PMP22 tested, siPMP7 met all the above-listed criteria: it inhibited PMP22 gene expression and protein levels in a long-lasting manner by about 50% without affecting P0 expression (Supplementary Fig. 2a-d). This siRNA PMP22 targeted a region close to the 3′-UTR of PMP22 mRNA. Since the untranslated region is generally conserved after transcription, this could favor an efficient inhibition of gene expression. Then, we selected the optimal concentration of siPMP7 (50 nM) that showed no significant effect on P0 and on cell viability, and this condition was used for further experiments and is referred to as siRNA PMP22 for the rest of the study (Supplementary Fig. 2e).

Naked siRNA PMP22 is not efficient in inhibiting PMP22 in vivo. To check whether siRNA PMP22 works in vivo in the absence of nanoparticle protection, we administered 2.5 mg/kg of siRNA PMP22, divided into five intravenous (i.v.) injections of 0.5 mg/kg each, twice per week, to transgenic JP18 mice that carry one extra copy of the PMP22 gene. Results showed no significant differences between siRNA PMP22 treated and untreated mice for locomotion and muscular strength (Supplementary Fig. 3a and Video 1). Moreover, molecular analysis revealed no inhibition of Pmp22 protein expression (Supplementary Fig. 3b).

Synthesis of siRNA Control-SQ (siRNA Ct-SQ) and siRNA PMP22-SQ nanoparticles (NPs) resulted in hydrophobic and stable NPs. As the naked siRNA PMP22 had no effect in vivo, we decided to conjugate it to SQ, a safe and biocompatible endogenous triterpene with the ability to form NPs in H2O. We conjugated both siRNA PMP22 and siRNA Ct to SQ through a covalent link, taking advantage of "copper-free click chemistry" 30. To this aim, the sense strand was modified by a dibenzocyclooctyne (DBCO) residue at the 5′-end of the siRNA. To avoid steric hindrance, a C6 linker was used. SQ was modified by a terminal azide group (SQ-N3) to react with the DBCO residue of the sense strand siRNAs. A quasi-quantitative yield of siRNA PMP22-SQ and siRNA control (Ct)-SQ was obtained during the bioconjugation step. The bioconjugated siRNAs-SQ were more hydrophobic (Supplementary Fig. 4a) and showed a major peak (see MALDI-TOF MS spectrum in Supplementary Fig. 4b). The resulting NPs were stable over a period of 1 month at 4°C (Supplementary Fig. 4c). They were spherical in shape, with a mean diameter of about 180 nm (Supplementary Fig. 4c, d) and a polydispersity index (PDI) of 0.2 ± 0.02 for siRNA PMP22-SQ NPs, and with a mean diameter of 255 nm and a PDI of 0.15 ± 0.02 for siRNA Ct-SQ NPs (Supplementary Fig. 4c). Importantly, SQ conjugation did not affect siRNA efficacy. Indeed, siRNA PMP22-SQ NPs still downregulated PMP22 mRNA expression in MSC80 cells similarly to the naked siRNA PMP22 until 72 h, without affecting P0 gene expression and cell viability (Supplementary Fig. 5a, b). These results are in accordance with other studies showing that chemical modifications of siRNA improved their stability without affecting their performance 30,31. Thus, siRNA squalenoylation by copper-free click chemistry could be used as a platform for siRNA delivery. The synthesized siRNA PMP22-SQ NPs were active in vitro without displaying cytotoxicity.

JP18 and JP18/JY13 are representative models of CMT1A pathology.
After in vitro validation of siRNA PMP22-SQ NPs, their therapeutic efficacy was tested in two transgenic mouse models of CMT1A, named JP18 and JP18/JY13, carrying respectively one and two extra copies of the PMP22 gene and developed on a B6 background 32. From birth, the PMP22 gene is overexpressed in both strains, leading to dysmyelination in embryonic life followed by demyelination in about 26% of myelinated nerve fibers in adult mice 33. These observations parallel the findings in human CMT1A patients: dysmyelination was detected in young patients with CMT1A without affecting the NCV, whereas demyelination was detected in adult CMT1A patients 34. Therefore, we believe that these models are representative of CMT1A neuropathy and are good candidates for gene therapy studies 7. Consistently, a 1.8-fold increase in Pmp22 protein levels was observed in the JP18 group and a threefold increase in the JP18/JY13 group when compared to wild-type B6 (WT) mice (Fig. 1a and Supplementary Fig. 6a).

First, we examined CMT1A disease symptoms. A significant reduction in motor activity and weakness in muscular strength were observed in both strains compared with the WT mice (p < 0.001). The beam walking test showed a significant difference between the JP18 and JP18/JY13 strains (p < 0.05). However, this significance was not observed in the automated locotronic test, probably because the beam was harder to cross than the locotronic ladder (Fig. 1b and Supplementary Video 3). Concerning the electrophysiological endpoints, CMAP and sensory NCV were decreased in both transgenic strains when compared to the WT mice (p < 0.001) (Fig. 1c). Interestingly, the NCV values were comparable to those of patients with CMT1A pathology (<38 m/s) 35. However, significant differences between the two transgenic mouse strains were observed (p < 0.001 for CMAP and p < 0.05 for NCV), with JP18/JY13 mice being more affected by the neuropathy.

Results of the functional tests were supported by histological observations. Pathological hallmarks of CMT1A are alterations of the myelin sheaths and demyelination 35. The number of myelinated fibers counted on semi-thin sections differed between all groups based on a quadratic regression analysis. The cutoff between small and large fibers was determined by fitting a quadratic statistical model for each group (Supplementary Table 2). Although statistical comparisons did not reach significance, a tendency towards a decrease in the number of large fibers was observed in the JP18/JY13 mice (Fig. 1d). The ratio of the inner axonal diameter to the total outer diameter (g-ratio) is a widely used measure of axonal myelination. Histological analysis of ultrathin sections showed a significant increase in the g-ratios for both transgenic mouse strains when compared to the WT (p < 0.01 for JP18 B6 and p < 0.001 for JP18/JY13 B6), reflecting myelin alterations consistent with a model of a demyelinating neuropathy (Fig. 1e). Notably, myelin sheaths were significantly narrower in JP18/JY13 mice when compared with JP18 mice (p < 0.001), highlighting the higher severity of the pathology in the JP18/JY13 strain. Moreover, the periodic spaces between thick dense lines were enlarged in both transgenic animal models (Fig. 1e) and paralleled the g-ratio increase. Overall, these data confirmed that the mouse models used showed disease markers remarkably comparable to CMT1A patients, which were dependent on the PMP22 expression rate 32,36. CMT1A patients exhibit severe, moderate or no signs of the disease depending on genetic and environmental factors 37.
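For reference, the g-ratio used above reduces to a one-line computation per fiber; the sketch below is purely illustrative (the example diameters are hypothetical, not measurements from this study).

```python
def g_ratio(axon_diameter_um, fiber_diameter_um):
    """g-ratio = inner axonal diameter / total outer (fiber) diameter;
    thinner myelin pushes the ratio toward 1."""
    return axon_diameter_um / fiber_diameter_um

print(round(g_ratio(6.0, 9.0), 2))  # -> 0.67 for a hypothetical fiber
```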
CMT1A patients exhibit severe, moderate or no signs of the disease depending on genetic and environmental factors 37. siRNA PMP22-SQ NPs restore the functional and electrophysiological activity of JP18 and JP18/JY13 mice. Age-matched JP18 and JP18/JY13 mice were treated with five consecutive i.v. injections of siRNA PMP22-SQ NPs at the dose of 0.5 mg/kg twice per week (treatment duration of 20 days, cumulative dose of 2.5 mg/kg). The dose of the siRNA PMP22-SQ NPs was calculated based on in vitro studies and previous studies done in our lab on prostate and thyroid carcinoma. The schedule of treatment was also chosen based on previous data from our research on cancer models 22,23,30. Mice treated with siRNA PMP22-SQ NPs showed restoration and normalization of motor activity and muscular strength (Fig. 2a, b and Supplementary Videos 3 and 4). Importantly, the siRNA Ct-SQ NPs had no effect on the same parameters. Moreover, treatment with siRNA PMP22-SQ NPs of JP18/JY13 mice displaying a very severe disease phenotype and paraplegia resulted in a significant improvement in motor activity (Supplementary Video 5). Notably, upon treatment with siRNA PMP22-SQ NPs, the electrophysiological endpoints CMAP and sensory NCV were restored in both CMT1A mouse models and became even comparable to the WT group (Fig. 2c). Taken together, these data demonstrate that siRNA PMP22-SQ NPs induce remarkable recovery from CMT1A neuropathy, with the improvement of motor and electrophysiological parameters reaching levels observed in WT mice. siRNA PMP22-SQ NPs normalize Pmp22 expression and promote myelin and axon regeneration. We further investigated molecular markers of the functional recovery observed after siRNA PMP22-SQ NPs treatment. First, Western blot analysis showed that Pmp22 levels were normalized after treatment with siRNA PMP22-SQ NPs (respectively for JP18 and JP18/JY13, Fig. 3a, b and Supplementary Fig. 6b, c). Moreover, the protein levels of the major myelination proteins Myelin Protein Zero (P0) and Myelin Basic Protein (MBP) were not affected, indicating a specific inhibition by siRNA PMP22-SQ NPs without off-target gene effects (Supplementary Fig. 7a-f). Any change in P0 levels may lead to another demyelinating neuropathy, CMT1B 27. This is in accordance with a previous study showing that the myelin proteins were not affected by siRNA treatment in Trembler-J mice 38. Since CMT1A is a demyelinating neuropathy, we also investigated the expression of two transcription factors, SOX10 and KROX20 (EGR2), both implicated in Schwann cell development and myelination 39. Indeed, the PMP22 gene harbors an intronic site that is strongly activated by SOX10 and KROX20 and involved in the regulation of PMP22 expression 40. A significant decrease in the levels of these transcription factors was observed in both transgenic mouse models when compared to WT mice (p < 0.001 for SOX10 and KROX20 in JP18 mice; p < 0.001 for SOX10 in JP18/JY13 mice; p < 0.01 for KROX20 in JP18/JY13 mice). Upon treatment with siRNA PMP22-SQ NPs, levels of KROX20 and SOX10 statistically increased in the JP18 mice without reaching the WT level (p < 0.05), whereas in the JP18/JY13 mice their levels became comparable to WT mice (Supplementary Fig. 8b, c, respectively, for JP18 and JP18/JY13). A clinically highly relevant finding was that upon treatment with siRNA PMP22-SQ NPs, expression of the heavy 200-kDa neurofilament (NF-H), a marker of mature large-diameter axons, was restored to WT levels (Fig. 3c, d and Supplementary Fig. 8a).
A decrease in NF-H levels was observed in both transgenic mouse models (p < 0.001), and they increased upon siRNA nanoparticle treatment to levels observed in WT mice (right panels of Fig. 3c, d and Supplementary Fig. 8b, c, respectively, for JP18 and JP18/JY13). NF-H, in addition to providing structural support, plays a key role in axonal functions and nerve conduction 41. Low levels of NF-H may contribute to axonal dysfunction and myelin abnormalities and are consistent with the decreased CMAP and sensory NCV values measured in JP18 and JP18/JY13 mice. These data suggest that normalization of PMP22 expression by siRNA PMP22-SQ NPs improves functional outcomes through both myelin and axon recovery. Furthermore, measures of CMAP and sensory NCV, reflecting neurophysiological recovery, may be considered biomarkers of treatment efficacy. We thus histologically examined myelin and axon morphology in the sciatic nerves of siRNA-SQ NPs-treated JP18 and JP18/JY13 mice. The structure and density of the myelinated axons were first assessed on thionine blue-stained semi-thin sections from sciatic nerves. For either JP18 or JP18/JY13 mice, there were no significant differences in fiber counts between the siRNA PMP22-SQ NPs treatment group and the control groups (WT, dextrose or siRNA Ct-SQ NPs) (Supplementary Fig. 9a, b). However, for both JP18 and JP18/JY13 mice, elevated numbers of small myelinated fibers were counted in the mice treated with siRNA PMP22-SQ NPs. This may reflect decreased axonal diameter, axonal regeneration or compensatory axon sprouting, as has been described for CMT 42,43. Electron microscopic analysis of sciatic nerve sections provided further strong evidence for the therapeutic efficacy of siRNA PMP22-SQ NPs. In JP18 mice, nerve ultrastructure was less disturbed than in JP18/JY13 mice, which carry a higher PMP22 copy number and show a more severe disease phenotype (Fig. 4). There were no differences in g-ratios between the different JP18 treatment groups (Fig. 4a). The distances between individual myelin layers (interperiodic distances), representing the preservation of the myelin lamellar structure, followed the same pattern as the g-ratio results (Fig. 4b). The small changes in nerve fiber morphology did not reveal a structuring effect of siRNA PMP22-SQ NPs treatment in JP18 mice. On the contrary, alterations in myelin morphology were more marked in JP18/JY13 mice, with a significant increase in g-ratios, reflecting decreased myelin thickness, accompanied by a widening of the interperiodic distances (p < 0.001) (Fig. 4c, d). Knowing that the regulation of tight junctions and transmembrane adhesions are important functions of Pmp22, its abnormal expression is likely to result in alterations of myelin structure 44. Treatment with siRNA PMP22-SQ NPs ameliorated both morphological parameters in the JP18/JY13 mice (i.e., g-ratio and interperiodic distances); however, the g-ratio did not reach WT levels, most likely because of the short treatment duration (Fig. 4c). Interestingly, the interperiodic distances reached WT levels after siRNA PMP22-SQ NPs treatment (Fig. 4d). This translated into an improvement of nerve tissue architecture, with large axons surrounded by regular and compact myelin, attesting to the functional recovery. siRNA-SQ NPs penetrate the sciatic nerve. The examination of longitudinal nerve sections by electron microscopy revealed vesicles surrounded by a lipid layer with a size ranging from 160 to 210 nm, similar to the size of the siRNA-SQ NPs (Fig. 5).
In contrast to transverse sections, it is possible to discriminate between these vesicles and neurotransmitter vesicles on longitudinal sections. Moreover, they could be easily differentiated from mitochondria, in which the crescent-shaped interior could be seen (Fig. 5). Importantly, the small vesicles were only observed in nerve sections sampled from mice treated with siRNA-SQ NPs. This allowed us to hypothesize that, after i.v. injection, the siRNA-SQ NPs interact with low-density lipoprotein (LDL) molecules present in the blood stream 45 and are transported inside the Schwann cells. siRNA PMP22-SQ NPs treatment results in a long-lasting effect over 3 weeks. After demonstrating that dosed siRNA PMP22-SQ NPs treatment restored peripheral nerve functions and structure, and especially promoted locomotor recovery and muscle strength with a remarkable efficacy in the two mouse models of moderate and severe CMT1A, we investigated the long-lasting efficacy of this precision therapy. As in the previous experiments, JP18/JY13 mice aged 12 weeks were treated with either 5% dextrose, siRNA Ct-SQ NPs or siRNA PMP22-SQ NPs (2.5 mg/kg cumulative dose divided into five injections at 3-day intervals) and compared to WTs. They were followed until the relapse period and then reinjected until functional recovery (Supplementary Fig. 10, Step 5). When the cumulative dose of siRNA PMP22-SQ NPs reached 1.5 mg/kg (after three injections), locomotor activity and muscle strength were already restored (Fig. 6a, b). After the last injection (5th injection), the positive effects of the siRNA PMP22-SQ NPs lasted for a period of 3 weeks. A second cycle of treatment was then started, and again full locomotor recovery and muscle strength were reached at the cumulative dose of 1.5 mg/kg (Fig. 6a, b). The two treatment cycles showed similar efficacy and did not affect body or organ weights, liver enzymes (AST and ALT) or kidney function (plasma creatinine and albumin) (Supplementary Table 3). The toxic effects of siRNA PMP22-SQ NPs were investigated in these two main excretory organs, which showed high accumulation of siRNA-SQ NPs in our previous study 30. In addition, since SQ is a precursor of cholesterol 46, the levels of different forms of plasma cholesterol were studied, and we found no significant change in their levels between the treated groups (Supplementary Table 3). Observation of TEM images showed no toxicity of the siRNA-SQ NPs on the sciatic nerve, represented by the absence of Schwann cell damage, no alterations in the number of mitochondria and no modifications in the intracellular organelles. Fig. 1 JP18 B6 and JP18/JY13 B6 mice are representative models for CMT1A pathology. a Representative western blot gel showing the Pmp22 protein content in sciatic nerve normalized to tubulin and then reported relative to WT, and the corresponding protein quantification. b Time taken by the mice to perform the beam walking and the locotronic test, and the force of their total limbs in grams. c Analysis of the electrophysiological tests CMAP and NCV. d Semi-thin sections of sciatic nerves from WT B6, JP18 B6 and JP18/JY13 B6 mice scanned at 40×. Scale bar 20 µm. Fiber count quantification was done using ImageJ software at 70×. To determine the cutoff between small and large fibers, a quadratic model analysis was performed. The cutoff is 7.4 with a 95% CI at [6.95-7.85] for WT (blue line), 7.20 with CI [6.39-7.99] for JP18 (dark dashed line) and 6.35 with CI [5.54-7.10] for JP18/JY13 (pink line). e Transmission electron microscopy (TEM) images (5 kx) of ultrathin sciatic nerve sections of the tested mice groups and the g-ratio analysis of myelinated fibers. Scale bar 2 µm. High-magnification images (120 kx) were taken to show the inter-myelin layer distance, which is represented by yellow lines for WT and red lines for the JP18 and JP18/JY13 groups, followed by the interperiodic analysis graph. Scale bar 50 nm. All the experiments were done on six mice per group. Data represent predicted means with 95% confidence intervals for layer distance, while mean ± s.e.m. was represented elsewhere. *p < 0.05; **p < 0.01; ***p < 0.001 using ANOVA analysis followed by Tukey's multiple comparison tests. Discussion The recent approval of the first-ever siRNA-based drug for the treatment of a neurological disease, hereditary transthyretin-mediated amyloidosis, has paved the way for a new therapeutic approach based on post-transcriptional disease gene silencing 47,48. The therapeutic siRNA exerts its beneficial effects by inhibiting hepatic transthyretin production. The liver is indeed the main source of circulating transthyretin, and accumulation of the mutant protein in peripheral nerves forms amyloid deposits leading to rapidly progressing polyneuropathy. The present study broadens the therapeutic use of siRNA for diseases of the nervous system. We used a hydrophobic siRNA-SQ bioconjugate forming NPs that allowed us to target a disease gene within the nervous system. Importantly, we provide proof of concept that the dosed administration of nanoparticle-stabilized siRNA allows the normalization of the expression of a dosage-sensitive gene. We show that the dosed administration of siRNA PMP22-SQ NPs normalizes the expression of Schwann cell-specific PMP22 in two preclinical CMT1A mouse models, without an off-target effect on P0 or MBP, re-establishes the electrophysiological activity of both motor and sensory nerves, and results in the rapid recovery of locomotor functions and muscular strength of the limbs. Synthetic ASOs against PMP22 have also been developed recently, with encouraging results in CMT1A animal models 14. However, barriers to their use are off-target actions and toxic side-effects during long-term treatment 49. The siRNA-SQ NPs seem to have fewer off-target actions after long-term treatment, as suggested by the absence of phenotypic organ toxicity and unaffected biochemical markers. Nevertheless, a deeper toxicological study should be performed. The only study using siRNA specific for PMP22 was initiated in the Trembler-J mouse mutant 38, in which the Leu16Pro mutation is responsible for the development of CMT1E. In that study, intraperitoneal injection into postnatal day 6 Tr-J mice resulted in positive outcomes. However, the authors did not investigate the treatment effect in adult mice, nor siRNA administration via the i.v. route instead of intraperitoneal injection. In addition, our data are consistent with two recent studies showing that inhibiting PMP22 expression can ameliorate the neuropathological symptoms caused by PMP22 overexpression 50,51. In contrast with our study, where siRNA was injected via the i.v. route, both studies used intraneural injections for oligonucleotide delivery, a route difficult to apply in human treatment. In our study, the functional recovery was accompanied by the increased expression of neuronal (NF-H) and glial (SOX10, KROX20) markers and by the structural improvement of nerve fibers with the concurrent normalization of g-ratios.
These results were remarkable as PMP22 expression needs to be fine-tuned: its overexpression results in CMT1A, whereas too-low levels lead to HNPP. Notably, there was no immediate decline in the pharmacological efficacy of siRNA PMP22-SQ NPs, as effects lasted for at least 3 weeks after ending the treatment. Moreover, after interruption, the treatment could be initiated again with success. Therefore, treatment with siRNA PMP22-SQ NPs represents a potent and promising therapy for CMT1A patients. The normalization of gene expression by specific siRNA conjugated to NPs also offers new perspectives for the treatment of a wide range of diseases, in particular nervous system disorders, caused by unbalanced genomic arrangements resulting in copy-number variations 52,53. Possible applications of this therapeutic approach could go beyond the treatment of genetic diseases and may be extended to the normalization of gene expression altered by environmental factors, lifestyles and age-related disorders. Methods Screening of siRNA against PMP22. To restore the basal levels of PMP22 gene expression, we first obtained the PMP22 mRNA sequence for both Homo sapiens and Mus musculus, then defined the region of homology between these two sequences (Supplementary Table 1a). Different siRNAs were designed and their specificity was checked by BLAST. The siRNAs were scored using three different methods (Tafer software, Thermofisher software and the Reynolds method). Reynolds scores were calculated respecting different criteria 25. Eight siRNAs against PMP22 (Supplementary Table 1b) were chosen to continue the study, in addition to a siRNA control (siRNA Ct, a scrambled sequence). siRNAs and chemical modifications. The designed sequences of sense and antisense siRNA strands were purchased from Eurogentec, France. siRNAs were characterized by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS), then purified by Reverse Phase High-Performance Liquid Chromatography (RP-HPLC). Single-stranded RNAs were synthesized as 19-mers with two 3′-overhanging 2′-deoxynucleotide residues to increase their stability, as described by Tuschl et al. 54. To allow conjugation with SQ, a DBCO reactive group was introduced at the 5′-end of the sense strand of each siRNA sequence through a N-(hexamethylenyl)-6-oxohexanamide spacer (C6). To generate siRNA from RNA single strands, equimolar amounts of both sense and antisense strands were annealed in annealing buffer [30 mM HEPES-KOH (pH 7.4), 2 mM Mg acetate, 100 mM K acetate] for 3 min at 95°C and then incubated for 45 min at room temperature before storing at −80°C. Bioconjugation of siRNA. To avoid any degradation of the siRNA by ribonucleases, precautions were taken before each synthesis. The bioconjugates siRNA-SQ were obtained by the copper-free 1,3-dipolar cycloaddition of azido-SQ with the DBCO-derivatized siRNA sense strands 30. The protocol to obtain the bioconjugates siRNA-SQ was slightly modified and is described as follows. One nmol of the 5′-end-modified DBCO-C6 sense strand of the siRNA (1 mg/mL in DNAse/RNAse-free water) was mixed with 50 nmol of SQ-N3 (1 mg/mL in DMSO) in a glass vial containing DMSO (286 µL) and acetone (65 µL). The solution was then incubated at room temperature for 12 h under stirring to obtain the bioconjugate sense-strand siRNA-SQ. The following day, excess acetone was eliminated under nitrogen flow for 30 min, followed by lyophilization for 24 h to remove the solvents.
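As a back-of-the-envelope check of the 50-fold molar excess used in this coupling step, the volumes of the two 1 mg/mL stocks can be derived from the molar amounts; the molecular weights in the sketch below are rough placeholder assumptions, not values given in the paper.

```python
# Sketch: stock volumes for the 1:50 siRNA:SQ-N3 molar ratio at 1 mg/mL stocks.
# Molecular weights are rough placeholder assumptions, not values from the paper.

MW_SIRNA_SENSE = 7000.0   # g/mol, assumed for a DBCO-C6-modified 21-mer strand
MW_SQ_N3 = 470.0          # g/mol, assumed for azido-squalene
STOCK_MG_PER_ML = 1.0     # both stocks prepared at 1 mg/mL

def stock_volume_ul(nmol: float, mw: float, stock_mg_per_ml: float) -> float:
    """Volume (µL) of a stock solution containing `nmol` of a compound."""
    mass_ug = nmol * mw / 1000.0      # nmol * g/mol = ng; /1000 -> µg
    return mass_ug / stock_mg_per_ml  # 1 mg/mL == 1 µg/µL

print(stock_volume_ul(1.0, MW_SIRNA_SENSE, STOCK_MG_PER_ML))  # ~7 µL siRNA stock
print(stock_volume_ul(50.0, MW_SQ_N3, STOCK_MG_PER_ML))       # ~23.5 µL SQ-N3 stock
```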
Purification of the bioconjugate from the excess of unconjugated SQ was performed by RP-HPLC on a polymeric column. Purified products were lyophilized and then solubilized in RNAse-free water at the desired molar concentration. Purification of siRNA-SQ bioconjugates via HPLC. HPLC purification was performed on a Thermo Scientific high-performance liquid chromatography system (Dionex UHPLC-3000) equipped with a photodiode array detector (DAD-3000) whose wavelength ranged between 190 and 800 nm, a pump and a manual injector. The stationary phase consisted of a nonporous, alkylated polystyrene divinylbenzene column (Hamilton PRP-3 10 μm, 4.6 × 250 mm, PEEK, Ref: 79574) protected by a pre-column (Hamilton). Thermofisher Chromeleon 7 software was used for data acquisition. The flow rate was 1.2 mL/min and the injection volume 100 μL. A gradient of mobile phases A and B was applied. Mobile phase A was composed of 5% 0.2 M TEAA (pH 7.0), 5% acetonitrile and 90% water, while mobile phase B consisted of 90% acetonitrile, 5% TEAA and 5% water. The gradient applied was as follows: 0-8 min, linear gradient from 0 to 24% of phase B; 8-16 min, linear gradient from 24 to 90% of phase B; 16-18 min, linear gradient from 90 to 100% of phase B; 18-30 min, 100% of phase B; 30-32 min, linear gradient from 100% of phase B to 100% of phase A; and 32-42 min, re-equilibration with 100% phase A. The bioconjugate sense-strand siRNA-SQ was purified by manual peak collection. Fractions were collected for 2 min, corresponding to a fraction volume of 2.4 mL, and then lyophilized. All lyophilized siRNA fractions were reconstituted in RNAse-free water.
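The purification gradient above is a piecewise-linear programme; the short sketch below simply transcribes the breakpoints from the text, interpolates %B at any run time, and reproduces the stated 2.4 mL fraction volume for 2-min manual collection.

```python
# Sketch: the RP-HPLC gradient as (time_min, percent_B) breakpoints, transcribed
# from the programme described above, with linear interpolation between them.

BREAKPOINTS = [(0, 0), (8, 24), (16, 90), (18, 100), (30, 100), (32, 0), (42, 0)]
FLOW_ML_PER_MIN = 1.2

def percent_b(t: float) -> float:
    """Linearly interpolate the mobile-phase B fraction at run time t (min)."""
    for (t0, b0), (t1, b1) in zip(BREAKPOINTS, BREAKPOINTS[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside the 0-42 min programme")

print(percent_b(4))    # 12.0 -> halfway through the first ramp
print(percent_b(20))   # 100.0 -> isocratic hold at 100% B
# Fraction volume for the 2-min manual peak collection:
print(2 * FLOW_ML_PER_MIN)  # 2.4 mL, matching the fraction volume in the text
```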
Fig. 2 siRNA PMP22-SQ NPs treatment normalized the motor and sensory activity of both JP18 and JP18/JY13 CMT1A mice. Behavioral tests were performed to study the motor activity of age-matched JP18 (a) and JP18/JY13 mice (b). CMT1A mice of both strains treated with siRNA PMP22-SQ NPs showed normalization of the time taken to walk across the beam or the locotronic ladder when compared to the WT; they were significantly faster and showed stronger grip strength than mice receiving 5% dextrose or siRNA Ct-SQ NPs. c Electrophysiological analysis showed that CMAP (left panel) and sensory NCV (right panel) were normalized for both JP18 and JP18/JY13 mice treated with siRNA PMP22-SQ NPs. Data represent mean ± s.e.m. 'Before' represents the analysis of the data before treatment; 'After' represents the analysis of the tests performed after the end of treatment. The JP18 and JP18/JY13 mice groups were divided blindly. Asterisks represent the significance between WT B6 and the other groups. Hashtags represent the significance between JP18 5% dextrose and the other two groups. *,#p < 0.05; **p < 0.01; ***,###p < 0.001 using ANOVA followed by Tukey's multiple comparison tests. MALDI-TOF mass spectrometry. A MALDI-TOF/TOF UltrafleXtreme mass spectrometer (Bruker Daltonics, Bremen) was used for all experiments to verify the identity of the obtained bioconjugates. Mass spectra were obtained in linear positive ion mode. The laser intensity was fixed just above the ion generation threshold to obtain peaks with the highest possible signal-to-noise (S/N) ratio without significant broadening of the peaks. All data were analyzed using the Flex Analysis software package (Bruker Daltonics). Annealing of siRNA-SQ bioconjugates to antisense siRNA strands. Annealing of both strands of siRNA PMP22 and siRNA Ct was performed after the bioconjugation of the sense strand to SQ, following the same protocol as mentioned before for the generation of siRNA from single RNA strands. Precisely, equimolar amounts of both the sense-strand siRNA PMP22-SQ bioconjugate and the antisense siRNA PMP22 were mixed in annealing buffer [30 mM HEPES-KOH (pH 7.4), 2 mM Mg acetate, 100 mM K acetate] and incubated at 95°C for 3 min, then incubated for 45 min at room temperature. The same protocol was performed to obtain the annealed siRNA Ct-SQ bioconjugate, as represented in Supplementary Fig. 10, step 1. Preparation and characterization of siRNA-SQ nanoparticles. siRNA PMP22-SQ and siRNA Ct-SQ NPs were prepared by nanoprecipitation in acetone:water (volume ratio 1:2). One phase was slowly added to the other under stirring, i.e., 10 nmol of siRNA-SQ was dissolved in 1 mL of RNase-free water and added dropwise to 500 µL of acetone under stirring. The solution was then incubated under stirring for 5 min, after which acetone was completely evaporated under nitrogen flux to obtain an aqueous suspension of pure siRNA-SQ NPs at the desired concentration. The hydrodynamic diameter (nm) of the obtained siRNA NPs was measured by dynamic light scattering (Malvern Zetasizer Nano). Samples were analyzed at 10 µM concentration in H2O. Three measurements of 5 min each per sample were performed, and the average diameter ± S.D. of at least three independent samples was calculated. Cryogenic transmission electron microscopy (cryo-TEM) was performed with a JEOL 2100 electron microscope at the Electron Microscopy Platform (IBPS/Institut de Biologie Paris-Seine, Université P. et M. Curie, Paris, France). 4 μL of siRNA PMP22-SQ NPs (concentration of 2.2 mg/mL) was placed on a carbon-coated copper grid. A filter paper was used to remove the excess solvents, and the samples were directly dipped in liquid ethane using a guillotine-like frame and transferred to a cryo-sample holder. The siRNA-SQ NPs were observed at an acceleration voltage of 200 kV under a low electron dose. Analysis was performed with ImageJ software. Cell line. The MSC80 cell line (mouse Schwann cell line), which expresses the myelin genes PMP22 and P0, was used in this study 55. In vitro cell transfection. To choose the most efficient siRNA PMP22, 3 × 10⁵ MSC80 cells were seeded in six-well plates containing complete medium until 60-70% confluency. Then, transfection was carried out using Lipofectamine 2000® according to the manufacturer's instructions in Opti-MEM reduced serum-free medium. Eight different siRNAs PMP22 and siRNA Ct were transfected at 50 nM concentration. Four hours later, the medium was replaced with complete DMEM medium. After 48 and 72 h, cells were harvested, then mRNA and proteins were extracted to determine gene and protein expression. After choosing the most efficient siRNA PMP22 sequence based on the different criteria, MSC80 cells were seeded and transfected with different concentrations of siRNA PMP22 (25, 50 and 100 nM). Cells were collected after 48 and 72 h for gene expression analysis. To study the efficacy of siRNA PMP22-SQ NPs, the same protocol as mentioned above was performed. Each experiment was performed at least three times in duplicate. mRNA extraction and real-time PCR (RT-qPCR). Total RNA was extracted from MSC80 cells using the RNeasy mini-kit (Qiagen, Courtaboeuf, France).
First-strand cDNA was generated with the M-MLV RT buffer pack (Invitrogen, Charbonnières-les-Bains, France). Real-time PCR (qPCR) was carried out using a StepOnePlus PCR System (AB Applied Biosystems, Villebon-sur-Yvette, France) with Maxima SYBR Green/ROX qPCR Master Mix (Thermo Scientific, Villebon-sur-Yvette, France), according to the manufacturer's instructions. Each experiment was performed at least three times in triplicate. Gene expression was determined by the 2^(−ΔΔCt) method and normalized to 18S levels. Relative mRNA expressions of targeted genes were compared to non-treated cells.
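As an illustration of the 2^(−ΔΔCt) quantification just described, a minimal sketch with hypothetical Ct values, using 18S as the reference gene as in the text:

```python
# Sketch of the 2^(-ΔΔCt) relative quantification used for the qPCR data.
# Ct values below are hypothetical placeholders, not measurements from the study.

def relative_expression(ct_target: float, ct_18s: float,
                        ct_target_ctrl: float, ct_18s_ctrl: float) -> float:
    """Fold change of a target gene vs. non-treated control, normalized to 18S."""
    delta_ct_sample = ct_target - ct_18s              # normalize sample to 18S
    delta_ct_control = ct_target_ctrl - ct_18s_ctrl   # normalize control to 18S
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical example: the PMP22 Ct rises by 1 cycle after treatment,
# corresponding to roughly 50% knockdown relative to non-treated cells.
print(round(relative_expression(24.0, 12.0, 23.0, 12.0), 2))  # 0.5
```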
Western blot analysis. Total protein extracts from both MSC80 cells and sciatic nerve tissue were obtained using RIPA buffer (Sigma R0278) supplemented with a protease inhibitor cocktail (Roche, Neuilly-sur-Seine, France). Sciatic nerve tissues were first homogenized using a Precellys 24® homogenizer (Bertin Technologies, France). All samples (cells and tissues) were then incubated for 2 h under rotation at 4°C for complete protein extraction and then centrifuged at 13,000 rpm for 20 min. The supernatant was stored at −80°C. Cell viability test. The effects of siRNA PMP22, siRNA Ct, siRNA PMP22-SQ NPs and siRNA Ct-SQ NPs on the viability of MSC80 cells were tested using the MTT assay. A total of 20 × 10³ MSC80 cells were seeded in a 96-well plate containing complete medium. When the cells reached 60-70% confluency, they were transfected using Lipofectamine 2000 as mentioned before. After 72 h, the cells were incubated for 2 h at 37°C with MTT solution according to the manufacturer's instructions. Then, the medium was removed and replaced by DMSO. Absorbance was read at 570 nm and the mean of three independent experiments was recorded ± standard error of the mean (s.e.m.). CMT1A mice models. Two separate lines of transgenic mice, JP18 and JY13, established by Perea et al. 32, were used in this study. Transgenic mice were purchased after reviviscence from "Transgenesis, Archiving and Animal Models (TAAM-UPS44)". Both strains were archived by cryopreservation and generated on the C57BL/6J (B6) background. For the JP18 mouse model, the mouse PMP22 cDNA was introduced under the control of the PhCMV*-1 promoter; therefore, mice overexpressed PMP22 throughout life. The JY13 mouse model carried the intact human PMP22 gene under a tTA open reading frame that gave Schwann cell-specific expression of tTA 32. The JY13 strain was found to have little adverse effect on myelination. The double transgenic mouse model (JP18/JY13) was generated by crossing JP18 and JY13 mice. In the absence of tetracycline, PMP22 overexpression occurred throughout the mice's lifespan 32. The mice were genotyped at day 7 after birth by using specific primers of the PMP22 and tTA genes already described by Perea et al. 32. After PCR, the samples were run on a 1.5% agarose gel and the bands were visualized under UV light. Experimental approach. The experimental approach and the performed biological and chemical studies are presented in Supplementary Fig. 10, steps 2 and 3. According to Perea et al., we used JP18 mice at 16 weeks of age. At this age, the mice showed signs of pathology with demyelinating fibers 32. The animals were randomly assigned to each group before performing the behavioral and electrophysiological tests. Therefore, age-matched JP18 B6 mice were divided into three groups of nine mice each. Group one received a vehicle of 5% dextrose solution, mice of group two were treated with siRNA Ct-SQ NPs and mice of the third group were treated with siRNA PMP22-SQ NPs. In addition, one group of WT B6 mice was used as a control (nine mice). All the treatments were administered by retro-orbital i.v. injection, following a regular treatment schedule, i.e., a cumulative dose of 2.5 mg/kg at an interval of 0.5 mg/kg per injection twice per week. The beam walking test was performed before and after treatment, while the locotronic and grip strength tests were performed after treatment. After treatment, the body weight was recorded and the heart and kidney were weighed. Sciatic nerves were collected for further studies. Blood was collected directly from the heart for biochemical analysis and analyzed in the Department of Biochemistry by Pr. Patrice THEROND. The experiment was repeated twice and the results were combined. In another similar experiment, to test siRNA PMP22-SQ NPs on a more affected CMT1A mouse model, the double transgenic model (JP18/JY13), harboring two extra copies of the PMP22 gene, was used. At 12 weeks of age, JP18/JY13 mice were divided into three groups, similar to the groups of JP18 mice. The same protocol of treatment was performed and six mice per group were used, in addition to a WT B6 group (n = 5). Behavioral tests were performed as mentioned for the JP18 B6 mice group and sciatic nerves were collected afterward. Biochemical analyses were performed as for JP18. To study the long-lasting effect of siRNA PMP22-SQ NPs, JP18/JY13 B6 mice of 12 weeks of age were used. Mice were again divided into three groups of six mice each: JP18/JY13 vehicle, JP18/JY13 siRNA Ct-SQ NPs and JP18/JY13 siRNA PMP22-SQ NPs, in addition to a WT B6 group as a control (n = 5). Two cycles of treatment were administered (Supplementary Fig. 10, step 5). The first cycle used a cumulative dose of 2.5 mg/kg of siRNA PMP22-SQ NPs or siRNA Ct-SQ NPs at an interval of 0.5 mg/kg per injection, twice per week. Then, treatments were stopped for 3 weeks to check the relapse period. After relapse, a new cycle of treatment was initiated for another cumulative dose of 2.5 mg/kg of siRNA-SQ NPs at an interval of 0.5 mg/kg per injection twice per week. Fig. 3 siRNA PMP22-SQ NPs inhibit Pmp22 protein expression and promote myelination and axonal regeneration factors. a, b Representative western blots showing the Pmp22 protein content of the different JP18 and JP18/JY13 treatment groups. Each well represents a different mouse. The protein quantification of Pmp22 was performed on at least five different mice per group for both strains. Pmp22 protein was normalized to tubulin as a reference protein. c, d Quantification analysis of immunohistochemistry images showing SOX10, KROX20 and neurofilament in JP18 and JP18/JY13 mice, respectively. SOX10 and KROX20 were normalized over DAPI expression to calculate their expression in percentage. Three different mice were analyzed per group. Data represent means ± s.e.m. Blue asterisks indicate significant differences between the WT and the other groups, purple asterisks indicate significant differences between 5% dextrose and the other groups, green asterisks indicate the differences between siRNA Ct-SQ NPs and the other groups and red asterisks the differences between siRNA-SQ NPs and the other groups. *p < 0.05; **p < 0.01; ***p < 0.001 using ANOVA analysis followed by Tukey's multiple comparison tests.
Behavioral tests were performed before treatment, at 1.5 mg/kg of the first treatment cycle, at 2.5 mg/kg of the first treatment cycle, 2 weeks after stopping the first treatment cycle, 3 weeks after stopping the first treatment cycle, at 1.5 mg/kg of the second treatment cycle and at 2.5 mg/kg of the second treatment cycle. The number of animals per group is in accordance with the 3Rs rule, which aims to reduce the use of animals in preclinical research. All animal experiments were approved by the institutional Ethics Committee of Animal Experimentation and research council, registered with the French Ministry of Higher Education and Research « Ministère de l'Enseignement Supérieur et de la Recherche; MESR, autorisation N°: APAFIS#10131-2016112916404689 ». The work was carried out according to French laws and regulations under the conditions established by the European Community (Directive 2010/63/UE). The investigation was conducted in accordance with ethical standards and according to the Declaration of Helsinki. All efforts were made to minimize animal suffering. Administration of treatments was performed under isoflurane anesthesia and animals were sacrificed by cervical dislocation. All animals were housed in a sterilized laminar-flow caging system. Food, water and bedding were sterilized before animals were placed in the cages. Food and water were given ad libitum. Beam walking test. To test the gait of JP18 and JP18/JY13 mice, the mouse beam walking test was performed (Supplementary Fig. 10, step 3). Mice were allowed to walk on a rod of 3 cm diameter and 70 cm length, positioned around 30 cm above a flat surface 56. At one end of the rod, a secure platform was set to house the animal. First, the mouse was allowed to adapt and then trained to cross the beam from one side to the other. The time to cross the beam was recorded for analysis. Each animal was tested for three trials per session, before starting the treatment and after the treatment protocols as detailed above. For the long-lasting efficacy experiment, the beam walking test was performed at different time points. The test was repeated three times per animal and was recorded by a camera. Data were presented as the average time spent by the mice per group ± s.e.m. Locotronic test. The locotronic apparatus (Intellibio Innovations A-1805-00049) was used to test motor coordination when walking (Supplementary Fig. 10, step 3). The mice were allowed to cross a 75 × 5 × 20 cm horizontal ladder with bars (7 mm in diameter) set 2 cm apart. Infrared photocell sensors situated above and below the bars monitored paw errors. The locotronic apparatus was linked to software that automatically recorded the time taken by the mouse to cross the path. The time was assessed in three trials, with 20 min of rest between trials. The statistical analysis of the data was performed by calculating the mean of three trials for each animal. Data are presented as the mean ± s.e.m. For the regular treatment, the test was performed at the end of the treatment, while for the long-lasting experiment it was performed at different time points. Grip strength test. Neuromuscular strength was assessed using the grip strength test (BIOSEB Innovation, model BIO-GS3) (Supplementary Fig. 10, step 3). This test was performed using an automated grip strength meter. The apparatus consisted of a T-shaped metal bar and a rectangular metal bar connected to a strength transducer.
To measure strength in the forepaws of the mice, each mouse was held gently by the base of the tail, allowing the animal to grasp the T-shaped metal bar with its forepaws. As soon as the mouse grasped the transducer metal bar with its forepaws, the animal was pulled backwards by the tail until grip was lost. This step was repeated three times and the highest strength was automatically recorded in grams (g). To measure the strength of all limbs, each mouse was allowed to grasp the rectangular metal bar with the fore and hind limbs. After that, it was gently pulled by its tail perpendicular to the axis of the apparatus until the animal lost its grip. The highest strength was recorded automatically. For the regular treatment protocol, the test was performed at the end of the treatment, while for the long-lasting experiment it was performed at different time points as detailed above. Data represent the average strength per group ± s.e.m. Fig. 4 siRNA PMP22-SQ NPs are efficient in activating myelination. a Representative TEM images of ultrathin sections of sciatic nerves of WT (n = 5) and JP18-treated mice (n = 5) (×5000 magnification), followed by g-ratio analysis of myelinated fibers. Scale bar 2 µm. The bars represent the mean. b High-magnification TEM images (×120 k) show the myelin layer distance (scale bar 50 nm) and its corresponding analysis. Data represent predicted means with 95% confidence intervals for the layer distance analysis. The blue lines mark a zone of 14 myelin layers for the WT and show the difference between the JP18 groups. c The same analysis was performed for JP18/JY13 mice (n = 5 per group). siRNA PMP22-SQ NPs significantly decreased the g-ratio and d the inter-myelin distance in JP18/JY13 mice. In JP18/JY13 mice, a compaction of the myelin layer distance was observed upon siRNA PMP22-SQ NPs treatment and became identical to WT. The blue lines mark a zone of 14 myelin layers for the WT and show the difference between the JP18/JY13 groups. *p < 0.05; **p < 0.01; ***p < 0.001 using ANOVA analysis followed by Tukey's multiple comparisons test. Electrophysiological test. The test was performed with a standard EMG apparatus (Natus UltraPro S100 EMG) in accordance with the guidelines of the American Association of Neuromuscular and Electrodiagnostic Medicine. Anesthesia was performed by isoflurane inhalation: mice were placed in an induction chamber containing 1.5-2% isoflurane in pure oxygen. During the whole procedure, anesthesia was maintained at the same level through a face mask. Mice were placed on their front on a heating pad to maintain their body temperature between 34 and 36°C. For recording the CMAP, three needles were inserted into the mouse's thigh: the stimulator needle electrode at the sciatic nerve notch level, the anode electrode in the upper base part of the tail, while the receptor needle (or recording needle) was inserted in the medial part of the gastrocnemius muscle. A supramaximal square-wave pulse of 8 mA was delivered through the stimulator needle and recorded through the muscle as amplitude. For the measurement of sensory NCV, multiple stimulations of the caudal nerve were delivered through the stimulator needle, which was located at two-thirds of the length of the tail, at a distance of 2-2.5 cm from the receptor needle. The ground electrode was inserted halfway between the stimulator and receptor electrodes. The sensory NCV was calculated from the latency of the stimulus and the distance between the stimulator and receptor electrodes 57.
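The sensory NCV computation described above reduces to conduction distance over response latency; a minimal sketch, where the 2.5 cm electrode separation comes from the text and the latency is a hypothetical placeholder:

```python
# Sketch: sensory nerve conduction velocity from the stimulus latency and the
# stimulator-receptor electrode distance. The latency value is hypothetical.

def sensory_ncv_m_per_s(distance_cm: float, latency_ms: float) -> float:
    """NCV (m/s) = conduction distance / response latency."""
    distance_m = distance_cm / 100.0
    latency_s = latency_ms / 1000.0
    return distance_m / latency_s

# Hypothetical example: 2.5 cm electrode separation, 0.8 ms latency.
print(round(sensory_ncv_m_per_s(2.5, 0.8), 1))  # 31.2 m/s (< 38 m/s, CMT1A-like)
```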
Sacrifice and organ collection. The body weight of each mouse was measured before sacrifice, and the heart and kidney were weighed to assess organ hypertrophy. Sciatic nerves were taken and used for the histological and protein investigations (Supplementary Fig. 10, step 4). Investigation of myelination and axonal regeneration markers by confocal microscopy. Sciatic nerves were incubated in 4% paraformaldehyde solution at 4°C for 2 h for fixation. After that, samples were washed three times for 5 min each with phosphate buffer solution, followed by 5 min of incubation with 5% sucrose prepared in PBS. Then, overnight incubation with 20% sucrose was done. The next day, each sciatic nerve was embedded in O.C.T. Tissue-Tek (Sakura 4583) and immediately snap-frozen in ice-cold isopentane solution, placed in liquid nitrogen and then stored at −80°C. Frozen sections of 5 µm thickness were prepared with a cryostat at −20°C and placed on (3-aminopropyl)triethoxysilane (Merck, 440140)-coated SuperFrost slides. Slides were either used directly or stored at −80°C. Before starting the permeabilization step, the slides were allowed to equilibrate at room temperature for ~30 min. For the nuclear proteins SOX10 and KROX20, a permeabilization step was performed using 0.2% Triton X and 0.1% Tween 20 solution in PBS 1X for 30 min. Samples for the cytoplasmic protein NFs were permeabilized using ice-cold methanol for 10 min. All samples were then blocked for 1 h using blocking buffer consisting of 2% BSA, 0.1% Tween 20 and 5% FBS (replaced by normal donkey serum for SOX10). Subsequently, samples were immunostained for markers specific for myelinating Schwann cells (EGR2 or KROX20), myelinating and non-myelinating Schwann cells (SOX10) and axonal fibers (NF). Primary antibodies listed in Supplementary Table 4 were prepared in their respective blocking buffers and samples were incubated overnight at 4°C. Then, samples were washed with PBS 1X three times for 5 min each, followed by incubation with the complementary secondary antibodies (Supplementary Table 4) for 1 h at room temperature. Afterwards, three washes with PBS 1X of 5 min each were done. Cell nuclei were stained with 5 µg/mL DAPI in PVA mounting medium (Inova Diagnostics, San Diego, CA). Digital images were obtained with an Olympus IX70 fluorescence microscope (Olympus, Tokyo, Japan) equipped with a Leica DFC340 FX camera using the Leica Application software. Images were analyzed using ImageJ software. Fig. 6 Treatment was stopped for 21 days between the two cycles to investigate the relapse. The mice were followed weekly to analyze the recovery and relapse periods. a Represents the time in seconds taken by the mice to perform the beam walking test and locotronic test. b Represents grip strength force analysis on the forelimbs and the total limbs. The red box highlights the efficient dose of 1.5 mg/kg at which the mice were able to perform better in both the motor and muscular activities. Before treatment: corresponds to the data analyzed before starting the treatment of each group, which were chosen blindly. After 2 and 3 weeks: correspond to the data collected after stopping treatment for 2 and 3 weeks. Data represent mean ± s.e.m. Asterisks represent the significance between WT B6 and the other groups; hashtags represent the significance between JP18/JY13 5% dextrose and JP18/JY13 siRNA PMP22-SQ NPs. **p < 0.01, ***p < 0.001 (ANOVA analysis followed by Tukey's multiple comparisons test). Exploration of fiber counts and g-ratio. Sciatic nerves were incubated in 3.6% glutaraldehyde solution (Sigma-Aldrich cat# G5882) for 4 h at 4°C.
This was followed by a PBS 1X wash for 5 min and a post-fixation step in 2% osmium tetroxide (Sigma-Aldrich cat# 251755, 5 mL) for 2 h. The dehydration step consisted of 5-min washes using 50%, 80%, 95% and 100% ethanol, respectively. The samples were further incubated in acetone solution twice for 15 min, then in an acetone-EPON solution (50:50) for 15 min, followed by embedding in EPON solution twice for 30 and 60 min, respectively. Finally, the samples were carefully positioned in molds filled with liquid epoxy resin solution consisting of 25 mL EPON (Fluka, cat# 45345), 11 mL DDSA (Fluka, cat# 45346), 15 mL MNA (Fluka, cat# 45347) and 0.70 mL DMP30 (Fluka, cat# 45348) in the desired orientation, either for transverse or longitudinal sections, and polymerized for 48 h at 60°C. For thionine blue staining, semi-thin sections of 1 µm thickness of sciatic nerves were prepared using an ultramicrotome (Ultratome, Leica, Germany) and sections were stained with thionine blue for 1 h at 60°C, followed by washes with 100% ethanol and xylene, respectively. Slides were covered with a cover slip using a mounting medium (Eukitt, Merck, France). Tissues were finally scanned by optical microscopy and analyzed for fiber diameter density. For transmission electron microscopy, ultrathin sections of 70 nm were prepared using an ultramicrotome (Ultratome, Leica, Germany) and contrasted with uranyl acetate solution and lead citrate. Sections were analyzed using a JEOL 1010 electron microscope (JEOL, Japan) and a digital camera (Gatan, US). Images were used to calculate the g-ratio and the interperiodic distance of the myelin sheath. Statistics and reproducibility. Statistics were computed with GraphPad Prism 8.3.0 software. Outliers were identified using Grubbs' test. Differences in group means were calculated by one-way ANOVA followed by Dunn's multiple comparison test. For studies requiring grouped analyses, a one-way ANOVA followed by Tukey's multiple comparison test was performed. When two groups were compared, a Mann-Whitney analysis was performed to assess the statistical difference. A value of p < 0.05 was considered significant. All measures were taken from distinct samples, and the sample sizes are presented in the figure legends. All the in vitro experiments were performed at least three times independently. Data are presented as mean ± s.e.m. To define the cut-offs of fiber diameter, the relationship between fiber counts (after square-root transformation) and diameter (after log transformation) was modeled using a quadratic model, where predictions from this model were used to identify the cutoff. The estimated quadratic equations, whose parameters were estimated using least-squares regression, are presented in Supplementary Table 2.
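A sketch of the cut-off procedure just described, under stated assumptions: the histogram is hypothetical, the transformations (square root of counts, log of diameters) follow the text, and taking the stationary point of the fitted quadratic as the small/large cutoff is one plausible reading of "predictions from this model were used to identify the cutoff".

```python
import numpy as np

# Sketch: fit sqrt(count) vs log(diameter) with a least-squares quadratic and
# take the stationary point of the fit as the small/large fibre cutoff.
# The histogram below is a hypothetical toy example, not data from the study.

diam_um = np.array([2, 3, 4, 5, 6, 7, 8, 9], dtype=float)      # bin centres (µm)
counts = np.array([95, 80, 55, 30, 22, 35, 60, 70], dtype=float)

x = np.log(diam_um)   # log-transformed diameter
y = np.sqrt(counts)   # square-root-transformed counts

a, b, c = np.polyfit(x, y, deg=2)  # least-squares quadratic fit, a > 0 here,
x_vertex = -b / (2 * a)            # so the vertex is the minimum between modes

print(round(float(np.exp(x_vertex)), 2))  # cutoff back on the µm scale, ~5 µm
```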
Electrospun Poly (Vinyl Alcohol) Nanofibrous Mat Loaded with Green Propolis Extract, Chitosan and Nystatin as an Innovative Wound Dressing Material Purposes The objective of this work was to produce and characterise biodegradable poly (vinyl alcohol) (PVA) nanofibre loaded with green propolis extract (GPE), chitosan (CS) and nystatin (NYS), alone and in mixtures, as a potential wound dressing material. Methods The GPE, NYS and CS1% were loaded in electrospinning compositions based on PVA 7%, 8% and 12% solubilised in milli-Q water or a mixture of water and glacial acetic acid. The electrospinning compositions without actives (blank) and those loaded with actives were characterised by determining the pH, electrical conductivity and rheological properties. An image analysis procedure applied to photomicrographs obtained by scanning electron microscopy (SEM) allowed the determination of the nanofibres' diameter distribution and average surface porosity. The disintegration time and swelling ratio of the nanofibre mats were also determined. Results The physicochemical parameters of the electrospinning compositions (pH, electrical conductivity and rheology) and the incorporated active ingredients (GPE, CS and NYS) affected the properties of the electrospun nanofibre mats. The electrospun nanofibres' mean diameters and surface porosity ranged from 151.5 to 684.5 nm and from 0.29 ± 0.04 to 0.50 ± 0.05. The PVA/CS electrospun nanofibres exhibited the smallest diameters, high surface porosity, water absorption capacity and disintegration time. The characteristics of the PVA/CS nanofibre mats, associated with the biodegradability of the polymers, make them a novel material with the potential to be applied as wound and burn dressings. Introduction The need for efficient and accurate solutions in various industrial sectors has stimulated the development of novel materials and processes to obtain them. Special attention is given to the biomaterials used in a wide range of devices for human health, such as vascular stents, dental restorations, artificial hips and contact lenses [1,2]. Among the widely studied biomaterials, nanofibres have a prominent place. Nanofibres are nanostructures in the form of a mat of very thin and tangled fibres. They have applications in several fields of knowledge due to their inherent properties, such as a high surface area, microporous structure, excellent stability and easy functionalisation. Nanofibres have been used as drug delivery systems [3,4], scaffolding [5] and as dressings for wounds and burns [6]. Various methods can be used to produce nanofibres, and among them, electrospinning is a promising and efficient technique. Electrospinning uses electrostatic forces to elongate fine fibres from a polymeric composition, forming a fibre mat [7,8]. The selection of the polymeric material (pure or mixed) used in the preparation of the formulations, as well as its concentration and physicochemical properties, are extremely important factors for the success of the operation and for obtaining electrospun fibres with properties suitable for the desired application (for example, wound dressing and drug delivery, among others). Natural (e.g. chitosan, pullulan, gelatin and collagen) and synthetic polymers (e.g. polylactic acid - PLA, poly(methyl methacrylate) - PMMA and polyethylene oxide - PEO) - alone or in blends - are commonly used for nanofibre production [9]. PVA is a semi-crystalline, synthetic organic polymer that is biodegradable under aerobic and anaerobic conditions.
It has been widely used to prepare biomaterials such as hydrogels, films, membranes, scaffolds and nanofibres [10,11]. Pharmaceutical uses of PVA include topical and ophthalmic formulations, stabilising agents for emulsions, and sustained release formulations for oral use. It is biocompatible, water-soluble, easy to process and has good chemical and thermal stability [10,12]. These properties make PVA an ideal material for the production of electrospun nanofibres loaded with active pharmaceutical ingredients; for example, herbal extracts (e.g. Ziziphus jujube Mill. [13], Aloe vera [14] and Glycyrrhiza glabra [15]), non-steroidal anti-inflammatory drugs (sodium salicylate, diclofenac sodium, indomethacin and naproxen) for transdermal drug delivery, and to improve the solubility of the poorly water-soluble drug probucol [16]. The electrospinning process is influenced by the physical and chemical properties of the electrospinning composition, such as pH, electrical conductivity, rheological properties, solvent system and the type and concentration of polymeric material. The equipment's operating conditions - electrical potential, the distance between the collector and the feeding needle, the diameter of the feeding needle and the feed flow rate of the electrospinning composition - and the environmental conditions of temperature and relative humidity affect the process and the properties of the fibre mats formed. They should be optimised prior to use [17-20]. The use of volatile solvents in the preparation of the composition is preferable as they favour removal during electrospinning. An overview of the current literature indicates several studies that have evaluated the influence of processing conditions on the properties of the nanofibres formed [21-23]. However, given the process complexity and the lack of robust theoretical analysis, experimental studies are needed for each polymeric composition processed and equipment configuration. This work aimed to evaluate the feasibility of electrospinning to produce biodegradable nanofibres from poly (vinyl alcohol) (PVA) loaded with active pharmaceutical ingredients (APIs), directed at future pharmaceutical applications (e.g. as biodegradable dressings for wounds and burns). Three different APIs with proven antimicrobial efficacy were selected to be incorporated into the nanofibres (pure or in blends): green propolis extract [24], chitosan [12,25] and nystatin [26]. Figure 1 shows the chemical structures of the PVA, chitosan, nystatin and of selected constituents of the green propolis extract used, namely coumaric acid, artepellin C, baccharin, drupanin, aromadendrin-4'-methyl ether and 2,2-dimethyl-6-carboxyethenyl-2H-1-benzopyran (DCBEN) [27]. The resulting polymeric solutions with and without APIs were characterised by determining the pH, electrical conductivity and rheological behaviour. Electrospun nanofibres were successfully engineered and characterised in terms of morphology, fibre diameter distribution and average surface porosity. The disintegration time and swelling capacity in an aqueous environment were also evaluated. Electrospinning Apparatus The electrospinning apparatus consisted of a high voltage (0 to 50 kV) direct current source (Electrotest HIPOT CC, Model EH6005C), an infusion pump (Harvard, Model Elite I/W PROGR SINGLE) and a nanofibre collector consisting of a stainless-steel cylinder with a diameter of 100 mm and a length of 200 mm.
The nanofibre collector was coupled to a variable-speed motor that allowed changing the collector's rotation speed up to 600 rpm. The electrospinning process took place inside a fully grounded, electrically insulated compartment to minimise the occurrence of electrical discharges during operation. Fig. 1 Molecular weight, chemical formula and structure of the PVA, chitosan, nystatin* and of selected constituents of the green propolis extract used*, namely coumaric acid, artepellin C, baccharin, drupanin, aromadendrin-4'-methyl ether and 2,2-dimethyl-6-carboxyethenyl-2H-1-benzopyran (DCBEN) [27]. * Source: https://pubchem.ncbi.nlm.nih.gov/ Preparation of the Concentrated Green Propolis Extract First, the crude green propolis (kept under refrigeration at 8 ℃ for 12 h) was ground to a fine powder using a laboratory blender. The particle size was standardised by sieving through a 42-mesh sieve. Then, 80 g of the powdered propolis was placed in contact with 1200 mL of a hydroethanolic solution at 70% (v/v) in a jacketed stirred vessel (dynamic maceration) at a controlled temperature of 50 ℃ [29] for 180 min. The extract was filtered and concentrated three times in a rotary evaporator to remove the excess ethanol. After cooling, the concentrated extract was centrifuged at 1056 × g for 10 min in an Eppendorf centrifuge (5430 R) to remove the non-soluble material. The supernatant (green propolis extract, GPE) was withdrawn and used as the GPE active ingredient in the nanofibre production. The solid concentration of the GPE was 5.5% (w/w), determined in a Sartorius MA-35 moisture analyser balance (Sartorius Lab Instruments GmbH & Co. KG, Goettingen, DE). Chemical Characterisation of the Concentrated Green Propolis Extract High-performance liquid chromatography with diode array detection (HPLC-DAD) is a method widely used for the separation, quantification and characterisation of the constituents of complex samples such as the GPE and herbal extracts [29-31]. Hence, a qualitative chemical characterisation of the GPE was done by obtaining its HPLC-DAD chromatographic profile (fingerprint) according to the method described by de Sousa et al. [27]. Analyses were performed in an HPLC Shimadzu Prominence LC-20A series using an LC-6A double pump (Shimadzu Corporation, Kyoto, Japan) and a C-18 column (Shimadzu Shim-Pack CLC(M), 4.6 mm × 25 cm, particle diameter of 5 µm, pore diameter of 100 Å) at 30 ℃. The gradient analysis started with a mixture consisting of 75% of solvent A (acetic acid/ammonium acetate/methanol/water at the ratio of 0.8:0.3:5.0:93.9 v/w) and 25% of solvent B (acetonitrile), increasing linearly up to 100% of B over 60 min at a flow rate of 1.0 mL/min. The diode array detector (DAD) monitored the spectral data over the HPLC run at 270 to 320 nm. The chromatographic profiles were plotted at 280 nm. The sample of the GPE was diluted in methanol at a concentration of 5 mg/mL and filtered through a 0.45 µm Millipore membrane, and 10 µL was injected into the chromatograph. Standard methanolic solutions of caffeic acid (50 µg/mL), cinnamic acid (25 µg/mL) and ferulic acid (50 µg/mL) were prepared, and 10 µL was also injected into the chromatograph. Confirmation of the quality and authenticity of the GPE was performed by comparing the HPLC fingerprint, retention times and UV spectra of the GPE with the previous results reported in the literature [27].
Preparation of the PVA Solutions Loaded with GPE, NYS and CS The polymer concentrations used were chosen based on preliminary electrospinning runs, processing issues and data reported in the literature [32-34]. At concentrations of PVA and chitosan higher than 12% and 3%, respectively, the composition viscosity increases significantly, making the electrospinning very hard to conduct. On the other hand, the composition rheology and solid concentration at low concentrations can hinder the electrospinning or increase the processing time needed to produce an adequate nanofibre mat (data not shown). The selected ranges allowed obtaining electrospun nanofibre mats of high quality. Hence, PVA solutions at 7% and 8% w/v (PVA/W7% and PVA/W8%) were prepared by mixing weighed amounts of PVA in purified water at a temperature between 80 and 90 ℃, maintaining the mixtures under stirring until complete dissolution. After cooling, the crude extract of green propolis was incorporated into the PVA 7% solution at 70:30 w/w (F1). The formulation loaded with GPE and nystatin was obtained by adding the GPE to the PVA/W8% solution, followed by 20 mg of nystatin previously solubilised in 1 mL of methanol. The resulting proportion used was 70:30:20 mg (PVA:GPE:NYS - formulation F2). Stirring was maintained until complete incorporation (~60 min). The same procedure was used to prepare the PVA solution at 12% (w/v). However, the solvent system was a water:glacial acetic acid solution at a 70:30 ratio (PVA/G12%). Chitosan acetate solution at 1% (w/v) was prepared by mixing a weighed amount of powdered chitosan in water:glacial acetic acid (30:70) at room temperature, maintained under stirring for 60 min (CS1%). The CS electrospinning solutions were prepared by blending the formulations PVA/G12% and CS1% at the proportion of 50:50 (F3). Additionally, 20 mg of NYS was weighed, solubilised in 1 mL of methanol and mixed with PVA/G12%. The mixture was kept under stirring for 60 min for complete incorporation (formulation F4). Finally, PVA/G12% + CS1% + NYS was prepared at a ratio of 50:50:20 mg (F5). The total volume of the formulations prepared was 50 mL, independent of their compositions. The five different compositions tested were set to demonstrate the potential of the PVA, alone or blended with CS, for the production of versatile electrospun nanofibres loaded with the antimicrobial agents GPE and NYS, and were based on literature reports and preliminary assays [35-37]. The antimicrobial properties of GPE and NYS are well described in the literature, besides their use in clinical practice (NYS). Potential of Hydrogen (pH), Electrical Conductivity and Rheology of the Formulations The properties of the electrospun nanofibre mats formed are significantly affected by the physicochemical properties of the electrospinning formulations (e.g. hydrogen potential - pH, electrical conductivity and rheological behaviour) [14]. These properties were determined for formulations F1 to F5 and the blank formulations (without the addition of actives), namely PVA/W7%, PVA/W8%, PVA/G12% and CS1%. The pH of formulations F1 to F5 was determined in a previously calibrated Metrohm model 827 digital pH meter (Metrohm AG, Herisau, CH), while the electrical conductivities were measured at room temperature in a Metrohm 912 bench-top conductometer (Metrohm AG, Herisau, Switzerland). The measurements were done in triplicate, and the results were expressed as means and standard deviation.
The rheology of formulations F1 to F5 and of the blank solutions was determined with a Brookfield LV-DVIII coaxial-cylinder rheometer (Brookfield Engineering Laboratories Inc., Middleboro, USA), equipped with the small sample adapter and the SC4-18 spindle sensor. The formulation was placed into the small sample adapter, and the spindle was coupled to the equipment. After calibrating the system, the spindle rotation started, and the velocities were increased according to the preset programme. After reaching a predefined rotation, the reverse process was carried out. The system was connected to a personal computer running the Brookfield Rheocalc 3.2 software, which controlled the rheometer and collected the experimental shear rate and corresponding shear stress data. Electrospinning of the PVA Compositions Loaded with GPE, NYS and CS1% The electrospinning operation was performed at a controlled temperature and relative humidity (~ 22 ℃ and 40%). The formulations (F1 to F5) were placed in a 5 mL plastic syringe attached to a metallic needle with a 0.6 mm opening. The positive electrode of the high-voltage source was connected to the needle tip, while the negative one was coupled to the metallic nanofibre collector. The distance from the tip of the metallic needle to the collector's metallic surface was fixed at 10 cm. The collector was previously covered with aluminium foil to facilitate the removal of the deposited nanofibre layer. The rotation velocity of the collector was fixed at 595 rpm. The electrospinning conditions were maintained constant for all experimental runs at a flow rate of 0.5 mL/h, an electrical potential of 20 kV and 3 h of electrospinning. These conditions were selected following the preliminary runs (data not shown). Physicochemical Characterisation of the Electrospun Nanofibre Mats After the electrospinning operation, the nanofibre mat was removed from the collector and characterised by determining the morphology, nanofibre diameter distribution, average nanofibre diameter and average surface porosity. The behaviour of the formed nanofibre mats in an aqueous medium was also evaluated through disintegration and swelling capacity assays. The procedures used in these characterisations are presented below. Morphology of the Electrospun Nanofibre Mats The morphology of the electrospun nanofibre mats was evaluated through photomicrographs obtained by scanning electron microscopy (SEM). Three samples of approximately 5 × 5 mm from the electrospun nanofibre mats were fixed in aluminium sample holders with double-sided carbon conductive tape and coated with carbon and gold in a Bal-Tec SCD-050 sputter coater at a pressure of 0.1 mbar. The nanofibre photomicrographs were obtained using a Philips XL-30 FEG microscope (initial tests) and a Carl Zeiss EVO 50 scanning electron microscope (optimised formulations) at magnifications of 10 kx, 20 kx and 30 kx. Diameter Distribution, Average Diameter and Surface Porosity of the Electrospun Nanofibre Mats The determination of the diameter distribution, average diameter and surface porosity of the electrospun nanofibre mats was done by analysing the SEM photomicrographs with the aid of DiameterJ, an ImageJ image analysis software plug-in. DiameterJ, developed by Hotaling et al. [38,39], is an open-source, validated software used to analyse the nanofibres' diameter distribution, porosity fractions and other physical properties.
The DiameterJ plug-in and its description are freely available at https://imagej.net/DiameterJ. The results obtained were average values over two SEM photomicrographs of each electrospun nanofibre mat. The nanofibres' diameter distribution allowed the SPAN determination, SPAN = (d_90 − d_10)/d_50, where d_10, d_50 and d_90 are the fibre diameters corresponding to 10%, 50% and 90% of the distribution. Disintegration Time, D_T The measurements of D_T were made according to the methodology described by Çay et al. [35], with some modifications. Small parts of the electrospun nanofibre mats (~ 1 cm × 1 cm) were carefully cut and placed in Petri dishes. Purified water was added dropwise using a Pasteur pipette (~ 22 drops or 1 mL). The disintegration behaviour was monitored through pictures of the samples taken at preset intervals (timed with a chronometer). Swelling Ratio The percentage swelling ratio (S_R) was determined by immersing the samples in a sufficient amount of purified water. In a watch glass, parts of the fibres were carefully cut and weighed (W_d). Drops of purified water were then deposited on the samples. Excess water was removed with filter paper, and the samples were reweighed at specified times to determine the swelled weight (W_s). S_R was calculated using Eq. (1): S_R = 100 · W_s/W_d (1)
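As an aside for reproducibility, the SPAN and swelling-ratio calculations above are simple enough to script. The minimal Python sketch below uses hypothetical diameter data and weights, and the S_R formula follows our reconstructed reading of Eq. (1):

```python
import numpy as np

def span(diameters):
    """Distribution width: SPAN = (d90 - d10) / d50."""
    d10, d50, d90 = np.percentile(diameters, [10, 50, 90])
    return (d90 - d10) / d50

def swelling_ratio(w_swelled, w_dry):
    """Eq. (1) as reconstructed here: S_R = 100 * W_s / W_d (in %)."""
    return 100.0 * w_swelled / w_dry

# hypothetical fibre diameters (nm), e.g. as exported by DiameterJ
diams = np.random.default_rng(1).lognormal(mean=np.log(450.0), sigma=0.35, size=500)
print(f"SPAN = {span(diams):.2f}")
print(f"S_R  = {swelling_ratio(w_swelled=102.4, w_dry=10.0):.0f}%")
```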
Statistical Analysis The experimental results for the pH, electrical conductivity, nanofibre mean diameter and surface porosity were submitted to one-way ANOVA, followed by the Tukey post-hoc test (p ≤ 0.05), to detect statistically significant differences. Chemical Characterisation of the Concentrated Green Propolis Extract The GPE was previously characterised by determining its HPLC-DAD profile (HPLC fingerprint) according to the method presented in the 'Chemical Characterisation of the Concentrated Green Propolis Extract' section (Fig. 2). The comparison of the HPLC fingerprint, retention times and UV spectra of the GPE with the previous results reported by de Sousa et al. [27] exhibits a high similarity for the main peaks, confirming the authenticity and quality of the GPE used. By comparing the HPLC retention times and UV spectra with those obtained for the standard substances, it was also possible to identify the compounds: 1, caffeic acid (retention time 4.02 min); 2, ferulic acid (retention time 5.40 min); and 3, cinnamic acid (retention time 14.42 min). Table 1 presents the experimental values of pH and electrical conductivity of the electrospinning formulations loaded with the active compounds and of the blank compositions (PVA/W7%, PVA/W8%, PVA/G12% and CS1%). Potential of Hydrogen, pH It can be seen in Table 1 that the pH and electrical conductivity of the electrospinning and blank formulations varied according to their constituents. The pH values were directly impacted by the solvents used, specifically water or the combination of glacial acetic acid and water. Formulations with water as the solvent (F1 and F2) have a higher pH than the preparations in acetic acid (F3, F4 and F5) due to the high concentration of H+ ions when the mixture of water and acetic acid was used. In general, adding the actives (GPE, NYS, CS1% and mixtures) did not cause a significant change in pH compared to the blank compositions. Although the pH varied significantly among the formulations, all electrospinning formulations produced good-quality electrospun nanofibre mats. Rwei and Huang [40] showed in their study with PVA solutions at various concentrations that the pH did not affect the diameters of the electrospun fibres. Electrical Conductivity The electrical conductivity of the solution is a relevant parameter in the electrospinning process. It indicates the concentration and mobility of charges on the surface of the electrospinning solution. A critical conductivity range must be established, since defect-free fibres (without beads) will not form at conductivity values that are either too low or excessively high. For compositions with excessively low electrical conductivity, the migration of charges to the surface of the electrospinning formulation is almost entirely suppressed; the conductivity can, however, be adjusted by replacing the solvent system or adding ionic additives such as salts and mineral acids. On the other hand, if the conductivity is too high, the generation of the Taylor cone and the onset of flexural instability are impaired due to the difficulty of accumulating surface charges on the electrospinning composition drop. Therefore, there is an ideal range of electrical conductivity values in which electrospinning is feasible, generating micrometric or nanometric fibres. Within this range, fibre diameters tend to decrease as the electrical conductivity increases [41,42]. Usually, the electrical conductivity values are preset for each intended preparation. The experimental conductivity values found in this study (Table 1) ranged from 311.5 to 500.4 µS/cm. For the blank formulations based on water (W), the conductivity ranged from 277.6 to 313.6 µS/cm when the PVA concentration increased from 7 to 8%. Similar behaviour was observed by Niu et al. [43], who found an increase in conductivity (from 192.47 to 325.31 µS/cm) directly proportional to the concentration of zein (0 to 100%) in aqueous solutions; the behaviour was associated with the decrease in water content during polymer ionisation. The electrospinning formulations loaded with the actives (F1 to F5) presented approximately similar conductivity values, which were higher than those of the blank PVA solutions. As reported in the literature, a slight variation in the conductivity of the formulation can be observed when adding compounds [44,45] or increasing the polymer concentration [46]. The blank chitosan composition and composition F3 (composed of PVA and chitosan) showed the highest conductivities, namely 500.4 µS/cm and 365.0 µS/cm, respectively. Although chitosan was used in this work for the biological functionalisation of the fibres, it also positively affects the formation and yield of electrospun nanofibres, perhaps due to interactions of the PVA/CS functional groups promoting the formation of hydrogen bonds in the PVA/CS membranes. These compositions are more advantageous than pure PVA, especially in applications as tissue supports and wound-healing materials [33]. Rheology The rheological properties of the electrospinning formulations strongly influence the electrospinning process and the properties of the electrospun nanofibres formed [22,40,47]. There is a direct relationship between the formulation's viscosity and the entanglement of the formed electrospun nanofibre mats. Figure 3 shows the rheograms of the electrospinning formulations used in the present work. The rheograms of the formulations are similar, exhibiting slightly pseudoplastic behaviour, except for F4, which exhibited Newtonian behaviour.
On the other hand, the rheology of the blank PVA/W and PVA/G solutions exhibited typical Newtonian behaviour. The CS composition showed a slightly thixotropic behaviour, exhibiting a hysteresis area between the ascending and descending rheology curves, with the descending curve located in an upper position. The experimental shear stress data as a function of shear rate for the compositions loaded with the bioactive compounds and for the blank and CS1% solutions were fitted using the classical Ostwald-de-Waele power-law model (Eq. 2), exhibiting excellent agreement (traced lines in the graphs of Figs. 3 and 4): τ = K · γ̇^n (2), where τ is the shear stress, γ̇ is the shear rate, K is the consistency index, which measures the difficulty of the fluid to flow, and n is the flow index, which measures the deviation of the fluid from Newtonian behaviour. A Newtonian fluid has a flow index of 1.0. Table 2 presents the parameters of the Ostwald-de-Waele model adjusted to the rheological data of the electrospinning formulations loaded with GPE, CS and NYS (F1 to F5) and of the blank and CS1% solutions, with the corresponding coefficients of determination (R²). The apparent viscosity at a shear rate of 1 s−1 (ascending curve), which coincides with the value of the consistency index, was used to compare the rheological behaviours of the different formulations under study: η_ap = K · γ̇^(n−1) = K at γ̇ = 1 s−1 (3). The slight pseudoplastic behaviour that emerged in the electrospinning formulations was caused by adding the actives. Macromolecules can be organised in a state of lower energy, such as coils. Under an applied shear rate, they reorganise and orient themselves in the flow direction, with a decrease in apparent viscosity [24]. This behaviour applies to samples F1, F2, F3 and F5. Formulation F4 shows a remarkable increase in consistency, reaching a viscosity value of 253.510 P. This is caused by the simultaneous increase in the PVA concentration and the change of the solvent system. An abrupt increase in the viscosity of PVA in water and binary solvents was also observed by Rosic et al. [22] and Mahmud et al. [48] at PVA concentrations above 10% (w/v). This change can be attributed to the greater number of chain entanglements and inter- and intramolecular interactions of the polar -OH groups of PVA in the system. Guerrini et al. [47] also observed pseudoplastic behaviour in 12.4% PVA solutions in water, while Goncalves et al. [34] faced difficulty in electrospinning formulations containing PVA and chitosan due to their very high viscosities. Regarding the addition of active compounds, the combination GPE + NYS decreased the viscosity of the PVA/W8% blank solution from 5308 to 3195 P. Adding CS1% to the PVA/G12% solution leads to a rheological behaviour similar to that of the PVA/W electrospinning formulations (F1 and F2), perhaps due to a dilution effect, since a 50:50 PVA/G12%:CS1% ratio was used. Although the rheology of compositions F1, F2, F3 and F5 showed almost similar behaviour, the characteristics of the nanofibre mats changed significantly. This trend reinforces the direct impact of the interaction between the constituents of the electrospinning formulation on the nanofibre mat structure.
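The power-law fitting reported in Table 2 can be reproduced with a short script. The sketch below fits Eq. (2) to hypothetical shear-rate/shear-stress pairs and evaluates the apparent viscosity of Eq. (3) at 1 s−1; the numerical values are illustrative only, not measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def ostwald_de_waele(shear_rate, K, n):
    """Power-law model (Eq. 2): tau = K * gamma_dot ** n."""
    return K * shear_rate ** n

# hypothetical rheometer data: shear rate (1/s) and shear stress (Pa)
gamma_dot = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
tau = np.array([5.1, 9.8, 22.9, 43.0, 80.5, 185.0, 348.0])

(K, n), _ = curve_fit(ostwald_de_waele, gamma_dot, tau, p0=(1.0, 1.0))
eta_ap = K * gamma_dot ** (n - 1.0)        # apparent viscosity, Eq. (3)
print(f"K = {K:.2f}, n = {n:.3f}, eta_ap(1 1/s) = {eta_ap[0]:.2f}")
```

A flow index n below 1.0 from such a fit indicates the slightly pseudoplastic behaviour discussed above, while n close to 1.0 recovers the Newtonian case.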
Figures 5 and 6 show SEM photomicrographs of the electrospun nanofibres formed from the PVA/W7% and PVA/W8% formulations loaded with GPE and GPE + NYS (F1 and F2, respectively) and from PVA/G12% loaded with CS1%, NYS and CS + NYS (F3 to F5, respectively), at a magnification of 20 kx. It can be seen from the figures that the active therapeutic agents in the solutions significantly affected the morphological characteristics of the fibres, such as their appearance, diameters and entanglement. The images do not evidence the presence of granules, beads or other defects. Diameter and Surface Porosity of the Electrospun Nanofibre Mats The diameter distribution, average diameter and average surface porosity of the electrospun nanofibre mats were determined through image analysis of the SEM photomicrographs. From the results, it was possible to plot the differential and cumulative distributions of nanofibre diameters (Figs. 7 and 8) for the electrospun nanofibres based on the PVA/W formulations loaded with GPE and GPE + NYS (F1 and F2) and on the PVA/G12% compositions loaded with CS, NYS and CS + NYS (F3 to F5). The data plotted in Figs. 7 and 8 allowed for the determination of the average fibre diameter (d_average), the distribution parameters d_10, d_50 and d_90, and the distribution SPAN, given by SPAN = (d_90 − d_10)/d_50. The parameters d_10, d_50 and d_90 refer to the fibre diameters corresponding to 10%, 50% and 90% of the distribution. Table 3 summarises the results obtained, with the average porosity values (ε_average) estimated by image analysis. The electrospun nanofibre mats show significant morphological differences. The nanofibre samples F1 and F2 exhibited fewer entanglements and a more misshapen distribution than samples F3, F4 and F5. The morphology suggests a fused fibre format (mainly for the F1 sample). The mean diameters found (Table 3) for these fibres were the largest, 684.5 ± 13.7 nm (F1) and 476.2 ± 10.7 nm (F2). The diameter distribution parameters (Table 3) indicate greater diameters at d_90 due to the proportion of fibres in the form of wider ribbons, which tend to become larger than the narrow ones. Although these features are common to both fibres, they are more prominent in F1. For sample F2, the association of GPE and NYS significantly altered the fibres' morphological characteristics, exhibiting a decrease in fibre diameter at the connection points (interconnections). There are reports of a significant increase in the diameter of electrospun fibres loaded with propolis and other natural compounds, including microfibre formation, depending on the concentration of actives in the formulation [49,50]. However, studies involving associations between natural and synthetic actives in nanofibres still demand more experimental work. The propolis concentration in the formed nanofibre mats reached 30% (w/v) in both the PVA/W7% and PVA/W8% solutions, a concentration also used by Razavizadeh and Niazmand [50]. The lower volatility of the F1 and F2 electrospinning formulations loaded with GPE can be pointed out as interfering with the morphology of the nanofibre mats. The material subjected to electrospinning might remain slightly moist when it reaches the collector surface, making the fibres wider as they are deposited wet on top of each other. The low solubility of propolis, combined with its adhesive propensity, can favour the connection of fibres at the crossing points. These points can represent an excellent reinforcement element for the formed fibres and may be desirable for application in wound dressings [51]. Perhaps a decrease of the GPE concentration in the polymer solution can improve the morphology and diminish the size of the propolis nanofibres. Kim et al.
[51] found that the measured diameters were uniform and the morphological changes were minimised for small propolis concentrations in the electrospinning formulations. Nanofibres loaded with CS1% and NYS (F3, F4 and F5) showed a predominance of well-defined entanglements, random orientation and the smallest diameters. The solution composition (polymer concentration and solvent system) and the addition of CS1% (F3 and F5) had a positive effect on processability and on fibre morphology and diameter. It was possible to obtain well-formed nanofibres from the more viscous PVA/G12% + NYS formulation (F4), which exhibits Newtonian rheological behaviour. In addition, the presence of 70% acetic acid in the solvent system and the addition of NYS greatly increased the consistency of formulation F4, forming nanofibres of a smaller diameter (220.9 ± 9.3 nm). Shibata [16] reported an increase in nanofibre diameter as the PVA concentration in aqueous solution increased up to 10% w/v, with a slight tendency to decrease from 12% PVA onwards. The researchers also proposed that the PVA molecules arrange themselves in solution according to the concentration, showing that they can orient in the direction of the applied electric field, allowing for complete elongation during electrospinning, in addition to the formation of defect-free fibres of adequate diameters. For Mahmud et al. [48], the smallest diameters were obtained from formulations with 20% glacial acetic acid:water at PVA concentrations of 7%, 10% and 15%. Chitosan can be used both as a thickener for PVA solutions, improving the morphology [52], and for the biological functionalisation of nanofibre mats, as proposed in this study. However, when dissolved in an acidic medium, chitosan behaves as a polyelectrolyte, increasing the charge density on the drop's surface during electrospinning and reducing the nanofibre diameter [53]. In the micrographs of F3 and F5, the smallest and most uniform diameters can be seen. With the addition of the antimicrobial nystatin to F5, there was a slight increase in nanofibre diameter. This trend was also observed by Taepaiboon et al. [7] during the production of PVA electrospun nanofibres loaded with four model drugs: sodium salicylate, sodium diclofenac, indomethacin and naproxen. The authors found that the morphology of the nanofibres was directly affected by the properties of the polymer solution and the type of model drug added. Regarding the surface porosity of the electrospun nanofibre mats, the presence of larger and smaller fibres, as observed in F1 and F2, resulted in a good pore size distribution with high interconnectivity. This result was also observed for the F4 electrospun nanofibre sample. The size of the pores can be directly related to the fibre diameter, which can impact the interactions between the electrospun matrix and bacteria, fungi and other living cells. Nanofibres exhibit greater cell adhesion and proliferation advantages, whereas microfibres are ideal for providing larger pores and promoting cell infiltration [53,54]. Therefore, the combined presence of micro- and nanofibres can be advantageous, taking into account the positive characteristics inherent to each of them. The formed materials have excellent properties linked to their intended use in smart dressings.
Disintegration Time and Swelling Ratio of the Electrospun Nanofibre Mats The behaviour of the electrospun nanofibre mats in an aqueous environment was evaluated, as their disintegration time and swelling capacity are of great importance when they are intended to be applied as dressings for chronic wounds or in the topical or mucosal delivery of active ingredients. Figure 9 shows the disintegration behaviour of the electrospun nanofibres based on the PVA/W formulations loaded with GPE and GPE + NYS (F1 and F2) and on the PVA/G12% compositions loaded with CS, NYS and CS + NYS (F3 to F5). Nanofibres composed of PVA are expected to present predominantly hydrophilic characteristics, since PVA is a water-soluble polymer. The shape and disintegration time of nanofibre samples F1, F2 and F4 point to this aspect, showing an almost immediate breakage of the mats after contact with water. For sample F4, the disintegration was almost instantaneous (~ 2 s), while it took nearly 15 min for samples F1 and F2. Small fragments were observed after the disintegration of samples F1 and F2, which remained 24 h later. Nanofibre samples F3 and F5 showed a change in appearance when in contact with water, behaving like a gel after absorbing water, possibly linked to the presence of chitosan in their composition. After 12 h of the test, it was possible to observe the presence of small fragments in the F1 and F2 nanofibre samples and the undissolved parts of F3 and F5. Compatibility with a humid environment is considered an advantage for materials proposed for use in wound dressings, as healing occurs more efficiently in a moist environment [55]. Upon the dissolution of samples F1, F2 and F4, it is remarkable how quickly the water broke the structure of the nanofibre mats composed of PVA, allowing for the quick release of the actives. The small fragments observed during the tests with samples F1 and F2 may be related to GPE, a resinous compound with low solubility, and to NYS, which is practically insoluble in water. Nystatin, an antifungal drug of the polyene class, is not absorbed through the gastrointestinal tract and is commonly administered topically and through the mucosa (e.g. buccal and vaginal) [56]. The external use of propolis for the treatment of oral, skin and genital diseases is well described in the excellent review reported by Sung et al. [57]. Recently, Berreta et al. [58] reported the internal use of propolis as a tool for combating SARS-CoV-2's action mechanisms. In the study by Adomaviciute et al. [59], it was found that the structures of the nano- and microfibres of PVP loaded with propolis and silver nanoparticles presented visual disintegration and dissolution within 10 min. The water was also able to break those mats immediately after contact, and the release of fragments was observed at different stages of dissolution, as in this study. In the samples with PVA and CS, dissolution was slow and incomplete due to the interaction of PVA with the polysaccharide chitosan. This feature is also interesting, as it highlights the possibility of absorbing exudates, since the material tends to swell when immersed in a humid environment. The swelling ratio was measured for the samples that remained insoluble, specifically F3 and F5. The result was 924% (F3) and 1100% (F5) at the end of 8 h of immersion in an aqueous environment. Archana et al.
[60] found maximum swelling ratios of 1215% in a pH 2 buffer and 900% in a pH 7 buffer for chitosan, pectin and TiO2 nanofibres. Generally speaking, the sample with the highest swelling ratio will have the highest surface area/volume ratio. The wettability and hydrophilicity, added to the greater swelling capacity of the nanofibres provided by the addition of CS, can favour the absorption of exudates in wound beds and can also be used as an innovative delivery system for the incorporated active ingredients. There is a positive relationship between better water absorption and fibres that exhibit a good presence of pores. Zhang et al. [61] studied PVA/chitosan membranes and observed the best morphology of PVA/CS nanofibres at a higher proportion of PVA (80:20). The membranes' high surface-to-volume ratio and porosity are linked to a greater propensity to absorb water. Herein, we have demonstrated that a PVA/CS ratio of 50:50 in samples F3 and F5 provided a product with high water absorption capacity. This result is promising, since membranes with greater water absorption and swelling capacity are suitable for application as a matrix for wound and burn dressings. In addition, the electrospinning compositions developed were very stable, confirming the compatibility between the polymers and active ingredients used. The constituents of the electrospinning compositions are biodegradable, biocompatible, non-toxic, water-based, promptly available and low-cost. These advantages show the technical and economic feasibility of the nanofibres developed here as a matrix for innovative wound dressings, which is the subject of ongoing studies, including in vitro and ex vivo assays. Conclusions It is feasible to produce electrospun biodegradable micro- and nanofibre mats from PVA incorporated with green propolis extract (GPE), nystatin (NYS) and chitosan (CS), alone or in mixtures. The properties of electrospun nanofibres are significantly affected by the constituents and physicochemical properties of the electrospinning formulations, such as pH, electrical conductivity and rheology. Adding chitosan with concentrated acetic acid positively affects the properties of the electrospun nanofibres, resulting in fibres of small diameter with reduced defects and high swelling capacity. These characteristics are ideal for applying the produced PVA/CS nanofibre mats as a novel material for wound and burn dressings or for transdermal delivery systems.
2022-09-02T05:15:39.564Z
2022-08-30T00:00:00.000
{ "year": 2022, "sha1": "28227313a72417043a63e64efbc48345c5cfe1fb", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "28227313a72417043a63e64efbc48345c5cfe1fb", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
245488510
pes2o/s2orc
v3-fos-license
A Quasioppositional-Chaotic Symbiotic Organisms Search Algorithm for Distribution Network Reconfiguration with Distributed Generations Department of Power Systems, Ho Chi Minh City University of Technology (HCMUT), 268 Ly Thuong Kiet Street, District 10, Ho Chi Minh City, Vietnam Vietnam National University Ho Chi Minh, Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam Institute of Engineering and Technology, Thu Dau Mot University, Thu Dau Mot City, Binh Duong Province, Vietnam PEC Technology, 170-170 Bis Bui Thi Xuan Street, Pham Ngu Lao Ward, District 1, Ho Chi Minh City, Vietnam Introduction In distribution networks, reconfiguration is a traditional technique to minimize power loss in the system by opening/closing switches to establish a new optimal network structure. Recently, distributed generation (DG) has been swiftly implemented in distribution networks because of its great economic, environmental, and technical benefits. The optimal allocation of DG units in the distribution network can reduce power loss and other risks such as excess reverse power flow, harmonic distortion, line overload, and overvoltage during the operation of the network [1]. It is recognized that the combination of the optimal network reconfiguration and DG placement problems would significantly decrease power loss and enhance the performance of the distribution network. Nevertheless, distribution network reconfiguration (DNR) is a complex optimization problem, since it has 2^n candidate solutions (where n is the number of switches) [2]. Finding the optimal solution among these candidates while satisfying the radial structure and operating constraints is a challenge for problem-solving techniques. In addition, the optimal DG placement (ODGP) problem is a complex mixed-integer nonlinear optimization problem and is likewise regarded as an obstacle for optimization methods. Therefore, the combination of the ODGP and DNR problems (DNR-DG) becomes an even more complex optimization problem for any solution approach. This research aims to propose a successful optimization method for solving the DNR-DG problem. Merlin and Back initially solved the DNR problem for power loss reduction [3]. In that study, the authors expressed the DNR as a mixed-integer nonlinear optimization problem and solved it with a discrete branch-and-bound method. In [4], Civanlar et al. recommended a branch exchange scheme to address the DNR problem for load balancing and power loss minimization. In [5], Martín and Gil developed a novel heuristic branch-exchange methodology depending on the direction of the branch power flow to decrease the real power loss in the DNR problem. In [6], Gohokar et al. formulated the DNR problem using a network topology concept, in which nodes and branches can be numbered in any order; a single-loop optimization procedure was developed to find the optimum network topology. The aforementioned heuristic methods are simple to implement and provide excellent solutions for small-scale problems. Nevertheless, they expose limitations when facing complex optimization problems with changing objective functions. Therefore, these methods have not really attracted researchers. Advanced optimization methods (i.e., metaheuristics) have been developed and applied to various engineering fields. They are capable of handling various complex constraints and different objective functions. The genetic algorithm (GA) is deemed a well-known metaheuristic, successfully implemented for the DNR problem.
In [7], a modified GA was suggested for the DNR with the power loss reduction objective for 16- and 33-bus RDNs. In [8], GA was enhanced by utilizing the edge-window-decoder encoding scheme to minimize the power loss via the DNR problem. Another famous metaheuristic approach is particle swarm optimization (PSO). In [9], an adaptive PSO was presented for the reconfiguration of RDNs for real power loss reduction. In [10], the niche binary PSO algorithm was developed to optimally reconfigure the RDN; this algorithm overcame the prematurity of the original PSO to obtain a better solution. In [11], improved selective binary PSO was offered as an alternative for the DNR to decrease the power loss. Other metaheuristic algorithms have been implemented effectively for the DNR problem, for instance, the ant colony search algorithm [12], cuckoo search algorithm (CSA) [13], fireworks algorithm [14], honeybee mating optimization [15], stochastic fractal search [16], and binary group search optimization [17]. The DNR problem becomes more complex when DG units are integrated into distribution networks. Thus, metaheuristic-based approaches are more suitable than heuristic methods for finding an optimal solution. Several metaheuristic methods have been recommended for the DNR-DG problem. In [2], a harmony search algorithm (HSA) was proposed for solving the DNR-DG problem for the 33- and 69-bus RDNs. The aim of that research was voltage profile improvement and real power loss reduction. Also, in studies [18,19], the fireworks algorithm (FWA) and adaptive cuckoo search algorithm (ACSA) were, respectively, presented for the DNR-DG with a coverage similar to that of the research in [2]. In [20], the DNR-DG problem was solved by the adaptive shuffled frogs leaping algorithm (ASFLA). In that study, the simulated outcomes from the 33-bus and 69-bus RDNs for various circumstances showed that the ASFLA was more efficient than ACSA, FWA, and SFLA. In [21], the salp swarm algorithm (SSA) was suggested for handling the DNR problem with DG placement; the effectiveness of SSA was also tested on the 33- and 69-bus RDNs. Generally, the aforementioned metaheuristic methods were implemented on small- and medium-scale test systems, and the research did not take large-scale systems into account. Moreover, the majority of the research employed metaheuristic approaches to tackle the DNR-DG issue. Although these techniques have a significant search capacity for an optimal solution, there is no guarantee that they would be effective for all optimization problems: a metaheuristic method can effectively solve a specific optimization problem yet fail to be effective for others. Therefore, there is always room to suggest new effective metaheuristic methods for dealing with complex optimization problems. This research suggests a powerful optimization strategy to manage the DNR-DG problem towards minimum real power loss. The suggested approach is the Quasioppositional Chaotic Symbiotic Organisms Search (QOCSOS) method developed in our previous work [22]. The QOCSOS algorithm embeds the QOBL and CLS search strategies to boost the solution quality and convergence speed of the original SOS. In QOCSOS, the QOBL strategy helps the algorithm to explore more promising domains, thereby increasing the chance of obtaining a better solution. As a result, the algorithm's exploration capacity is enhanced. In addition to QOBL, the CLS strategy also helps the algorithm to avoid being trapped in local optima.
It locally explores the neighbourhood of the current best solution for better exploitation. Consequently, the integration of both QOBL and CLS strategies keeps a balance between exploration and exploitation and significantly improves the performance of the SOS algorithm. The suggested QOCSOS technique is applied to simultaneously obtain the optimal configuration and DG placement in the 33-, 69-, and 119-bus RDNs. This research's contributions are outlined as follows: (i) The QOCSOS was adapted to the DNR-DG problem for power loss reduction. (ii) The QOCSOS was successfully applied to the 119-bus large-scale system for the DNR-DG problem. (iii) The simulation results showed that the simultaneous consideration of optimal network reconfiguration and DG placement substantially enhanced the distribution networks' performance with regard to voltage profile and power loss compared to network reconfiguration or DG placement alone. (iv) The outcome comparison illustrated that the QOCSOS technique is more successful than the original SOS and other compared approaches regarding the quality of the obtained solution. The remaining sections of the paper are as follows. Section 2 describes the problem formulation of the DNR-DG. Section 3 presents the QOCSOS algorithm, in which the original SOS, QOBL, and CLS are introduced. Section 4 explains the implementation of the proposed QOCSOS for the DNR-DG problem. The results of numerical simulations are presented in Section 5. Finally, the conclusions are given in Section 6. Problem Formulation The main goal of the DNR-DG problem is to minimize the real power loss (P_L) in RDNs while all operating constraints are satisfied: P_L = Σ_{k=1..N_L} R_k · I_k² (1), where R_k is the resistance of the k-th branch, I_k is the current passing through that branch, and N_L is the number of branches in the RDN. The operational constraints for the objective function in equation (1) are given as follows: i. Power balance constraints: P_SS + Σ_{i=1..N_DG} P_DG,i = Σ_{j=1..N_B} P_D,j + Σ_{k=1..N_L} P_L,k and Q_SS + Σ_{i=1..N_DG} Q_DG,i = Σ_{j=1..N_B} Q_D,j + Σ_{k=1..N_L} Q_L,k, where N_DG is the number of DGs; N_B is the number of buses in the RDN; P_D,j and Q_D,j are the active and reactive power of the load demands at the j-th bus, respectively; P_L,k and Q_L,k are the active and reactive power losses in the k-th branch, respectively; P_SS and Q_SS are the active and reactive power outputs at the slack bus, respectively; and P_DG,i and Q_DG,i are the active and reactive power outputs of the i-th DG, respectively. ii. Voltage constraint: V_min,i ≤ V_i ≤ V_max,i, where V_min,i and V_max,i denote the voltage bounds at the i-th bus. iii. Thermal limit: |I_k| ≤ I_max,k, where I_max,k represents the maximum current allowed to flow through the k-th branch. iv. DG generation constraint: P_DGmin,i ≤ P_DG,i ≤ P_DGmax,i, where P_DGmin,i and P_DGmax,i denote the capacity limits of the i-th DG. v. DG penetration constraint: the total capacity of the installed DG units is limited relative to the network load. vi. Radial configuration constraint: the radial topology must be maintained after reconfiguration [23], which can be checked through the matrix A representing the connection of branches and buses in the RDN [23]: the configuration is radial if det(A) = ±1 and non-radial if det(A) = 0.
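For illustration, the two quantities at the heart of this formulation, the radiality check and the power-loss objective of Eq. (1), can be sketched as follows. The graph-based test below (a spanning-tree check, equivalent to the determinant condition on A) and the toy 5-bus data are illustrative assumptions, not part of the original study:

```python
import networkx as nx

def is_radial(closed_branches, n_buses):
    """Radiality test: the closed branches must form a spanning tree of the
    network, i.e. the graph is connected with exactly n_buses - 1 branches."""
    g = nx.Graph()
    g.add_nodes_from(range(1, n_buses + 1))
    g.add_edges_from(closed_branches)
    return g.number_of_edges() == n_buses - 1 and nx.is_connected(g)

def real_power_loss(branch_R, branch_I):
    """Objective of Eq. (1): P_L = sum_k R_k * I_k**2."""
    return sum(R * I**2 for R, I in zip(branch_R, branch_I))

# toy 5-bus example with hypothetical branch data
closed = [(1, 2), (2, 3), (2, 4), (4, 5)]
print(is_radial(closed, n_buses=5))                       # True
print(real_power_loss([0.10, 0.20, 0.15, 0.30], [40.0, 25.0, 18.0, 10.0]))
```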
Original SOS Algorithm. The SOS method was developed based on a natural ecosystem with symbiotic relations between two different organisms [24]. The search process starts by randomly generating a population of organisms (ecosystem) within the bounds of the search space. In the population, each organism represents a solution. At each iteration, the population is updated based on the mutualism, commensalism, and parasitism phases. Each phase is defined as follows. Mutualism. Based on mutualistic relationships, a random j-th organism is selected from the population to interact with the i-th organism during this phase. Vectors O_i and O_j are the i-th and j-th organisms in the population, respectively. New organisms are created as follows [24]: O_i^new = O_i + rand(0,1) × (O_best − MV × bf_1) and O_j^new = O_j + rand(0,1) × (O_best − MV × bf_2), with MV = (O_i + O_j)/2, where MV is the average of the i-th and j-th organisms, representing the mutualistic relationship; bf_1 and bf_2 are benefit factors randomly chosen as 1 or 2; and O_best denotes the best organism in the population. The fitness value is calculated for each organism, and the new organism replaces the old one only if its fitness value is better (equation (10)). Commensalism. In this phase, the i-th organism interacts with a j-th organism, randomly selected from the population, based on the commensal interaction. A new organism is generated as follows [24]: O_i^new = O_i + rand(−1,1) × (O_best − O_j). The new organism is updated as described by equation (10). Parasitism. In this phase, the i-th organism acts as a parasite, and a random j-th organism acts as the host. In the parasitic interaction of two different organisms, the parasite benefits, while the host is harmed. Vector O_i is duplicated to create a Parasite_Vector (PV). A new candidate solution (O_PV) is created by randomly modifying some variables of the PV vector [24]. The O_PV vector replaces the host O_j if it has a better fitness value; otherwise, it is discarded. QOBL Strategy. The QOBL strategy is performed when SOS generates a new population of organisms. The QOBL approach is also applied when the initial population is randomly initialized. The opposite point O_o,i of each organism O_i is calculated as O_o,i = a + b − O_i, where a and b denote the lower and upper bounds of the search space [25]. Then, the quasiopposite point O_qo,i is generated as a random point between the centre of the search interval, c = (a + b)/2, and the opposite point O_o,i [22]. The pseudocode of QOBL (Algorithm 1) is illustrated in Figure 1. CLS Strategy. To increase the likelihood of finding better solutions, the CLS approach is utilized to explore the vicinity of the current best solution. A new candidate solution is created via the CLS strategy [27] from the current best organism O_best,k and two random organisms O_i,k and O_j,k selected from the population, where O_new_best,k is the new organism created via CLS at the k-th iteration and Z_k is generated from the "logistic map" [28]. The fitness values are computed for the organisms O_new_best,k and O_best,k, and the new organism replaces the current best only if its fitness value is better. QOCSOS. The QOCSOS method is developed based on the original SOS with the integration of the QOBL and CLS strategies. Firstly, QOCSOS generates a population of organisms O. Afterwards, the quasiopposite population O_qo is created via the QOBL strategy. From the set [O, O_qo], QOCSOS selects the N (i.e., the size of the ecosystem) best organisms as the initial population according to their fitness values. Next, the operation of SOS is performed. At the end of this stage, when a new population has been created, a jumping rate parameter j_r determines whether to implement the QOBL approach or to keep the current population. Lastly, the CLS approach is used to refine the best organism. The QOCSOS operation is repeated until the stopping condition is satisfied. Figure 2 shows the pseudocode of the QOCSOS method.
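A compact sketch of the whole QOCSOS loop, written for a generic continuous minimisation problem, may clarify how the three SOS phases, the QOBL jump and the CLS refinement interact. The mutualism and commensalism updates follow the standard SOS formulas; the parasite mutation, the exact CLS update and all parameter values are our assumptions where the details of [24,27] are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def qobl(O, lb, ub):
    """Quasi-opposite population: opposite point Oo = lb + ub - O, then a
    uniform draw between the interval centre c = (lb + ub)/2 and Oo."""
    Oo = lb + ub - O
    c = (lb + ub) / 2.0
    lo, hi = np.minimum(c, Oo), np.maximum(c, Oo)
    return lo + rng.random(O.shape) * (hi - lo)

def qocsos(f, lb, ub, N=30, iters=200, jr=0.3):
    """Sketch of QOCSOS for continuous minimisation of f on [lb, ub]."""
    D = lb.size
    O = lb + rng.random((N, D)) * (ub - lb)
    O = np.vstack([O, qobl(O, lb, ub)])            # QOBL-assisted initialisation
    O = O[np.argsort([f(x) for x in O])[:N]]       # keep the N fittest organisms
    fit = np.array([f(x) for x in O])
    z = 0.7                                        # logistic-map state for CLS
    for _ in range(iters):
        b = int(np.argmin(fit))
        for i in range(N):
            j = rng.choice([k for k in range(N) if k != i])
            mv = (O[i] + O[j]) / 2.0               # mutual vector
            bf1, bf2 = rng.integers(1, 3, size=2)  # benefit factors in {1, 2}
            for idx, cand in ((i, O[i] + rng.random(D) * (O[b] - mv * bf1)),
                              (j, O[j] + rng.random(D) * (O[b] - mv * bf2))):
                cand = np.clip(cand, lb, ub)       # mutualism + greedy update
                fc = f(cand)
                if fc < fit[idx]:
                    O[idx], fit[idx] = cand, fc
            cand = np.clip(O[i] + rng.uniform(-1, 1, D) * (O[b] - O[j]), lb, ub)
            fc = f(cand)                           # commensalism
            if fc < fit[i]:
                O[i], fit[i] = cand, fc
            pv = O[i].copy()                       # parasitism: mutate one gene
            k = rng.integers(D)
            pv[k] = lb[k] + rng.random() * (ub[k] - lb[k])
            fc = f(pv)
            if fc < fit[j]:
                O[j], fit[j] = pv, fc
        if rng.random() < jr:                      # QOBL jump on the population
            both = np.vstack([O, qobl(O, lb, ub)])
            ff = np.array([f(x) for x in both])
            keep = np.argsort(ff)[:N]
            O, fit = both[keep], ff[keep]
        z = 4.0 * z * (1.0 - z)                    # logistic map [28]
        b = int(np.argmin(fit))
        i, j = rng.choice(N, 2, replace=False)     # chaotic local search (assumed
        cand = np.clip(O[b] + z * (O[i] - O[j]), lb, ub)  # form of the update)
        fc = f(cand)
        if fc < fit[b]:
            O[b], fit[b] = cand, fc
    b = int(np.argmin(fit))
    return O[b], fit[b]

# usage on a toy sphere function
lb, ub = np.full(5, -5.0), np.full(5, 5.0)
x_best, f_best = qocsos(lambda v: float(np.sum(v**2)), lb, ub)
```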
Implementation of QOCSOS for the DNR-DG Problem This section discusses how QOCSOS was deployed for the DNR-DG issue to minimize the real power loss in RDNs. Three scenarios are considered as follows: (i) Case 1: considering only the DNR. (ii) Case 2: considering only the optimal DG location. (iii) Case 3: simultaneous consideration of DNR and optimal DG location (DNR-DG problem). Initialization of Population. In the population of QOCSOS, each organism O_i (i = 1, ..., N) denotes a solution vector, which consists of the opened switches and the locations and capacities of the DGs. The solution vectors for Cases 1, 2, and 3 are expressed by equations (18), (19), and (20), respectively: O_i = [SW_1, ..., SW_{N_SW}] (18); O_i = [L_DG,1, ..., L_DG,N_DG, P_DG,1, ..., P_DG,N_DG] (19); O_i = [SW_1, ..., SW_{N_SW}, L_DG,1, ..., L_DG,N_DG, P_DG,1, ..., P_DG,N_DG] (20), in which N_SW denotes the number of opened switches. Each organism is randomly generated within its boundaries, in which the opened switches and DG locations are natural numbers. Hence, the designed variables are generated uniformly at random between their lower and upper bounds, with the integer variables rounded (equation (21)), in which i = 1, 2, ..., N_SW; j = 1, 2, ..., N_DG; SW_min,i = 1 for all variables; and SW_max,i denotes the length of the i-th fundamental loop vector. The principle of finding the fundamental loop vectors can be found in [29]; L_DGmin,j = 2 for all variables, which means that DG units can be installed at all buses except the slack bus. Fitness Value. Fitness values for the organisms of QOCSOS are calculated as Fitness = P_L + K_p · Σ_i (V_i − V_i^lim)² + K_q · Σ_k (I_k − I_k^lim)² (22), in which K_p and K_q denote penalty coefficients for voltage and thermal current, respectively, and x^lim denotes the limit value of a dependent variable x (bus voltages and currents), given by x^lim = x_max if x > x_max, x^lim = x_min if x < x_min, and x^lim = x otherwise (23), where x denotes the V_i and I_k values and x_max and x_min denote the limits of V_i and I_k. Overall Procedure. The QOCSOS implementation for the DNR-DG problem can be summarized as follows. Step 1: set the QOCSOS parameters (D, N, maxIter, j_r, K). Step 2: determine the fundamental loops to define the lower and upper bounds of SW_i. Step 3: randomly generate the initial ecosystem, create its quasiopposite population via QOBL, and select the N best organisms as the initial population. Step 4: start a new iteration (Iter = Iter + 1). Step 5: define the best organism O_best having the best fitness value. Step 6: execute the mutualism phase. Step 7: execute the commensalism phase. Step 8: execute the parasitism phase. (In each phase, check the radial condition and the constraints for the new organisms and apply the approach of equation (23) if any organism violates its limits.) Step 9: move to Step 5 if organism O_i is not the final organism of the ecosystem (O_N); otherwise, continue to the next step. Check the radial constraint and apply the repairing strategy if necessary. Step 10: if rand() < j_r, implement the QOBL approach to obtain the quasiopposite points of the current population, check the radial constraint, deploy the repairing strategy, calculate the fitness values, and define the best organism O_best; otherwise, define the best organism O_best from Step 9. Step 11: implement the CLS approach to refine the best organism O_best; check the radial constraint and apply the repairing strategy. Step 12: if Iter < maxIter, go to Step 4; else, the process is complete.
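The penalised fitness of Eqs. (22) and (23) used in the procedure above can be sketched as below; the quadratic penalty form and the coefficient values are our reading of the partially lost original equations, and the post-load-flow values are hypothetical:

```python
def limit_value(x, x_min, x_max):
    """Eq. (23): limit value of a dependent variable (clamped to the bounds)."""
    return x_max if x > x_max else x_min if x < x_min else x

def fitness(p_loss, V, I, V_min, V_max, I_max, Kp=1.0e4, Kq=1.0e4):
    """Eq. (22) as read here: power loss plus quadratic penalties on voltage
    and thermal-current violations (Kp, Kq are the penalty coefficients)."""
    pen_v = sum((v - limit_value(v, V_min, V_max)) ** 2 for v in V)
    pen_i = sum((c - limit_value(c, 0.0, I_max)) ** 2 for c in I)
    return p_loss + Kp * pen_v + Kq * pen_i

# hypothetical post-load-flow values: one undervoltage and one overcurrent
print(fitness(139.55, V=[1.00, 0.94, 0.89], I=[120.0, 260.0],
              V_min=0.90, V_max=1.05, I_max=250.0))
```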
Simulation Results In this study, the 33-bus, 69-bus, and 119-bus RDNs were used to test the QOCSOS method with the three scenarios. The QOCSOS technique was run in thirty separate trials for each case study to find the optimum solution. Besides, the SOS approach was also applied to the same problem for result comparison. The initial parameters of QOCSOS were chosen as in Table 1. 33-Bus Test Network. The proposed QOCSOS method was initially implemented on the 33-bus RDN. The comprehensive data of this network were obtained from [30]. For Case 1, the QOCSOS method obtained the opened switches 7-9-14-32-37, with which the real power loss of the network was decreased from 202.67 kW (base case) to 139.5513 kW, corresponding to a power loss reduction (PLR) of 31.14%. The system's minimum voltage magnitude was raised from 0.9131 p.u. to 0.9378 p.u. Table 2 shows the reconfiguration results acquired using various optimization methods for the 33-bus RDN. For an accurate comparison of the best obtained results (power loss), the following equation can be used to assess the result improvement quantitatively: RI_QOCSOS = (R_compared_method − R_QOCSOS)/R_compared_method × 100% (24), where RI_QOCSOS is the QOCSOS result improvement compared to other methods; R_compared_method is the power loss obtained by the other compared method; and R_QOCSOS is the power loss obtained by the QOCSOS method. In equation (24), the result improvement (RI) can have a plus sign (+) or a minus sign (−). If RI has a plus sign (+), the result obtained by QOCSOS is better than that of the compared method; otherwise, the compared method has a better result than QOCSOS. Besides, if RI is zero, QOCSOS and the compared method have the same result. For Case 2, where three DG units were installed in the network, the QOCSOS determined the optimum positions for the DG units at buses 14, 24, and 30, with capacities of 0.7540 MW, 1.0994 MW, and 1.0714 MW. Consequently, a minimal real power loss of 71.4572 kW (64.74% PLR) and a minimum voltage of 0.9687 p.u. were established. Table 3 portrays the findings of optimal DG placement obtained by QOCSOS and distinct methods for the 33-bus RDN. It can be seen that QOCSOS had the same power loss as SSA [21] and lower than those of the other methods in the table. For Case 3, QOCSOS acquired the best network configuration with the opened switches 10-28-31-33-34. Concurrently, DGs were deployed at buses 7, 18, and 25, with capacities of 0.8708 MW, 0.7118 MW, and 1.2274 MW. After solving the DNR-DG problem, the minimum real power loss for this case obtained by the QOCSOS was 51.5388 kW, corresponding to a PLR of 74.57%. Table 4 displays the findings of the DNR-DG problem obtained by QOCSOS and different methods for the 33-bus RDN. Based on the RI values with the plus sign (+), the QOCSOS method obtained the best real power loss result among the compared methods for this case. Tables 2-4 show that the PLR percentages for Cases 1-3 were 31.14%, 64.74%, and 74.57%, respectively. The PLR of Case 3 was the largest of the three instances. This demonstrated that considering DNR and ODGP at the same time has a substantial influence on power loss reduction. The convergence curves of the QOCSOS and the original SOS are shown in Figure 3. For Cases 1 and 2, both the SOS and QOCSOS algorithms yielded the same real power loss outcome. Nevertheless, the convergence of QOCSOS towards the optimal outcome was faster than that of SOS in all scenarios. Moreover, the voltage profiles of the 33-bus test network for Cases 1, 2, and 3 are illustrated in Figure 4. When the scenario of simultaneous consideration of ODGP and DNR (Case 3) was addressed, the network's voltage profile was greatly enhanced. 69-Bus Test Network. The QOCSOS technique was applied to the 69-bus RDN to verify its scalability. The data are presented and tabulated in [32]. Table 5 portrays the findings of the DNR problem obtained by QOCSOS and other techniques for the 69-bus RDN. For Case 1, the switches 14-57-61-69-70 were opened by the QOCSOS to form the system's optimal configuration. Then, the system's real power loss was 98.6062 kW, which corresponds to a PLR of 56.17% in comparison to the base case. According to the RI values, the real power loss obtained by QOCSOS is slightly better than SFS [16], SSA [21], and HSA [2] and close to that of the other methods, as seen in Table 5. Table 6 presents the results of the ODGP problem obtained by QOCSOS, SOS, ACSA [19], FWA [14], HSA [2], SSA [21], and SFS [16].
For Case 2, QOCSOS found three buses, 11, 18, and 61, at which to install three DGs. The corresponding DG capacities were 0.5268 MW, 0.3804 MW, and 1.7190 MW. With the DG installation, the system's real power loss is reduced from 225.0005 kW (base case) to 69.4284 kW, which is equal to SOS, close to SSA [21], and better than the other methods. Table 7 reports the results of the DNR-DG problem yielded by QOCSOS and other algorithms for the 69-bus network. QOCSOS simultaneously obtained the optimal opened switches 14-58-61-69-70. Moreover, three DGs with capacities of 0.5376 MW, 1.4340 MW, and 0.4903 MW were located at buses 11, 61, and 64, respectively. As a result, the obtained real power loss was 35.1624 kW, which is the same as SOS and SFS [16] and better than the other compared methods. From Tables 5-7, the QOCSOS generated real power losses of 98.6062 kW, 69.4284 kW, and 35.1624 kW for Cases 1 to 3, respectively. In comparison to Cases 1 and 2, the real power loss of Case 3 was the smallest. This demonstrated that simultaneously considering the ODGP and DNR problems greatly decreased the system's real power loss. Figure 5 shows the convergence characteristics of QOCSOS and SOS for Cases 1-3. In every case, QOCSOS outperformed SOS in terms of convergence speed. After optimization, the network's minimum voltage value was 0.9495 p.u., 0.9790 p.u., and 0.9813 p.u. for Cases 1-3, respectively. These minimum voltage values were improved from 0.9092 p.u. (base case), indicating that the network's voltage profile improved significantly (Figure 6). Case 3 had the greatest minimum voltage of the three cases. This indicated that taking both DNR and ODGP into account concurrently boosted the system's voltage profile tremendously. 119-Bus Test Network. The effectiveness of the QOCSOS technique was further verified on the large-scale 119-bus RDN. The branch and network load data were provided in [33]. Table 8 demonstrates the outcomes of QOCSOS and other techniques for the DNR problem of the 119-bus RDN. As observed from this table, QOCSOS acquired the opened switches 23-25-34-39-42-50-58-71-74-95-97-109-121-129-130, generating a real power loss of 854.0309 kW. It can be seen from the RI values that this result was slightly better than that of SFS [16] and better than those of the other compared methods. Table 9 presents the results offered by QOCSOS and other methods for the ODGP problem. For this case, QOCSOS found that three DGs should be located at buses 50, 71, and 109, with corresponding capacities of 2.8833 MW, 2.9785 MW, and 3.1199 MW. The power loss result acquired by QOCSOS was identical to those of the SOS and SFS [16] methods. Table 10 presents the results for the DNR-DG problem; the power loss obtained by QOCSOS is better than those of SOS, SFS [16], and ACSA [19]. The QOCSOS method improved the result by 1.0250%, 0.7155%, 0.3702%, and 3.6128% compared to SOS, SFS [16], and ACSA [19], respectively. The real power losses produced by QOCSOS in Cases 1-3 were 854.0309 kW (34.21% PLR), 667.2940 kW (48.59% PLR), and 565.0605 kW (56.47% PLR), according to Tables 8-10. Case 3 had the greatest PLR value. This confirmed that when the ODGP and DNR problems were examined together, the system's real power loss was significantly decreased. In terms of convergence rate, QOCSOS converged to the near-optimal solution at a quicker pace than SOS, as portrayed in Figure 7. The voltage profiles of the 119-bus network are shown in Figure 8 for all cases. The minimum voltage values produced by QOCSOS after solving the DNR-DG issue were 0.9323 p.u., 0.9541 p.u., and 0.9599 p.u. for Cases 1-3, respectively.
In addition, Case 3 substantially enhanced the voltage profile after simultaneously considering the ODGP and DNR problems, as seen in Figure 8. In this case, QOCSOS outperformed the compared techniques with regard to solution quality, demonstrating its suitability for a large-scale system. Conclusion In this study, the improved QOCSOS is successfully implemented to solve the simultaneous problems of network reconfiguration and DG allocation in RDNs to reduce the real power loss. The efficacy of QOCSOS has been verified on the 33-bus, 69-bus, and 119-bus RDNs. It was found that Case 3 (the combination of optimal network reconfiguration and DG allocation) offered the best real power loss and minimum voltage magnitude compared to Case 1 (only network reconfiguration) or Case 2 (only optimal DG placement). For this case, the power loss reductions were 74.57%, 84.37%, and 56.47% for the 33-bus, 69-bus, and 119-bus RDNs, respectively. Furthermore, the findings showed that the suggested QOCSOS algorithm delivered higher solution quality with regard to loss minimization than the original SOS and several other optimization approaches, particularly for large-scale systems, as seen from the outcome evaluations. As a result, the QOCSOS algorithm provides a viable solution for the DNR-DG problem in RDNs. Data Availability No data were used to support this study. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
2021-12-26T16:10:27.503Z
2021-12-24T00:00:00.000
{ "year": 2021, "sha1": "77df6079d96db3ebeec7e4f309c800d0fc183146", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/mpe/2021/2065043.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d58cd816c66bd7ea09fa551a0054d9fbf9244191", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
254326714
pes2o/s2orc
v3-fos-license
A Reduction of Calcineurin Inhibitors May Improve Survival in Patients with De Novo Colorectal Cancer after Liver Transplantation Background and Objectives: After liver transplantation (LT), long-term immunosuppression (IS) is essential. IS is associated with de novo malignancies, and the incidence of colorectal cancer (CRC) is increased in LT patients. We assessed the course of disease in patients with de novo CRC after LT, with a focus on IS and its impact on survival, in a retrospective, single-center study. Materials and Methods: All patients diagnosed with CRC after LT between 1988 and 2019 were included. The management of the IS regimen following diagnosis and the oncological treatment approach were analyzed: Kaplan-Meier analysis as well as univariate and multivariate analyses were performed. Results: A total of 33 out of 2744 patients were diagnosed with CRC after LT. Two groups were identified: patients with restrictive IS management undergoing dose reduction (RIM group, n = 20) and those with an unaltered regimen (maintenance group, n = 13). The groups did not differ in clinical and oncological characteristics. Kaplan-Meier analysis showed statistically significantly improved survival for patients in the RIM group, with 83.46 (8.4-193.1) months in the RIM group versus 24.8 (0.5-298.9) months in the maintenance group (log rank = 0.02); a trend was also observed in multivariate Cox regression (p = 0.054, HR = 14.3, CI = 0.96-213.67). Conclusions: Immunosuppressive therapy should be reduced further in patients suffering from CRC after LT in an individualized manner to enable optimal oncological therapy and improve survival. Introduction Liver transplantation (LT) is still the only option for various conditions resulting in end-stage liver disease as well as for primary malignancies of the liver itself. After LT, life-long or at least long-term immunosuppression (IS) remains the standard for the prevention of graft rejection. Here, calcineurin inhibitors (CNI), mycophenolate mofetil (MMF), glucocorticoids (GC) and mammalian target of rapamycin inhibitors (mTORI) are most frequently used, and their fine-tuned regimen is one of the main reasons for the markedly prolonged graft survival after LT in the last decades [1]. However, side effects such as chronic kidney injury and neoplasms under the decade-long administration of CNI are well known, and the overall beneficial effects of mTORI are controversial [2,3]. With increasing graft survival, long-term outcomes after LT, including comorbidities and complications of IS therapy, are gaining more interest. For example, the risk of de novo malignancies (DNM) in patients after LT is significantly elevated, with a reported incidence 2- to 3-fold that of the general population [4,5]. Further, cancer-associated mortality is expected to become the most frequent cause of death in the cohort of LT patients and is already the leading cause of death in the second decade after transplantation [6][7][8]. Colorectal cancer (CRC) is one of the most common malignancies worldwide, and its incidence is elevated after LT [9,10]. The stage-dependent therapeutic regimen is highly standardized and consists of radiotherapy, chemotherapy and surgical resection, with optional antibody treatment depending on the individual profile. Compared with the overall population, CRC in LT patients is associated with an increased incidence, comparable with the overall rate of DNMs, and its occurrence is reported to be earlier in life [11,12].
Of note, certain underlying diseases leading to LT, such as PSC alone or in coincidence with inflammatory bowel diseases (IBD), elevate the risk of developing CRC even further, to more than sevenfold [13,14]. Additionally, non-alcoholic liver disease and hepatocellular carcinoma (HCC) have been associated with an increased risk after LT [15]. Reports of outcome after CRC in LT patients are heterogeneous: comparable survival rates have been shown, but also poorer long-term survival in patients after solid organ transplantation [11,16]. However, the handling and especially the clinical impact of modifying IS in LT patients after the diagnosis of de novo CRC remain unclear, and scientific data are not available, although recommendations have been established recently [17,18]. Previously, we investigated the effect of a reduction of immunosuppression in patients suffering from recurrent primary liver malignancies such as hepatocellular carcinoma (HCC) or from lung cancer after LT and found an impact on survival for patients with dose reduction upon diagnosis, independent of oncological treatment [19,20]. In this study, we investigate the patients' course after diagnosis of de novo CRC after LT with a focus on the impact of immunosuppressive management. Patients and Methods Patients undergoing LT for various conditions at our institution between 1988 and 2020 and with a diagnosis of de novo CRC post LT were included in the analysis. The diagnosis of CRC was confirmed by histopathology, and staging was conducted according to guidelines using the classification of the Union for International Cancer Control (UICC) based upon the TNM classification [21,22]. The oncological regimen was categorized as curative or as palliative and best supportive care (BSC). After LT, all patients were followed up periodically at our outpatient center. Intervals were based on the time after transplantation, ranging from twice a week to every twelve weeks. Here, clinical and laboratory examinations were conducted, and ultrasound-guided, transcostal needle biopsies of the graft were performed according to the internal standard protocol at 1, 3, 5, 7, 10 and 13 years, and on an individual basis thereafter. Routine surveillance via colonoscopy was conducted as recommended by current guidelines, with intervals of at least every five years and with intensified surveillance in patients suffering from inflammatory bowel disease (IBD), ranging from once or twice per year to individual intervals as recommended by the treating endoscopists [23,24]. To evaluate IS, a score first introduced by Vasudev et al. was used, allowing a semiquantitative comparison of different substances (one unit for each daily dose of: prednisone, 5 mg; cyclosporine A, 100 mg; tacrolimus, 2 mg; MMF, 500 mg; sirolimus, 2 mg) [25]. The cumulative Vasudev score, calculated by adding the score over the years, and the median score were evaluated. Using the approach presented by Rodríguez-Perálvarez et al., the impact of tacrolimus trough levels was analyzed after classification into minimized exposure (<5 ng/mL) and conventional exposure (>5 ng/mL) [26]. Here, the mean trough level was calculated (at least one measurement per year) after the diagnosis of CRC. For the assessment of the impact of IS after the diagnosis of CRC, the management of the immunosuppressive regimen was grouped into two categories for analysis: (i) maintained immunosuppression or (ii) new restrictive immunosuppressive management (RIM). RIM was defined as documented dose reduction or complete discontinuation of IS after the diagnosis of cancer.
Of note, alteration of mTOR therapy was classified differently: initiation of mTORI without reduction of prior IS was classified as (i), and only if concomitant reduction of other IS (CNI, GC, MMF) was performed were these cases grouped into (ii). The oncological course of patients was followed up via in-hospital data and reports from corresponding institutions, as therapy for LT patients was outlined in an interdisciplinary approach with primary care physicians and oncologists. Thus, data on the clinical course as well as laboratory, histological and radiological parameters were extracted from our prospectively maintained database. Statistical analysis was performed using SPSS Statistics Version 26.0 (IBM Co., Armonk, NY, USA). Owing to its retrospective character, the study design was exploratory. For testing statistically significant differences, cross-tables were used for nominal-scaled variables. The t-test was applied for continuous, normally distributed variables. For non-normally distributed values, the Mann-Whitney U-test or Kruskal-Wallis test was chosen. For the analysis of impact on survival, univariate analysis and Kaplan-Meier analysis were conducted, and log-rank tests were calculated. To evaluate effect strength, multivariate and univariate Cox regression models were used, and hazard ratios (HR) and confidence intervals (CI) were calculated. Putatively relevant variables or confounders for integration in the multivariate analysis were identified by clinical experience, such as patient characteristics (relevant comorbidities, age, sex) or oncological parameters. A p-value of <0.05 was considered significant. The study was conducted in accordance with the guidelines of the Declaration of Helsinki and was approved by the local ethics committee of our institution (protocol code EA1/255/20; date of approval: 20 October 2020).

Results
Of 2744 patients receiving LT over a 33-year span, 33 were identified with de novo colorectal cancer, yielding a prevalence of 1.2% in this population. Median time from transplantation to DNM was 12.0 years (0.9-27). Indications for initial LT and overall patient characteristics are displayed in Table 1. Prior to the diagnosis of CRC, the immunosuppressants used were CNI (n = 28; 84.8%), MMF (n = 7; 21.2%), mTORI (n = 4; 12.1%) and glucocorticoids (n = 1; 3%). A group of 31 (93.9%) patients were diagnosed with colon cancer and two (6.1%) with rectal cancer. Using the UICC criteria, 14 (42.4%) patients were stage I, eight (24.2%) stage II, six (18.2%) stage III and three (9.1%) stage IV at initial diagnosis. Based on staging and the patients' constitution, 32 (97.0%) patients were treated with curative and only one (3.0%) patient with palliative intention. Regimens consisted of oncological resection in 32 (97.0%) cases, chemotherapy in nine (27.3%) and radiotherapy in two (6.1%), with combinations in eight (24.4%) cases based on the therapy standards at the time. Adjuvant chemotherapy was administered in eight (24.2%) cases, and only one patient received palliative chemotherapy (3.0%). Median survival after diagnosis of de novo CRC was 49.6 (0.5-298.9) months. At the end of the observation period, 11 (33.3%) patients had died, and in eight (24.2%), the malignancy was stated as the cause of death. In all patients undergoing surgery, histopathology confirmed local R0-resection.
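The group comparisons reported below (Kaplan-Meier estimates, log-rank tests and Cox regression) were computed in SPSS; purely as an illustration of the analysis pipeline, the following minimal sketch shows equivalent steps using the open-source Python package lifelines. The data frame is a toy example, not the study data, and all column names are ours.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy data: survival in months after CRC diagnosis, death indicator,
# RIM group membership (1 = restrictive IS management) and age.
df = pd.DataFrame({
    "months": [83.0, 24.0, 48.0, 12.0, 60.0, 3.0],
    "died":   [0,    1,    0,    1,    0,    1],
    "rim":    [1,    0,    1,    0,    1,    0],
    "age":    [61,   58,   66,   70,   55,   63],
})

# Kaplan-Meier estimate per group (median survival time)
for label, grp in df.groupby("rim"):
    kmf = KaplanMeierFitter().fit(grp["months"], grp["died"], label=f"RIM={label}")
    print(label, kmf.median_survival_time_)

# Log-rank test between the RIM and maintenance groups
rim, maint = df[df.rim == 1], df[df.rim == 0]
result = logrank_test(rim["months"], maint["months"],
                      event_observed_A=rim["died"], event_observed_B=maint["died"])
print("log-rank p =", result.p_value)

# Multivariate Cox regression: hazard ratios with confidence intervals.
# (With so few rows, lifelines may warn about convergence; real analyses
# need an adequately sized cohort.)
CoxPHFitter().fit(df, duration_col="months", event_col="died").print_summary()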
We did not find a statistical impact of T-stage or N-classification on survival in Kaplan-Meier analysis, but M1 status was associated with significantly shorter survival (log rank 0.001). Kaplan-Meier analysis also revealed the statistical significance of UICC stage for survival after diagnosis, with a median of 66 months. The median IS-score assessed according to Vasudev et al. at the time of diagnosis was 2.0 (0.25-6.0) units, and the median cumulative IS-score was 30.5 (3.0-87.5). After diagnosis of CRC, 20 (60.6%) patients were identified in whom reduction of immunosuppression according to the RIM criteria was initiated in response to the new malignancy. Thus, two groups were formed, termed RIM and maintenance, respectively. In four patients, IS was withdrawn completely. The mean IS-score did not differ between groups at the time of diagnosis, with 2.1 (±1.5) units in the RIM group and 2.5 (±1.4) units in the maintenance group (p = 0.5). In RIM patients, reduction of CNI was initiated in all cases, with a relative dosage reduction of 45.0% (0.25-1). Additionally, MMF was reduced in four (20.0%) patients. In four (20.0%) patients, mTORI was introduced into the regimen. The immunosuppressive regimen prior to the diagnosis of CRC did not differ between the two groups, with CNIs as the backbone in 19 (95.6%) patients in RIM and in nine (69.3%) patients in the other group. The Wilcoxon test for non-parametric paired variables revealed a statistically significant dose reduction of IS in the RIM group, with an IS-score of 2.1 (±1.5) units prior to and 1.4 (±1.5) units after the diagnosis of CRC (p < 0.01). The most frequent indications for LT were alcoholic liver disease (ALD) and primary biliary cholangitis (PBC)/primary sclerosing cholangitis (PSC) in both groups, without significant differences (p = 0.35). Further, the prevalence of inflammatory bowel disease did not differ between groups (p = 0.68). Median time to de novo CRC was comparable (RIM: 12.5 (1.0-29.0) years/maintenance: 11.0 (0.9-27.0) years, p = 0.44). Furthermore, the stage of malignancy using the UICC classification showed no significant difference between groups; most patients were diagnosed with local tumor stages I/II: 14 (70.0%) patients in the group with restrictive IS management and eight (72.8%) in those with an unaltered IS regimen (p = 0.36). Table 1 shows an overview of patient characteristics including oncological parameters. Here, no statistically significant differences between the two groups were found. Additionally, no rejection or loss of graft occurred in the group undergoing further reduction of IS, and thus no patient received a re-installment of a previous IS regimen. Median survival from initial diagnosis was 83.46 (8.4-193.1) months in the RIM group and 24.8 (0.5-298.9) months in the maintenance group. At the end of the observation period, four patients (20.0%) had died under restrictive immunosuppression and seven (46.2%) in the group with unaltered IS. The cause of death was CRC in two (20.0%) and five (38.5%) patients, respectively. No significant difference was found in the causes of death between groups (p = 0.38; see Table 1). Comparison using Kaplan-Meier survival analysis showed statistically significant differences in both short-term and long-term survival (log rank = 0.02); see also Figure 1. We did not find improved survival after the diagnosis of CRC for the five (15.2%) patients receiving mTORI before diagnosis compared with those without (log rank 0.13), or for the five (15.2%) with mTORI therapy after diagnosis (log rank 0.29).
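To make the Vasudev scoring explicit, here is a minimal sketch in Python; the unit doses are those listed in the Methods section, while the function name and the example regimen are ours and purely illustrative.

# IS-score after Vasudev et al.: one unit per listed daily dose.
UNIT_DOSE_MG = {
    "prednisone": 5.0,
    "cyclosporine_a": 100.0,
    "tacrolimus": 2.0,
    "mmf": 500.0,
    "sirolimus": 2.0,
}

def vasudev_score(daily_doses_mg):
    """Sum of (daily dose / unit dose) over all administered drugs."""
    return sum(dose / UNIT_DOSE_MG[drug] for drug, dose in daily_doses_mg.items())

# Hypothetical regimen: 4 mg tacrolimus + 500 mg MMF per day
print(vasudev_score({"tacrolimus": 4.0, "mmf": 500.0}))  # -> 2.0 + 1.0 = 3.0 units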
The subgroup analysis of patients with regard to N- and M-status showed trends toward a survival benefit for patients with RIM but did not reach statistical significance, except for short-term survival in patients with M1 status (see Figure 2). Analyzing the survival of patients with or without RIM subgrouped by UICC stage showed no impact in stages I and II, but significantly longer survival for patients with UICC stages III and IV when a restrictive immunosuppressive regimen was conducted after the diagnosis of CRC. Here, median survival was 48.8 (16.2-193.1) months and 3.2 (0.5-55.4) months, respectively (log rank 0.02); see Figure 3. In multivariate analysis using the clinically important variables of age at tumor diagnosis and preexistent cardiovascular disease, together with the oncological staging parameters of the TNM classification and RIM, no statistically significant impact on overall survival after diagnosis of de novo CRC after LT was found. However, a trend regarding the impact of RIM was seen (p = 0.054); see also Table 2. Analyzing the impact of tacrolimus trough levels after diagnosis of CRC, we found significantly improved survival for patients with mean trough levels of <5 ng/mL (minimized exposure) compared with >5 ng/mL (conventional exposure) after diagnosis of de novo CRC (log rank 0.03); see Figure 4.

Discussion
In this study, we analyzed the course of patients with CRC after LT.
Our focus was the current IS and its impact on survival, as its influence is gaining relevance in recent studies on long-term outcome after LT [26-28]. We found only 33 patients in our cohort of over 2700 patients across a span of three decades with a reported manifestation of CRC, yielding a total prevalence of 1.2% and highlighting effective colorectal cancer screening. Studies report an incidence of new CRC in the general population between 30 and 50 per 100,000 per year in western countries, and an incidence of CRC in LT patients of 4.9% was reported by Altieri et al. [29-31]. Given the life-long follow-up of our patients with high compliance, we do not suspect underreporting in our collective but rather excellent patient adherence to our recommended follow-up examinations, which include endoscopies after LT; thus, many precancerous lesions might have been treated before the manifestation of CRC. This notion is supported by the high fraction of UICC stage I and stage II CRC, which formed two thirds of our cohort. Subsequently, curative surgical therapy was available for every patient but one. We found a median occurrence of CRC after LT of 12 years, reflecting the impact of chronic IS and the shift in comorbidities that challenge the aftercare of patients after LT in the long run. Stage-dependent survival rates in our study were comparable with the general population and with LT patients from other reports [13,32]. Staging using the UICC criteria for CRC and the TNM classification demonstrated prognostic value in our cohort, reflecting their importance in decision making [33-36]. As most patients (all but one) were treated with curative intention by surgical resection, the most relevant factor for survival, surgical resectability, could not be assessed in our study; however, an important potential bias was thereby ruled out in favor of assessing the impact of the IS redesign. Evaluating the effect of altered handling of IS after diagnosis of CRC, we found the two groups comparable in all relevant clinical aspects. Thus, the impact of RIM could be assessed with validity. Survival analysis revealed positive effects of further reducing IS after de novo malignancy in LT patients, similar to findings for patients suffering from recurrent HCC after LT and in congruence with the pathophysiology of the administered substances [19,37]. The effect did not reach statistical significance in multivariate analysis, possibly owing to the very small population. Analyzing the effect of RIM in subgroups, we found an impact especially in stages where tumor manifestation was advanced (UICC stages III/IV, M1 status at the time of diagnosis). While the utmost importance undoubtedly lies with the stage-dependent oncological regimen, we hypothesize that the effect of RIM might become evident in cases where overall systemic immune control is overwhelmed, reflecting advanced stages [38-40]. As most patients suffering from CRC after LT are diagnosed years after the initial transplantation with stable liver function (as indicated in our cohort by the feasibility of major visceral operations), we conclude that RIM should be evaluated as an additional oncological measure in this special cohort of patients, with the aim of complete withdrawal. While early withdrawal has been shown to be of only minor success in certain subsets of patients, long-term discontinuation seems to be more favorable and feasible [41-44].
However, in the event of a life-threatening disease associated with a failed immune response, we deem it mandatory to investigate its practicability in every individual in a step-by-step manner [45,46]. Recently, Colmenero et al. presented guidelines from the ILTS-SETH Consensus Conference regarding the incidence and management of DNMs [17]. While they note the overall lack of data and the practical absence of prospective studies, their recommendations reflect this study's findings. The exact approach to reducing IS in LT patients remains partly unclear and always requires knowledge of the individual patient's risk profile, comorbidities and tolerance of different substances and their adverse effects [47-49]. CNIs remain the most important substance class, and all patients undergoing RIM in our study had reductions in this drug class. Additionally, we found a tacrolimus trough-level-dependent survival difference, with beneficial outcomes for patients with a lower CNI burden. In contrast, mTORI are the only immunosuppressants for which anti-proliferative properties are reported, although their clinical impact remains controversial and the optimal regimen is unclear [50-56]. We did not find any impact of mTORI on survival, whether administered before or after the diagnosis of CRC, but the number of patients on mTORI was very low. In this regard, using the IS scale proposed by Vasudev et al. might be misleading, as mTORI are weighted equally to CNI, and in our view the influence of MMF might be overestimated [25]. However, using the IS scale, a low immunosuppressive burden in the overall cohort was shown, reflecting the modern approach of reducing IS after LT to the tolerable minimum. Certain limitations of this study have to be addressed. Its retrospective, three-decade-spanning character certainly entailed different approaches in post-LT management as well as oncological strategies and therapeutic options that were not explored in depth. Additionally, while the low number of patients reflects the rarity of this special constellation, it particularly limits the validity of the subgroup analyses and also restricts the overall statistical analysis. The use of different immunosuppressants over more than 30 years of LT with diverging priorities (preventing rejection at all cost vs. minimizing adverse side effects for the future) is certainly present in this study, and the calculation of the IS-score and the definition of RIM may be imprecise. Meanwhile, strategies for CRC have made enormous progress, and regimens including total neoadjuvant concepts for rectal cancer and targeted therapies for metastatic conditions, as well as extended concepts for colorectal liver metastases, have improved survival for patients immensely [57-61]. We did not evaluate these differentiated oncological strategies, but the distribution of CRC diagnoses over the decades did not differ between our groups, and thus a possible bias from diverging options, even for advanced stages, should be ruled out. It has to be acknowledged that while this study further confirms recent recommendations, it inherits the methodological limitations of the few studies published before on this issue. Thus, the presented collective can only be regarded as an addition to the growing, but still scarce, body of evidence.
Conclusions
In this study, we found a remarkable oncological benefit, with a significant impact on survival for the individual patient, of a restrictive, carefully considered management of IS upon diagnosis of CRC after LT. This observation calls for timely action by the physician in charge after LT, in an individualized manner and in close correspondence with the treating oncologists, as IS reduction can be regarded as an additional oncological measure. To achieve a sound scientific foundation for the reduction of IS in this context, prospective, multi-center data must be acquired, given the rarity of this constellation.

Funding: No financial support or funding related to the presented work was received by the authors.
Institutional Review Board Statement: The study was conducted in accordance with the guidelines of the Declaration of Helsinki and was approved by the local ethics committee of our institution (protocol code EA1/255/20; date of approval: 20 October 2020).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to local ethics committee stipulations and the privacy policy of Charité-Universitätsmedizin Berlin.
Conflicts of Interest: All authors declare no conflicts of interest related to the presented work.
Time functions and K-causality between measures

Employing the notion of a coupling between measures, drawn from optimal transport theory, we study the extension of the Sorkin-Woolgar causal relation K⁺ onto the space P(M) of Borel probability measures on a given spacetime M. We show that Minguzzi's characterization of K⁺ in terms of time functions possesses a "measure-theoretic" generalization. Moreover, we prove that the relation K⁺ extended onto P(M) retains its property of antisymmetry for M stably causal.

Introduction
In causality theory, i.e. the subfield of mathematical relativity studying the causal properties of spacetimes, the fundamental role is played by the binary relations of chronological and causal precedence, called I⁺ and J⁺, respectively. An event p is said to chronologically (causally) precede another event q if they can be connected by means of a future-directed timelike (causal) curve, which is usually denoted p ≪ q (p ≤ q). For a concise review of causality theory we refer the reader to [1], whereas a more detailed exposition can be found e.g. in [2,3,4,5,6]. There are, however, other binary relations studied in causality theory, which offer an alternative way of modelling causal influence. One of the important examples is the relation K⁺, introduced by Sorkin and Woolgar [7] within the programme of reformulating causality theory along topological and order-theoretic lines, which in particular would encompass spacetimes with metrics of low regularity. K⁺ was defined there as the smallest transitive and topologically closed relation containing I⁺. Of course, if the spacetime M is causally simple, then J⁺ is a closed subset of M² and hence K⁺ = J⁺, but without the assumption of causal simplicity only the inclusion K⁺ ⊇ J⁺ holds, thus allowing for causal influence to involve more pairs of events than the causal curves alone can connect. The Sorkin-Woolgar relation turns out to give some non-trivial insight into the property of stable causality, as attested by the following result due to Minguzzi [8] (see also the article by Bernard and Suhr in the present volume):

Theorem 1. Spacetime M is stably causal iff it is K-causal, that is, iff the relation K⁺ is antisymmetric (and hence a partial order).

In addition, it has been shown (also by Minguzzi [9]) that for M stably causal the relation K⁺ can be completely characterized in terms of time functions, i.e. continuous real-valued maps on M which are strictly increasing along every future-directed causal curve. It is worth noting that such maps exist precisely on stably causal spacetimes [10]. Concretely, the characterization is as follows.

Theorem 2. Let M be a stably causal spacetime and let p, q ∈ M. Then
(p, q) ∈ K⁺ ⇔ t(p) ≤ t(q) for every time function t. (1)

Inspired by the causal structure in noncommutative Lorentzian geometry ([11], see also [12] and references therein), we have recently proposed a way to extend the causal precedence relation J⁺ onto the space P(M) of all Borel probability measures on a given spacetime M [13]. The proposed extension employs the notion of a coupling of a pair of measures, drawn from optimal transport theory [14,15] adapted to the Lorentzian setting, and can be successfully used to model the causal time evolution of spatially distributed quantities from classical physics, such as charge or energy densities [16, Section 2].
Surprisingly enough, it turns out to provide a suitable framework also for the study of quantum wave packets [17]. Let us emphasize, however, that the method of extending J⁺ onto P(M) can in fact be applied to any (Borel) relation R ⊆ M² (see Definition 1 below). In particular, in [18] we studied the extension of the Sorkin-Woolgar relation K⁺, addressing the question whether it retains its defining properties of transitivity and closedness, as well as establishing a "measure-theoretic" analogue of characterization (1). Although the first question was answered positively, the characterization was proven only under the stronger assumption of M being causally continuous. What is more, the problem whether K⁺ extended onto P(M) is still antisymmetric for M stably causal was left for future investigations. The aim of the current paper is to fill both of the above-mentioned gaps. To this end, we begin in Section 2 by providing the prerequisite definitions and some general results. Section 3 constitutes the main part of the article, with Theorem 5 answering the question concerning the characterization with time functions, and Theorem 7 tackling the problem of antisymmetry. The (somewhat technical) proofs involve introducing and studying the so-called multi-time orderings, which generalize the time orderings employed in [9].

Preliminaries
From now on, the term "measure" will always stand for "Borel probability measure". Given a pair of measures µ, ν ∈ P(M), we call ω ∈ P(M²) a coupling of µ and ν if the latter two measures are ω's marginals, that is, if (π_1)_♯ ω = µ and (π_2)_♯ ω = ν, where π_i : M² → M, i = 1, 2, denote the canonical projections. We shall denote the set of all such couplings by Π(µ, ν). Borrowing the terminology from Suhr [19, Definition 2.4], let us put forward the following general definition:

Definition 1. Let M be a Polish space and let R ⊆ M² be a Borel binary relation. For any µ, ν ∈ P(M) we say that µ is R-related with ν if there exists ω ∈ Π(µ, ν) such that ω(R) = 1.

Example 1. As a simple illustration of the above concept, consider the equality relation =, which, when regarded as a subset of M², is nothing but ∆(M), where ∆ : M → M², ∆(p) = (p, p), denotes the diagonal map. Then for any µ, ν ∈ P(M), the existence of ω ∈ Π(µ, ν) such that ω(∆(M)) = 1 is a necessary and sufficient condition for µ and ν to be actually equal.

By analogy with J⁺, for any X ⊆ M we introduce the notation R⁺(X) := π_2((X × M) ∩ R) and R⁻(X) := π_1((M × X) ∩ R). Notice that the sets R±(X) need not be Borel even if X is a Borel set. Nevertheless, being projections of Borel sets, they are universally measurable, which means that for any measure µ ∈ P(M) the sets R±(X) are Borel up to a µ-negligible set, and therefore the quantity µ(R±(X)) is well defined [20]. We will need the following powerful characterization of R-relatedness due to Suhr [19, Theorem 2.5].

Theorem 3. Let M be a Polish space and let the relation R ⊆ M² be closed (topologically). Then for any µ, ν ∈ P(M) the following conditions are equivalent.

In this paper we will be dealing with closed preorders, i.e. those relations R ⊆ M² which are reflexive, transitive and topologically closed. We will write µ R ν to express the R-relatedness of µ and ν. Notice that for any compact C ⊆ M the sets R±(C) are closed. As a first general result, we obtain the following alternative characterizations of R-relatedness.

Theorem 4. Let M be a Polish space and let R ⊆ M² be a closed preorder.
For any µ, ν ∈ P(M) the following conditions are equivalent; additionally, conditions 2• and 3• are equivalent to their "past" counterparts 2′• and 3′•.

Proof. We adapt here the first part of the proof of [18, Theorem 2].
1• ⇒ 2•: By the closedness of R, for any compact C ⊆ M the set R⁺(C) is closed and hence Borel. Denoting its characteristic function by χ (which is a Borel map), the inequality χ(p) ≤ χ(q) holds for all (p, q) ∈ R by transitivity. Finally, on the strength of 1•, there exists ω ∈ Π(µ, ν) supported on R. Altogether, one can write
µ(R⁺(C)) = ∫_M χ dµ = ∫_{M²} χ(p) dω(p, q) ≤ ∫_{M²} χ(q) dω(p, q) = ∫_M χ dν = ν(R⁺(C)).
One similarly proves that 1• ⇒ 2′•.
2• ⇒ 3•: Let the set X be as specified in 3•. Taking any compact C ⊆ X, we have R⁺(C) ⊆ R⁺(X) ⊆ X and hence
µ(C) ≤ µ(R⁺(C)) ≤ ν(R⁺(C)) ≤ ν(X),
where in the second inequality we have used 2•. Using the fact that µ, being a Borel measure on a Polish space, is inner regular (a.k.a. tight [20, Lemma 12.6]), we hence obtain that µ(X) ≤ ν(X). The possibility of moving between these conditions and their "past" counterparts relies on the obvious equality µ(Xᶜ) = 1 − µ(X), valid for any µ ∈ P(M) and any Borel set X ⊆ M, and on the following equivalence of inclusions: R⁺(X) ⊆ X ⟺ R⁻(Xᶜ) ⊆ Xᶜ. The latter has been stated in [13, Proposition 1] for J⁺, but the proof conducted there is in fact valid for any relation R.

Main results
In [18], we provided several characterizations of the K-causality relation between measures, some of which were proven to hold for all stably causal spacetimes, whilst others seemed to demand the stronger requirement of causal continuity. Below, however, we show that the latter requirement is in fact redundant. In other words, we upgrade [18, Theorem 2] to the following result (Theorem 5); the numbers 2•, 3• are omitted there, as they refer to the conditions listed in Theorem 4, which of course holds in the special case R := K⁺. Additionally, the following two remarks from [18] are still valid (with their proofs unchanged).

Remark 1. Without loss of (or gain in) generality, in condition 5• the term "bounded" can be replaced with "µ- and ν-integrable", whereas the term "time" can be substituted with "temporal", "smooth time", "smooth causal" or "continuous causal".

In order to prepare the ground for the proof of Theorem 5, we begin by slightly strengthening Minguzzi's Theorem 2 (Theorem 6 below, involving properties (9) and (10)).

Proof. The implication '⇒' in (9) follows trivially from Theorem 2. In order to show the converse, we will demonstrate how to construct a countable family of time functions {t_α : M → (0, 1)}_{α∈N} such that, for any p, q ∈ M, property (10) holds. To begin with, consider the open set M² \ K⁺. By Theorem 2, for any pair (p, q) ∈ M² \ K⁺ we can pick a time function t_{p,q} such that t_{p,q}(p) − t_{p,q}(q) > 0. Notice now that the map (t_{p,q} ∘ π_1 − t_{p,q} ∘ π_2) : M² → R is continuous, and therefore the family of inverse images {(t_{p,q} ∘ π_1 − t_{p,q} ∘ π_2)⁻¹((0, +∞))}_{(p,q)∈M²\K⁺} constitutes an open cover of M² \ K⁺. Since the latter is a separable metric space (being an open subspace of the separable metric space M²), it possesses the Lindelöf property [21, Theorem 16.11], i.e. we can choose a countable subcover {(t_{p_α,q_α} ∘ π_1 − t_{p_α,q_α} ∘ π_2)⁻¹((0, +∞))}_{α∈N} of the above cover.

In [18] it was proven that the Sorkin-Woolgar relation retains its defining properties of transitivity and closedness when extended onto measures. With the help of Theorem 6, we can show that this extension is also antisymmetric (and hence a partial order) provided M is stably causal. In other words, Minguzzi's Theorem 1 still holds in the more general setting of K-causality between measures.

Theorem 7. Let M be a stably causal spacetime.
Then the relation K on P(M) is antisymmetric.

Proof. The major part of the proof can be straightforwardly adapted from that of [13, Theorem 12], where it was proven that the relation J⁺ (extended onto P(M)) is antisymmetric under some mild assumptions on the causal properties of M. In fact, there is only one step of that proof which requires a nontrivial modification. Namely, we need to show here that, for any fixed µ ∈ P(M), the only ω ∈ Π(µ, µ) satisfying ω(K⁺) = 1 is ∆_♯ µ. On the strength of [13, Lemma 4], it suffices to prove that ω(∆(M)) = 1. To this end, observe first that for any f ∈ C_b(M) we have ∫_{K⁺} [f(q) − f(p)] dω(p, q) = 0, which can be obtained by subtracting the identities ∫_M f(p) dµ(p) = ∫_{K⁺} f(p) dω(p, q) and ∫_M f(q) dµ(q) = ∫_{K⁺} f(q) dω(p, q), true by the very assumption on ω.

In the proof of Theorem 5 we will need another kind of causal relation, which generalizes Minguzzi's time orderings [9]. Recall that, given a time function t : M → R, its corresponding time ordering is the (closed total) preorder defined as T⁺[t] := {(p, q) ∈ M² : t(p) ≤ t(q)}. We generalize this definition onto finite collections of time functions in a straightforward way: for a finite family F = {t_1, . . . , t_n} of time functions, the multi-time ordering is T⁺[F] := ⋂_{α=1}^n T⁺[t_α].

In order to show the converse implication of Theorem 8 (the analogue of Theorem 4 for R := T⁺[F]), we will demonstrate that 2• implies the following condition, which is equivalent to 1• by Theorem 4:
µ(T⁺[F](C)) ≤ ν(T⁺[F](C)) for every compact C ⊆ M. (14)
Let us first assume that C is finite. In order to show that inequality (14) holds for C = {q_1, . . . , q_s}, consider the maps T_{k,l} : M → R defined, for any k, l ∈ N, as suitable compositions of the time functions t_1, . . . , t_n with auxiliary functions ϕ⁺_k and ϕ⁻_l. Observe that the sequence (ϕ⁺_k) converges pointwise to the characteristic function of the closed half-line, χ_{[0,+∞)}, whereas the pointwise limit of the sequence (ϕ⁻_l) is the characteristic function of the open half-line, χ_{(0,+∞)}. But this actually means that, for any p ∈ M, the iterated pointwise limit of the sequence (T_{k,l}(p)), calculated first with respect to k and then with respect to l, is nothing but χ_{T⁺[F](C)}(p). Observe now that for any k, l ∈ N the map T_{k,l} is of the form Φ(t_1, . . . , t_n) with Φ ∈ C_b(Rⁿ) componentwise increasing. On the strength of 2•, we thus have that ∫_M T_{k,l} dµ ≤ ∫_M T_{k,l} dν. Invoking Lebesgue's dominated convergence theorem twice, we pass first to the limit k → +∞ and then to the limit l → +∞, thus obtaining (14) for C finite.

Suppose now that C ⊆ M is any compact subset. Our aim is to construct a sequence (C_m)_{m∈N} of finite subsets of M such that ⋂_{m=1}^∞ T⁺[F](C_m) = T⁺[F](C), as this would already complete the proof of (14) by passing to the limit. Two immediate observations follow. First, for any R > 0 the family {V(p)}_{p∈B(C,R)} constitutes an open cover of C (and hence possesses a finite subcover). Indeed, for any q ∈ C one can always choose p ∈ B(q, R) \ {q} which causally precedes q, and thus t(p) < t(q) for any time function t. Second, it is easy to notice that V(p) ⊆ T⁺[F](p). To begin the construction of (C_m), let R_1 := 1 and let {V(p_i^{(m)})}_{i=1}^{s_m} be a finite subcover of the corresponding cover of C, constructed for some m ∈ N. In order to construct C_{m+1}, define first R_{m+1} in terms of the distance from C to M \ ⋃_{i=1}^{s_m} V(p_i^{(m)}), where this distance (calculated with respect to some fixed auxiliary Riemannian metric on M) is positive, since C is compact and M \ ⋃_{i=1}^{s_m} V(p_i^{(m)}) is closed and disjoint from C. Notice that, by construction, for every q ∈ C there exists i ∈ {1, . . . , s_m} such that t_α(p_i^{(m)}) ≤ t_α(q) for any α ∈ {1, . . . , n}. Finally, it remains to prove that ⋂_{m=1}^∞ T⁺[F](C_m) = T⁺[F](C). In order to prove "⊆", take any q ∈ M satisfying ∀ m ∈ N ∃ p_m ∈ C_m ∀ α ∈ {1, . . . , n}: t_α(p_m) ≤ t_α(q). By construction, for any m ∈ N one has dist(C, p_m) < R_m ≤ 2^{1−m} ≤ 1.
This implies that the sequence (p_m), being contained in the precompact set B(C, 1), has a subsequence convergent to some p, which must in fact lie in ⋂_{m=1}^∞ B(C, 2^{1−m}) = C̄ = C. By the continuity of time functions, this means that t_α(p) ≤ t_α(q) for α = 1, . . . , n, and so q ∈ T⁺[F](C). To obtain the converse inclusion "⊇", fix any m ∈ N and recall that the family {V(p_i^{(m)})}_{i=1}^{s_m} covers C.

With the above characterization of multi-time orderings between measures at hand, we are finally ready to prove the characterization of the relation K by means of time functions for M stably causal.

Proof of Theorem 5. Implications 1• ⇒ 4• ⇒ 5• have been established in [18], whereas the remaining implication 5• ⇒ 1• has been proven there only under the additional assumption of the causal continuity of M. In order to show that this implication holds in all stably causal spacetimes, let us first fix a countable family of time functions {t_α}_{α∈N} with property (9). On the strength of 5• and Theorem 8, we obtain that for any n ∈ N there exists ω_n ∈ Π(µ, ν) such that ω_n(T⁺[{t_1, t_2, . . . , t_n}]) = ω_n(⋂_{α=1}^n T⁺[t_α]) = 1. Indeed, observe first that for any k, n ∈ N with k ≤ n we have ω_n(T⁺[t_k]) = 1, where in the first equality we have used property (9).
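As a finite-dimensional illustration of Definition 1 and of the multi-time orderings above: for measures with finite supports, deciding whether some coupling is concentrated on T⁺[F] reduces to a linear feasibility problem. The sketch below (Python with NumPy/SciPy) is ours; the points, weights and "time functions" are illustrative, and the actual objects of the paper live on a spacetime, not on the real line.

import numpy as np
from scipy.optimize import linprog

xs = np.array([0.0, 1.0])              # support of mu
ys = np.array([0.5, 2.0])              # support of nu
mu_w = np.array([0.5, 0.5])            # weights of mu
nu_w = np.array([0.5, 0.5])            # weights of nu
F = [lambda p: p, lambda p: 2.0 * p]   # a toy finite family of "time functions"

# Pairs (x_i, y_j) lying in T+[F], i.e. t(x_i) <= t(y_j) for all t in F.
allowed = [(i, j) for i in range(len(xs)) for j in range(len(ys))
           if all(t(xs[i]) <= t(ys[j]) for t in F)]

# One nonnegative variable per allowed pair; marginal constraints as equalities.
A_eq = np.zeros((len(xs) + len(ys), len(allowed)))
for k, (i, j) in enumerate(allowed):
    A_eq[i, k] = 1.0              # row sums reproduce mu
    A_eq[len(xs) + j, k] = 1.0    # column sums reproduce nu
b_eq = np.concatenate([mu_w, nu_w])

# Zero objective: we only test feasibility (status 0 = a coupling exists).
res = linprog(c=np.zeros(len(allowed)), A_eq=A_eq, b_eq=b_eq)
print("mu is T+[F]-related to nu:", res.status == 0)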
Identification of the Problems during the Implementation of a Thesaurus for the Hindi Language

A well-designed thesaurus proves its worth in applications such as browsing and searching for information in documents. A thesaurus is an important tool, well suited to finding more and/or better terms while writing and reading documents. Such a monolingual thesaurus for the Hindi language is presented in this paper. A thesaurus contains synonyms (words which have basically the same meaning) and antonyms, which are important for many other applications in NLP too. Implementing a thesaurus involves various challenges, the foremost being designing the database of synonyms and antonyms for efficient retrieval of results. This paper therefore describes the various problems that arise during the implementation of a monolingual thesaurus for the Hindi language.

I. INTRODUCTION
Natural language processing (NLP) is concerned with the interaction of computers and human languages. The thesaurus is one of the research topics of NLP. A thesaurus is usually embedded in an application system such as a document analysis system or a text retrieval system. A thesaurus lists words which are grouped together according to similarity of meaning, called synonyms, and sometimes contains their antonyms. A synonym is a word or phrase that is substitutable in a context for another word or phrase; it may be that the substituted word is not the ideal synonym. Monolingual means concerned with only one language, in this case Hindi. A thesaurus is a controlled vocabulary; the primary purpose of vocabulary control is to achieve consistency in the description of content objects and to facilitate retrieval [8]. We turn to a thesaurus when we have an idea, some concept or meaning in mind but are unable to find just the right word that fits our need, or when we want to put more weight behind a concept by using a more appropriate word. A thesaurus also helps us understand the meaning of a term. For example, if a user is not aware of the word "tremor", then by going through the list of synonyms the user may understand it, i.e. earthquake or quake. The paper is divided into seven sections: Section 2 discusses the need for a thesaurus, Section 3 covers related work already done, Section 4 covers the properties and nature of the Hindi language, Section 5 discusses the thesaurus for the Hindi language, Section 6 details the problems faced during the development of the thesaurus for the Hindi language, and Section 7 concludes the paper; references are given at the end.

II. NEED FOR A THESAURUS
A tool like a thesaurus is essential because all documents and queries are expressed in language, and language is complex and ambiguous. Ambiguity means the same word having different meanings in different contexts, e.g. Mercury (the planet) and Mercury (the metal). For example, we know a "cycle" is an object named "cycle", but we can also call a "cycle" a "bicycle" or "push-ride". Likewise, a "beam balance" is also called a "pair of scales", a "balance" or "scales". It can be difficult to decide which one to choose. Methods for solving such language issues are difficult, and some systems do not even attempt to deal with them. Such problems of ambiguity are resolved by a thesaurus, as a thesaurus helps us understand the meaning of a term. It also helps to express and improve the sentences, paragraphs and queries of a document in a better way. Thus, a thesaurus is organized to help us find those words that we want but cannot think of.
III. NATURE OF THE HINDI LANGUAGE
Hindi is an Indo-Aryan language and a national language of India; it shares the title of India's constitutionally recognized national language with English. It is the mother tongue of the "Hindi Belt" of north and central India, and it is the world's fourth most widely spoken language. To write Hindi (हिन्दी), the Devanagari (देवनागरी) script (लिपि/lipi) is widely used. Each Devanagari character represents a syllable, not a single letter, and the script is written from left to right. The alphabet of Devanagari is called 'varNNamaalaa' (वर्णमाला; varnamala), also called 'Aksharmala' (अक्षरमाला; AkShar-maalaa). Vowels and consonants together are called AkShars; the basic units of the writing system are referred to as Aksharas. The shape of an Akshara depends on its composition of consonants and the vowel, and on the sequence of the consonants. India is a country with rich diversity in languages, culture, customs and religions, but language barriers hinder the benefits of the Information Technology revolution in India. There is therefore a need for adequate measures to support natural language processing, so that computer-based systems can be used through a natural language like Hindi and operated by users who know only a regional language. We require a tool which can resolve Indian queries written in Hindi; with the use of a thesaurus, users can also improve their vocabulary. Yet many of the major languages of India have no thesaurus to date.

IV. HINDI THESAURUS
The idea of the Hindi thesaurus is inspired by the English thesaurus. A Hindi thesaurus is an important tool for a country like India, where a very large fraction of the population is not conversant with English and consequently does not have access to the vast store of information that is available in English on the internet. In India, there are also many people who know English but are not fluent enough to formulate their queries in it. Moreover, Hindi is the official language of India. The biggest advantage of a thesaurus is that once we find the correct term, all other relevant terms are grouped together in one place: all of the synonyms for that term, and the antonyms, for when a user wants a term with the opposite meaning. Using a thesaurus routinely can also help expand a writer's vocabulary. Most of the Indian languages have letters that sound largely alike, for example श (sha), ष (sha), स (sa); this makes it difficult to recognize words from their pronunciation. In Microsoft Word, you can look up a word quickly if you right-click anywhere in your document; then, to find synonyms for a specific word, either type the word in the task pane search field or highlight it in your document, and the list of all possible synonyms appears in the context menu. The Hindi thesaurus works likewise. For example, if the user selects and right-clicks on the word "अमृत", then the resultant words are shown in the popup menu, with synonyms as well as antonyms listed as shown in Table 1.

V. PROBLEMS FACED
There are several difficulties that arise while building the Hindi thesaurus. Thesaurus builders should keep in mind all the problems mentioned below.
• Ambiguity: Ambiguity arises when the same word has different meanings in different contexts. An ambiguity which computers cannot easily resolve: वह आम खा रहे। (आम as mango) vs. आम आम आदमी की परिधि से परे हो गया है। (आम as common person).
Here, 'आम' can carry two different meanings: 'mango' and 'common (person)'. Another example is the word "काल", which has two contexts: one meaning "अंधकार" (i.e. darkness) and the second "यमराज" (the god of death).
• Use of half consonants and use of the dot with vowels: Half consonants are half characters; in Unicode, half consonants are formed using the halant (हलंत) (◌्), as in क् ख् ग् घ् च् छ् ण् त् द् ध् न् etc. Some words are written with half consonants, and sometimes the same words are written with the help of a small dot called the anusvaar (अनुस्वार/bindu) placed above the word (◌ं), which nasalizes the vowel sound. For example: बन्द = बंद, लम्बा = लंबा and खण्ड = खंड.
• Different ways to write the same word: Some words with the same meaning can be written differently, for example: पंजाबी = पंाबी, डाक्टर = डॉक्टर = डौक्टर.
• There are many words which contain the "-" special symbol for their continuation.
• In a thesaurus, identifying the words that are semantically related to one another is the major difficult task.
• Selection of the word: There are two ways to select a word to look up in the Hindi thesaurus; Fig. 1 and Fig. 2 show these two ways. There are two main reasons, explained below. The first reason is that, if the document is written in Unicode format, both ways of selecting the word work except in cases where words contain nukta letters as mentioned above. The nukta is treated as a delimiter, which separates the word into two parts if written with a double keystroke. For example, with a double keystroke, लड़की = ल + ड + ◌़ + क + ◌ी, in which (◌़) is considered a delimiter, or end of the word, splitting the word into two parts: ल + ड in the first part and क + ◌ी in the second part. The second reason is that, if the Word document is in a non-Unicode format, the user also has to select the whole word, because in non-Unicode formats many characters are generated with keystrokes of special characters as well as delimiters. When the characters ~ ! @ # $ % ^ * ( ) _ - + { } [ ] \ : ; ' , . / ? are encountered, then, for the same reason, the user needs to select the whole word.
• Context menu: A context menu (also called a contextual, shortcut or popup menu) is a menu in a graphical user interface (GUI) that appears upon user interaction, such as a right-click operation. Text written in a Word document uses different features of MS Word, such as bullets, tables, hyperlinks, etc. When the user selects a word to use the thesaurus facility, different types of context menus are popped up: for example, simple text, text within a table, text with bullets and numbering, and text with hyperlink properties all generate different context menus. It is difficult to recognize the context menu, because there are nearly 180 context menus.

VI. CONCLUSION
This paper provides knowledge about the experiments and their effects in the application of a thesaurus. It also presents the various difficulties that occur during the implementation of the Hindi thesaurus and the major challenges involved. The biggest challenges in constructing such a thesaurus are, therefore, to find the context menu in which the Hindi thesaurus is popped up and to identify the words that are semantically related to one another. To implement a Hindi thesaurus, one should keep in mind all the difficulties mentioned in Section V.
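A minimal sketch of the lookup logic discussed above, with a hand-built toy lexicon (all entries are illustrative). It shows that Unicode NFC normalization alone does not unify anusvara and half-consonant spellings, since बंद and बन्द are distinct codepoint sequences; an explicit variant map is one simple remedy.

import unicodedata

# Toy lexicon: canonical headword -> synonyms and antonyms (illustrative).
LEXICON = {
    "अमृत": {"synonyms": ["सुधा", "पीयूष"], "antonyms": ["विष"]},
    "बंद": {"synonyms": ["रुद्ध"], "antonyms": ["खुला"]},
}
# Orthographic variants mapped onto the canonical headword.
VARIANTS = {"बन्द": "बंद"}

def lookup(word):
    # NFC makes canonically equivalent strings byte-identical, but it does
    # NOT merge anusvara vs. half-consonant spellings -- hence VARIANTS.
    w = unicodedata.normalize("NFC", word.strip())
    w = VARIANTS.get(w, w)
    return LEXICON.get(w, {"synonyms": [], "antonyms": []})

print(lookup("बन्द"))  # resolved via the variant map -> the entry for बंद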
Exploring the Distribution of 3D-Printed Simulator Designs Using Open-Source Databases to Facilitate Simulation-Based Learning Through a University and Nonprofit Collaboration: Protocol for a Scoping Review

Background: Advancements in technology have enhanced education, training, and application in health care. However, limitations are present surrounding the accessibility and use of simulation technology (eg, simulators) for health profession education. Improving the accessibility of technology developed in university-based research centers by nonprofit organizations (NPOs; eg, hospitals) has the potential to benefit the health of populations worldwide. One example of such technology is 3D-printed simulators.

Objective: This scoping review aims to identify how the use of open-source databases for the distribution of simulator designs used for 3D printing can promote credible solutions for health care training while minimizing the risks of commercialization of designs for profit.

Methods: This scoping review will follow the Arksey and O'Malley methodological framework and the Joanna Briggs Institute guidance for scoping reviews. Ovid MEDLINE, CINAHL, Web of Science, and PsycINFO will be searched with an applied time frame of 2012 to 2022. Additionally, gray literature will be searched, along with reference list searching. Papers that explore the use of open-source databases in academic settings and the health care sector for the distribution of simulator designs will be included. A 2-step screening process will be applied to titles and abstracts, then full texts, to establish paper eligibility. Screening and data extraction of the papers will be completed by 2 reviewers (MS and SS) for quality assurance. The scoping review will report information on the facilitation of distributing 3D-printed simulator designs through open-source databases.

Results: The results of this review will identify gaps in forming partnerships between NPOs and university-based research centers to share simulator designs. The scoping review will be initiated in December 2024.

Conclusions: The information collected will be relevant and useful for stakeholders such as health care providers, researchers, and NPOs for the purpose of overcoming the gaps in research regarding the use and distribution of simulation technology. The scoping review has not been conducted yet; therefore, there are currently no findings to report.
International Registered Report Identifier (IRRID): PRR1-10.2196/53167

Introduction
Simulation-based education (SBE) is a rapidly growing field that requires sufficient training and expertise from health care providers, who play essential roles in delivering safe and adequate care to patients [1]. Integrating health care provider training in various fields of health care facilitates advancements in technology and improves the performance of health care providers, strengthening both experiential and educational practices [1]. For decades, advancements in technology, such as the introduction of medical printing and digital technology, have promoted effective education, training, and application in health care [2]. To optimize the use of technology for adequate practice and services in health care, simulation techniques have been studied to explore the potential benefits of using 3D-printed simulators to train health care providers and the subsequent benefits for patient health and safety [3]. This education-based training method tests the technical, nontechnical, and clinical skills that are critical for health care providers to apply during patient-provider interactions [3]. Enhancing the accessibility of technology developed in university-based research centers and NPOs, by targeting the barriers that limit the evolution of modern innovations, has the potential to benefit the health of many communities worldwide [2]. Most importantly, it reflects the importance of bridging the gap between theory and application in health care and educational settings, as simulation has the potential to limit medical errors [3].

SBE is an educational strategy used to improve training and assessment for health care providers [4]. SBE offers hands-on experience through interaction with simulators that mimic real-world scenarios, developing the expertise of health care providers and improving the quality of care that patients receive [4]. SBE in health care is emerging as a crucial educational modality, as it enables learners to improve their proficiency through experiential learning using 3D-printed simulators [3]. It provides the replication of a real task without impairing the time and safety of patients [4]. The purpose of 3D-printed simulators is to supplement, not replace, existing technologies to evolve the understanding of SBE in health care [5]. In addition to educational objectives, 3D-printed simulator designs can be used for implants, prosthetics, tissue or organ modeling, and therapeutic testing, and they also have the potential to help patients understand their health condition by means of visual models [5].
The development of effectively designed simulation technology strengthens training and education practices through the use of adequate resources in university-based research centers and NPOs [6]. Currently, there is discussion surrounding the execution of SBE from small- to large-scale interventions to be used globally [7]. However, to optimize the sustainability of simulation technology, investments in training and expertise must be targeted [7]. The consistent use of SBE creates positive patient outcomes, as it increases health care professionals' confidence in real-world situations [6]. Despite the acknowledgment of experiential learning, limited information was found on targeting the barriers surrounding training and expertise to support the use of simulation techniques. The lack of research on recognizing credible solutions for health care training and experiential learning regarding patient care through simulation techniques limits the potential of transferring theory into practice [8]. For example, it was found that students of entry-level surgical technology have no experience in the operating room, making it beneficial for health systems and schools to incorporate SBE into their curricula to allow students hands-on experience and an opportunity to practice before interacting with patients [7]. It is fundamental that high-quality training for health care providers is available in both low- and high-income environments to strengthen the use of SBE globally [8]. The implementation of simulation technology relies predominantly on the capacity and availability of resources to support the expansion of SBE. The cost of acquiring resources is one of the main limitations of this intervention, posing conflicts in stakeholder interactions due to the expenses of SBE [6]. Although organizations are beginning to incorporate 3D printing for health care purposes, industries are not moving fast enough to supply the instruments and resources needed to stimulate its universal use [6]. To address this, a consistent assessment of the resources required for SBE, including determining what is necessary to expand the use of simulation technology, the cost of distributing resources, and what resources are available, will benefit stakeholders from university-based research centers and NPOs [8].

Despite the benefits regarding the use of simulator designs in health care, the lack of optimal use and distribution of digital designs limits the global evolution of SBE augmented by 3D printing [9]. The use of databases as an open-source network to store data and information enables the management of and collaboration on simulator designs through varying repositories [10]. Databases have the potential to facilitate the process of sharing designs to improve health care training by making designs accessible to varying institutions such as university-based research centers, hospitals, and other NPOs. While 3D-printable object databases that are readily available to the public are beneficial in introducing and contributing to designs, limitations are present in terms of the distribution of the simulators due to cost, accessibility, space, time, and expertise [10]. Forming partnerships between appropriate stakeholders can support institutions that do not have the design expertise to produce their own simulator designs. Appropriate administration of the use of simulator designs by different institutions needs to be examined to optimize and strengthen the use of the simulators.
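One way such administration could be made concrete is to attach explicit rights metadata to every shared design. The record below is a hypothetical sketch (all field names and values are ours); CC BY-NC-SA 4.0 is named as one real license option that permits sharing and adaptation among partner institutions while barring commercial reuse.

import json

# Hypothetical metadata record for a shared 3D-printed simulator design.
design_record = {
    "title": "Lumbar puncture task trainer, adult",          # illustrative
    "version": "1.2.0",
    "files": ["lumbar_trainer_base.stl", "lumbar_trainer_insert.stl"],
    "license": "CC-BY-NC-SA-4.0",                            # noncommercial share-alike
    "rights_holder": "Example University Simulation Lab",    # hypothetical
    "intended_use": "simulation-based education",
    "print_settings": {"material": "PLA", "layer_height_mm": 0.2},
}
print(json.dumps(design_record, indent=2, ensure_ascii=False))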
The ability to share designs of 3D-printed simulators on different platforms benefits the use of these designs, primarily by regions that experience challenges in developing a stronger workforce capacity to improve the delivery of health care services. Given that there are several open-source repositories, guidelines must be met to understand the purpose and use of these databases by varying institutions in order to protect the intellectual property (IP) of designs [10]. Building partnerships among university-based research centers, hospitals, and NPOs may enhance the distribution of 3D-printed simulator designs using databases that are accessible across many institutions [9]. However, it requires critical attention to how these designs can be freely used by other organizations while also protecting them from being commercialized for profit. The need to adapt IP laws to constant changes in technology is an important measure that must be recognized for the security and accountability of designs published on software systems [10]. The protection of repositories requires recognition surrounding the verification and identification of the initial rights holder of the designs and frameworks published on databases [10]. Currently, limited research has been conducted on the use of databases and repositories across academic institutions to share designs and on their use in multi-institutional partnerships. The objective of this scoping review is to examine the current scope of literature regarding the use of databases and repositories to store and manage simulator designs among academic institutions, hospitals, and NPOs.

Methods

Overview
For this scoping review, the Arksey and O'Malley [11] methodological framework will be used, following the five stages of conduct: (1) determining the research question, (2) identifying relevant literature, (3) selecting studies, (4) charting data, and (5) reviewing, synthesizing, and reporting the results. The scoping review will follow the Joanna Briggs Institute guidelines for conducting a scoping review [12], as well as the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist (Multimedia Appendix 1 [13]) [12].

Stage 1: Identifying the Research Question
The purpose of this scoping review is to explore the current literature on the distribution of 3D-printed simulator designs through open-source databases and to determine how to implement the sharing of designs among academic institutions, hospitals, and NPOs while preventing the commercialization of designs for profit. The research question, developed based on this purpose and in consultation with the research team, is: what is the nature and breadth of research examining the use of databases and repositories to store, manage, and distribute 3D-printed simulator designs among academic institutions, hospitals, and NPOs? Preliminary searches indicate that sufficient research has not been conducted on the use of databases across various institutions and between organizations. Barriers exist in protecting the external use and commercialization of designs, limiting the use of databases to share 3D-printed simulator designs. The objectives of this scoping review are to (1) identify which open-source databases can be used to build partnerships with stakeholders to address the facilitation of SBE and (2) determine how to protect IP shared using open-source databases from being commercialized.
Stage 2: Identifying Relevant Studies

The published literature databases that will be searched for this scoping review are Ovid MEDLINE, Web of Science, Elsevier ScienceDirect, and IEEE Xplore. Free-text codes and database-specific subject headings will be used to create the search strategy using concepts from the research question. The keywords extracted from the research question include "simulation," "educational institute," "database," "healthcare," and "3D-printed models." Synonyms of these keywords will also be used in the search strategy. A search strategy has been drafted on Ovid MEDLINE and is viewable in Multimedia Appendix 2. The search will involve the selection of papers based on specific criteria, such as the publication year and the language of the paper, to limit the searches and establish information relevant to the research question.

Gray literature databases will also be searched, as these may provide information on studies that are limited in published literature databases. The gray literature sources that will be searched include OpenGrey, Grey Matters, Google Scholar, and Google. The reference lists of relevant papers selected for the study will also be manually searched to find additional papers relevant to the research question. Due to the demand for SBE in health care throughout the most recent decade, a time frame of 10 years, from 2012 to 2022, will be placed on the search to capture papers with the latest findings on the distribution of simulation techniques using databases. In addition, the search will be confined to papers published in the English language. The search strategy has been developed in consultation with a health science librarian, as recommended by the Peer Review of Electronic Search Strategies guidelines [14].

Stage 3: Study Selection

A 2-step screening process will be conducted to refine the selected papers. First, titles and abstracts will be screened to identify the purpose and relevance of the publications, with papers that do not fall within the inclusion criteria being removed from the review. Second, a full-text screening will be conducted on the publications selected in the first step, applying the same inclusion criteria to select papers for the review. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram will be used to summarize the screening process. The selection of papers will involve applying precise inclusion and exclusion criteria to each publication to identify papers relevant to the research question and to explore papers with information on the suggested limitations. To be included in the review, papers must address the use of simulation education in health care, discuss the use of a database or digital repository, discuss the barriers that restrict the distribution of 3D-printed simulator designs, focus on postsecondary education, and incorporate university-based research centers and NPOs. Papers will be excluded if they were published before the year 2012, are not in the English language, or focus on kindergarten to grade school education.

Stage 4: Charting the Data

Elements of the selected papers that will be extracted are details of the author, publication year, country of publication, the purpose of the study, the study design, details of participants, sentences describing IP, and key information that addresses the research question (the database used and the purpose of using the database).
Stage 5: Collating, Summarizing, and Reporting the Results

The results on the use of simulation technology in health care and education regarding the facilitation of distributing 3D-printed simulator designs through open-source databases will be reported in the scoping review. Information regarding the sharing of designs between NPOs while preventing the commercialization of designs for profit will also be identified. The reported information will be organized and presented in tables and graphs where necessary and will include both qualitative and quantitative data, with information summarized descriptively in the text. The findings from the papers that address the distribution of 3D-printed simulator designs, as well as the barriers, will be synthesized and grouped into specific themes. Subthemes will be created using information from papers that is applicable to the research gap and the distribution process.

Quality Assurance

EndNote 20 (Clarivate Analytics) is a reference management program that will be used to manage duplicate checking. EPPI-Reviewer software (EPPI Centre), used to screen, analyze, and select papers, will be used to extract and chart the data [15]. References from EndNote 20 will be imported into EPPI-Reviewer, where the screening and selection of papers, analysis, and reporting will be managed. To screen and chart the data, 2 reviewers will participate in the process to maximize the validity of the results. The 2 reviewers will contribute to the screening and selection of papers individually, based on relevant titles and abstracts from the search results, while referring to the eligibility criteria. The reviewers will discuss their findings for a subset of papers and make changes to the eligibility criteria as required, with a third reviewer settling any disagreements between the primary 2 reviewers. The first reviewer will then screen the remaining references and discuss any concerns with the second reviewer. Following the screening of papers, the first reviewer will extract data from 100% of the studies, and the second reviewer will extract data from 2 random samples of 5% each from the selected studies (a small illustrative sketch of this sampling is given at the end of this section). This will be done to ensure adequate charting quality.

Ethical Considerations

This scoping review will identify the current use of open-source databases in academic settings and the health care sector and build upon the importance of collaborating with partners for the distribution of simulator designs to promote solutions in health care and education. The results of this scoping review will provide an understanding of the gaps in existing literature surrounding the distribution of 3D-printed simulator designs. In addition, the results of this review will demonstrate the potential barriers in establishing partnerships with NPOs and university-based research centers to share designs globally. The information collected will be relevant and useful for stakeholders such as health care providers, researchers, and NPOs for the purpose of overcoming the gaps in research regarding the use and distribution of simulation technology. The results of this scoping review will be submitted to an academic peer-reviewed journal for publication. Ethical approval is not required for this scoping review, as the data are gathered from publications in the public domain.
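As a small illustrative sketch (not part of the protocol itself; the study identifiers are invented), the verification sampling described in the Quality Assurance subsection, that is, 2 independent random samples of 5% each for the second reviewer, could be drawn as follows:

```python
import random

def verification_samples(study_ids, fraction=0.05, k=2, seed=1):
    """Draw k independent random samples of `fraction` of the included studies."""
    rng = random.Random(seed)
    size = max(1, round(fraction * len(study_ids)))
    return [rng.sample(study_ids, size) for _ in range(k)]

# usage: with 120 included studies, each verification sample contains 6 studies
print(verification_samples([f"study-{i:03d}" for i in range(120)]))
```

Fixing a seed makes the draw reproducible, which is useful if the sampling ever needs to be audited.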
Results

The scoping review will be initiated in December 2024. The results on the implementation of simulation technology in health care and education regarding the use and distribution of 3D-printed simulator designs through open-source databases will be reported in the scoping review. Based on prior experience in a research laboratory at Ontario Tech University, Oshawa, Ontario, Canada, partnerships were formed with organizations to share and distribute 3D-printed simulator designs using open-source databases. Due to the challenges in sharing these resources through the database, it was determined that more research needs to be conducted on this topic.

Discussion

This scoping review will explore the current literature on the distribution of 3D-printed simulator designs through open-source databases to facilitate the sharing of designs among academic institutions, hospitals, and NPOs. The findings from this scoping review will provide an understanding of the limitations of the existing literature regarding the distribution of 3D-printed simulator designs and identify the potential barriers in establishing partnerships with NPOs and university-based research centers to share designs globally. The findings will be used to create robust partnerships with stakeholders to ensure that the delivery of SBE is effective, reliable, and accessible in educational environments. This will increase the sharing of designs while building connections with organizations to address the implementation of SBE and identify the gaps in research surrounding potential solutions in health care and education.

The scoping review has not been conducted yet. Therefore, there are currently no findings regarding the distribution of 3D-printed simulator designs to report on. However, it is expected that the scoping review will help identify the barriers surrounding how to acquire resources to support the distribution of simulation technology. The scoping review will be the first to explore current literature to determine how to facilitate the distribution of simulation technology between partners such as NPOs and university-based research centers using open-source databases. Papers will be selected from 4 published literature databases and 3 gray literature databases. Preliminary searches indicate that there is a lack of findings regarding the distribution of simulator designs, limiting information on the implications and comparisons to existing literature. There is also limited information on the databases that are used to share 3D-printed simulator designs, which is important in order to determine how the designs can be used by potential stakeholders and to expand the use of simulation technology within organizations.
2024-04-05T18:43:35.737Z
2023-09-27T00:00:00.000
{ "year": 2024, "sha1": "a4b1eb10c6688eb971784f655ffc095e4b363388", "oa_license": "CCBY", "oa_url": "https://doi.org/10.2196/53167", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "821f89c2c9c14e87cc91c91effe154d1720bcf8d", "s2fieldsofstudy": [ "Education", "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
119699722
pes2o/s2orc
v3-fos-license
A PDE approach to a 2-dimensional matching problem

We prove asymptotic results for 2-dimensional random matching problems. In particular, we obtain the leading term in the asymptotic expansion of the expected quadratic transportation cost for empirical measures of two samples of independent uniform random variables in the square. Our technique is based on a rigorous formulation of the challenging PDE ansatz by S. Caracciolo et al. (Phys. Rev. E 90, 012118, 2014) that "linearises" the Monge-Ampère equation.

Introduction

Optimal matching problems are random variational problems widely investigated in the mathematical and physical literature. Many variants are possible, for instance: the monopartite problem, dealing with the optimal coupling of an even number $n$ of i.i.d. points $X_i$; the grid matching problem, where one looks for the optimal matching of an empirical measure $\frac{1}{n}\sum_i \delta_{X_i}$ to a deterministic and "equally spaced" grid; the closely related problem of optimal matching to the common law $m$ of the $X_i$; and the bipartite problem, dealing with the optimal matching of $\frac{1}{n}\sum_i \delta_{X_i}$ to $\frac{1}{n}\sum_i \delta_{Y_i}$. See the monographs [Y98] and [T14] for much more information on this subject. In addition to these problems, one may study the optimal assignment problem [C04], where the optimization involves also the weights of the Dirac masses $\delta_{X_i}$, and the closely related problem of transporting Lebesgue measure to a Poisson point process [HS13], which involves in the limit measures with infinite mass.

In this paper we focus on two of these problems, namely optimal matching to the reference measure and the bipartite problem. Denoting by $D$ the $d$-dimensional domain and by $m \in \mathcal{P}(D)$ the law of the points $X_i$, $Y_i$, the problem is to estimate the rate of convergence to 0 of the expected matching costs
$$c_{n,p,d} := \mathbb{E}\big[W_p^p(\mu_n, m)\big] \quad\text{(and its bipartite analogue } \mathbb{E}\big[W_p^p(\mu_n, \nu_n)\big]\text{)}, \qquad (1.1)$$
where $\mu_n$, $\nu_n$ are the empirical measures of the two samples and $p \in [1,\infty)$ is the power occurring in the transportation cost $c = d^p$ (also the case $p = \infty$ is considered in the literature, see for instance [SY91] and the references therein), finding tight upper and lower bounds and, possibly, proving existence of the limit of the renormalized quantities as $n \to \infty$.

The typical distance between points is expected to be of order $n^{-1/d}$, and therefore it is natural to guess that the quantities $c_{n,p,d}$ introduced in (1.1) behave as $n^{-p/d}$. However, it is by now well known that this guess is correct for $d \ge 3$, while it is false for $d = 1$ and $d = 2$. Despite plenty of heuristic arguments and numerical results, these are (as far as we know) the main results that have been rigorously proved (we focus here on the model case when $m$ is the uniform measure, and we do not distinguish between optimal matching to $m$ and the bipartite problem), denoting $a_n \sim b_n$ if $\limsup_n a_n/b_n < \infty$ and $\limsup_n b_n/a_n < \infty$:

• when $D = [0,1]$ or $D = \mathbb{T}^1$, then $c_{n,p,1}/n^{-p} \sim n^{p/2}$ and, when $p = 2$, $\lim_{n\to\infty} n\,c_{n,2,1}$ can be explicitly computed, see [CS14];

• when $D = [0,1]^2$, then $c_{n,p,2}/n^{-p/2} \sim (\log n)^{p/2}$, see [AKT84];

• when $D = [0,1]^d$ with $d \ge 3$, then $c_{n,1,d}/n^{-1/d} \sim 1$ and the limit exists [BM02], [DY95]; for general $p > 1$ with $2p > d$ one has $c_{n,p,d}/n^{-p/d} \sim 1$ and the limit exists [B13]; a combination of these results and Hölder's inequality gives $c_{n,p,d}/n^{-p/d} \sim 1$ for $p \in [1,\infty)$ and $d \ge 3$, but it is not known whether the limit exists for $p \in (1, d/2)$.

In the more recent paper [FG15] also non-asymptotic upper bounds have been provided.
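Although the proofs below are entirely analytic, the $d = 2$ statement can be probed numerically. The following minimal sketch (an illustration, not part of the paper: it assumes the Python POT library, works on the flat torus $\mathbb{T}^2$ for simplicity, and uses arbitrary sample sizes) estimates $\mathbb{E}[W_2^2(\mu_n,\nu_n)]$ by Monte Carlo and compares $\frac{n}{\log n}\,\mathbb{E}[W_2^2]$ with the limiting value $\frac{1}{2\pi}$; since the convergence is only logarithmic, the agreement at moderate $n$ is rough.

```python
import numpy as np
import ot  # Python Optimal Transport (pip install pot)

rng = np.random.default_rng(0)

def torus_sq_dist(X, Y):
    """Squared geodesic distance on T^2 = [0,1)^2, with coordinate-wise wrap-around."""
    D = np.abs(X[:, None, :] - Y[None, :, :])
    D = np.minimum(D, 1.0 - D)
    return (D ** 2).sum(axis=-1)

def bipartite_cost(n):
    X, Y = rng.random((n, 2)), rng.random((n, 2))
    w = np.full(n, 1.0 / n)                    # uniform weights: empirical measures
    return ot.emd2(w, w, torus_sq_dist(X, Y))  # exact W_2^2(mu_n, nu_n)

n, trials = 500, 10
est = np.mean([bipartite_cost(n) for _ in range(trials)])
print(n / np.log(n) * est, "vs 1/(2*pi) =", 1 / (2 * np.pi))
```

Because the two empirical measures have equal uniform weights, the linear program solved by `ot.emd2` reduces to an assignment problem, so the computed cost is exact for each sample.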
Notice that some of the results listed above provide not only convergence of the expectations, but also almost sure convergence which, under some circumstances (see for instance [B13]), can be obtained from concentration inequalities as soon as convergence of the expectations is known. In the case $d = 2$, the convergence of $(\log n)^{-p/2}\,c_{n,p,2}/n^{-p/2}$ as $n \to \infty$ and the characterization of the limit are still open problems, particularly in the case $p = 1$ [T14, Research problem 4.3.3].

Our interest in this subject has been motivated by the recent work [CLPS14] where, on the basis of an ansatz, very specific predictions on the expansion of $c_{n,p,d}$ have been made on $\mathbb{T}^d$, for all ranges of dimensions $d$ and powers $p$. In brief, the ansatz of [CLPS14] is based on a linearisation ($\rho_i \sim 1$ in $C^1$ topology, $\psi \sim f + \frac{1}{2}|x|^2$ in $C^2$ topology) of the Monge-Ampère equation
$$\rho_1(\nabla\psi)\,\det\nabla^2\psi = \rho_0$$
(which describes the optimal transport map $T = \nabla\psi$ from the measure with probability density $\rho_0$ to the one with density $\rho_1$), leading to Poisson's equation $-\Delta f = \rho_1 - \rho_0$. This ansatz is very appealing, but on the mathematical side it poses several challenges: because the energies involved are infinite for $d \ge 2$ (the measures being Dirac masses), because this procedure does not provide an exact matching between the measures (due to the linearisation), and because the necessity of giving lower bounds persists, as matchings provide only upper bounds. While we are still very far from justifying rigorously all predictions of [CLPS14] (see also Section 6 for a discussion on this topic), we have been able to use this idea to prove existence of the limit and to compute it explicitly in the case $p = d = 2$, in agreement with [CLPS14]:
$$\lim_{n\to\infty} \frac{n}{\log n}\,\mathbb{E}\big[W_2^2(\mu_n,\nu_n)\big] = \frac{1}{2\pi}, \qquad (1.2)$$
together with the analogous statement for the matching of a single sample to the reference measure, where the limit is $\frac{1}{4\pi}$.

In our proof the geometry of the domain $D$ enters only through the (asymptotic) properties of the spectrum of the Laplacian with Neumann boundary conditions; for this reason we are able to cover also abstract manifolds (where another example of interest could be the two-dimensional sphere). Even though in dimension $d = 1$ (but mostly for the case $D = [0,1]$) a much more detailed analysis can be made, see for instance Remark 4.2, we include proofs and statements of the 1-d case, to illustrate the flexibility of our synthetic method.

Let us give some heuristic ideas on the strategy of proof, starting from the upper bound. In order to obtain finite energy solutions to Poisson's equation we study the regularized PDE
$$-\Delta f_{n,t} = u_{n,t} - 1, \qquad (1.4)$$
where $u_{n,t} - 1$ is the density of $P_t^*(\mu_n - m)$ and $P_t^*$ is the heat semigroup with Neumann boundary conditions, acting on measures. Then, choosing $t = \gamma n^{-1}\log n$ with $\gamma$ small, we make only a small error in the estimation from above of $c_{n,2,2}$ if we replace $\mu_n$ by its regularization $P_t^*\mu_n$. Eventually, we use Dacorogna-Moser's technique (see Proposition 2.3) to provide an exact coupling between $P_t^*\mu_n$ and $m$, leading to an estimate of $W_2^2(P_t^*\mu_n, m)$ from above by a weighted Dirichlet energy of $f_{n,t}$. To conclude, we have to estimate very carefully how much the weight in front of $|\nabla f_{n,t}|^2$ differs from 1; this requires in particular higher integrability estimates on $|\nabla f_{n,t}|$.

Let us consider now the lower bound. The duality formula (2.1) is the standard way to provide lower bounds on $W_2$: given $\phi$, the best possible $\psi = Q_1\phi$ compatible with the constraint is given by the Hopf-Lax formula (2.2). Choosing again $\phi = f_{n,t}$ as in the ansatz, we are led to estimate the dual functional carefully in events of the form $\{|u_{n,t} - 1| \le \eta\}$ (whose probabilities tend to 1).
We do this using Laplacian estimates and the viscosity approximation of the Hopf-Lax semigroup provided by the Hopf-Cole transform. In the bipartite case, the result can be obtained from the previous ones playing with independence. Heuristically, the random "vectors" pointing from m to µ n and from m to ν n are independent, and since P(D) is "Riemannian" on small scales when endowed with the distance W 2 , we obtain a factor 2, as in the identity E[(X − Y ) 2 ] = 2 Var(X) when X, Y are i.i.d. random variables. Interestingly, the rigorous proof of this fact provides also the information (1.3) on the mean displacement as function of the position. The paper is organized as follows. In Section 2 we first recall preliminary results on the Wasserstein distance and the main tools (Dacorogna-Moser interpolation, duality, Hopf-Lax semigroup) involved in the proof of the upper and lower bounds. Then, we provide moment estimates for √ n(µ n − m). In Section 3 we introduce the heat semigroup P t and, in a quantitative way, the regularity properties of P t needed for our scheme to work. We also provide estimates on the canonical regularization of the Hamilton-Jacobi equation provided by the Hopf-Cole transform −σ(log P t e −f /σ ). The most delicate part of our proof involves bounds on the probability of the events sup x∈D |u n,t (x) − 1| > η , η > 0 which ensure that the probability of these events has a power like decay as n → ∞ if t = γn −1 log n, with γ sufficiently large (this plays a role in the proof of the lower bound). Finally, in light of the ansatz of [CLPS14], we provide a formula for where f n,t solves the random PDE (1.4), and prove convergence of the renormalized quantity as n → ∞, if t ∼ n −1 log n. Section 4 provides the proof of our main result, together with Theorem 4.1 dealing with the simpler case d = 1. We first deal with the optimal matching to m, and then we deal with the bipartite case. In Section 5 we recover the result found in [AKT84] as a consequence of our estimates via a Lipschitz approximation argument. Finally, Section 6 covers extensions to more general classes of domains and open problems, pointing out some potential developments. Acknowledgment. The first author warmly thanks S. Caracciolo for pointing out to him the paper [CLPS14] and for several conversations on the subject. Wasserstein distance Let (D, d) be a complete and separable metric space. We recall (see e.g. [AGS08]) that the quadratic Wasserstein distance W 2 (µ, ν) between Borel probability measures µ, ν in D with finite quadratic moments is defined by where Γ(µ, ν) is the class of transport plans (couplings in Probability) between µ and ν, namely Borel probability measures Σ in D × D having µ and ν as first and second marginals, respectively. We say that a Borel map T pushing µ to ν is optimal if This means that the plan Σ = (Id ×T ) # µ induced by T is optimal. The following duality formula will play a key role, both in the proof of the upper and lower bound of the matching cost: In (2.1) above, Lip b (D) stands for the class of bounded Lipschitz functions on D and, for t > 0, Q t φ is provided by the Hopf-Lax formula This formula also provides a semigroup if (X, d) is a length space, and Q t φ ↑ φ as t ↓ 0. We recall a few basic properties of Q t , whose proof is elementary: if φ ∈ Lip b (D) then inf φ ≤ Q t φ ≤ sup φ and (where Lip stands for the Lipschitz constant) In particular with equality if (D, d) is a length space (but we will only need the inequality). 
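To see the effect of this scaling concretely, here is a tiny Monte Carlo sketch (an illustration only; $D = [0,1]$ with $m$ = Lebesgue measure and the test function are arbitrary choices) showing that $\int f\,dr_n = n^{-1/2}\sum_i\big(f(X_i) - \int f\,dm\big)$ has mean zero and $n$-independent variance, as the central limit theorem scaling suggests.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.cos(2 * np.pi * x)      # int f dm = 0 and int f^2 dm = 1/2

for n in (10, 100, 1000):
    X = rng.random((5000, n))            # 5000 independent samples of size n
    vals = f(X).sum(axis=1) / np.sqrt(n) # samples of int f d r_n
    print(n, vals.mean(), vals.var())    # mean ~ 0, variance ~ 1/2 for every n
```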
In (2.3), |∇Q t φ| is the metric slope of Q t φ, which corresponds to the norm of the gradient in the Riemannian setting. We recall that W 2 2 is jointly convex, namely if This easily follows by the linear dependence w.r.t. Σ in the cost function, and by the linearity of the marginal constraint. More generally, the same argument shows that, for a generic index set I, with µ i , ν i and Θ probability measures, under appropriate measurability assumptions that are easily checked in all cases when we are going to apply this formula. The following result is by now well known, we detail for the reader's convenience some steps of the proof from [AGS08]. Proposition 2.1 (Existence and stability of optimal maps). Let D ⊂ R d be a compact set, µ, ν ∈ P(D) with µ absolutely continuous w.r.t. Lebesgue d-dimensional measure. Then: (a) there exists a unique optimal transport map T ν µ from µ to ν. is a simple generalization of Brenier's theorem, see for instance [AGS08, Theorem 6.2.4] for a proof. The proof of statement (b) is typically obtained by combining the stability w.r.t. weak convergence of the optimal plans ν → (Id ×T ν µ ) (see [AGS08, Proposition 7.1.3]) with a general criterion (see [AGS08,Lemma 5.4.1]) which allows to deduce convergence in µ-measure of the maps T h to T from the weak convergence of the plans (Id ×T h ) # µ to (Id ×T ) # µ. Transport estimate Assume in this section that D is a connected Riemannian manifold, possibly with boundary, whose finite Riemannian volume measure is denoted by m, with d equal to the Riemannian distance. The estimate from above on W 2 2 provided by Proposition 2.3 below is closely related to the Benamou-Brenier formula [B00], [AGS08,Theorem 8.3.1], which provides a representation of W 2 2 in terms of the minimization of the action 1 0 D |b t | 2 dµ t dt, among all solutions to the continuity equation d dt µ t + div(b t µ t ) = 0. It is also related to the Dacorogna-Moser scheme, which provides constructively (under suitable smoothness assumptions) a transport map between µ 0 = u 0 m and µ 1 = u 1 m by solving the PDE ∆f = u 1 − u 0 in D, and then using the flow map of the vector field b t = u −1 t ∇f at time 1, with u t = (1 − t)u 1 + tu 0 , to provide the map. We provide here the estimate without building explictly a coupling, in the spirit of [K10] (see also, in an abstract setting [AMS15, Theorem 6.6]), using the duality formula (2.1). This has the advantage to avoid smoothness issues and, moreover, uses (2.6) only in the weak sense, namely Notice that uniqueness of f in (2.6) is obvious, up to additive constants. Existence is guaranteed for u i ∈ L 2 (m) with D (u 1 − u 0 ) dm = 0 under a spectral gap assumption, thanks to the variational interpretation provided by Lax-Milgram theorem. Notice also that with the choice b t = u −1 t ∇f the continuity equation d dt u t + div(b t u t ) = 0 holds, in weak form. We will also need this definition. Bounds for moments and tails In this subsection (D, d) is a complete and separable metric space equipped with a Borel probability measure m. We assume diam D < ∞. For n ∈ N + , let X 1 , . . . , X n be independent and uniformly distributed random variables in D, whose common law is m. Let µ n = 1 n n i=1 δ X i be the random empirical measure. We define the measures r n = √ n(µ n − m), where we use the natural scaling provided by the central limit theorem. 
Our goal is to derive upper bounds for the exponential moments exp(λ D f dr n ) and, as a consequence, tail estimates for and 2 is a quadratic form, therefore we introduce also the associated bilinear form Analogously, we consider also the following quantity Lemma 2.5 (Moment generating function). Let As a consequence Proof. It is sufficient to show the result for λ = 1. The general statement then follows by taking λf in place of f . By the definition of empirical measure we have The equality above gives Then and it is sufficient to compute the second and fourth derivatives with respect to λ at λ = 0 in the expression for E [exp (λ D f dr n )] provided by Lemma 2.5 to obtain, respectively, the first identity in (2.8) and The remaining two identities follow by polarization. For c, η > 0, define the function Notice that F (c, η) is decreasing in c, increasing in η and that the formula shows that cF (c, η) is increasing in c. We will use the function F to estimate the tails of D f dr n . Lemma 2.7 (Tail bound). Let X be a real random variable such that, for some c 1 , c 2 > 0, Then for every η ≥ 0 we have Proof. We have P(|X| > η) ≤ P(X > η) + P(X < −η). For the first term and λ > 0 Hence For the other term, we use the fact that P(X < −η) = P(−X > η) and −X satisfies the same hypothesis. Heat semigroup In this section we add more structure to D, assuming that (D, d) is a connected Riemannian manifold (possibly with boundary) endowed with the Riemannian distance, and that D has finite diameter and volume. Then, we can and will normalize (D, d) in such a way that the volume is unitary, and let m be the volume measure of (D, d). The typical examples we have in mind are the flat d-dimensional torus T d and the d-dimensional cube [0, 1] d , see also Section 6 for more general setups. We denote by P t the heat semigroup associated to (D, d, m), with Neumann boundary conditions. In one of the many equivalent representations, it can be viewed as the L 2 (m) gradient flow of the Dirichlet energy 1 2 D |∇f | 2 dm. Standard results (see for instance [W14]) ensure that P t is a Markov semigroup, so that it is a contraction semigroup in all L p ∩ L 2 (m) spaces, 1 ≤ p ≤ ∞; thanks to this property it has a unique extension to all L p (m) spaces even when p ∈ [1, 2). Moreover, the finiteness of volume and boundary conditions ensure that P t is mass-preserving, i.e. t → D P t f dm is constant in [0, ∞) for all f ∈ L 1 (m) and thus it can be viewed as an operator in the class of probability densities (which correspond to the measures absolutely continuous w.r.t. m). More generally, we can use the Feller property (i.e. that P t maps C b (D) into C b (D)) to define the adjoint semigroup P * t on the class M of Borel measures in D with finite total variation by and to regularize with the aid of P * t singular measures to absolutely continuous measures, under appropriate additional assumptions on P t . Since P t is selfadjoint, the operator P * t can also be viewed as the extension of P t from L 1 (m) to M. We denote by p t (x, y) the transition probabilities of the semigroup, characterized by the formula We denote by ∆ the infinitesimal generator of P t , namely the extension of the Laplace-Beltrami operator on D. Besides the "qualitative" properties of P t mentioned above, our proof depends on several quantitative estimates related to P t . Quantitative estimates on P t . 
We assume throughout the validity of the following properties: there are positive constants d, C sg , C uc , C ge , C rt , C dr and K such that In the sequel, since many parameters and constants will be involved, in some statements we call a constant geometric if it depends only on D through C sg , C uc , C ge , C rt , C dr and K. Notice that (GC) encodes a lower bound on Ricci curvature, see for instance [W11]. Let us draw now some easy consequences of these assumptions. Spectral gap implies that for f ∈ L 2 (D, m) with ∆f ∈ L 2 (D, m) we have the representation (3.1) Ultracontractivity entails that P t : L 1 → L ∞ continuously for t > 0, because Hence, by interpolation P t : L p → L q for any 1 ≤ p ≤ q ≤ ∞ with norms bounded from above by geometric constants. If p = 1, by approximation we also get P * t : M → L q continuously for 1 ≤ q ≤ ∞. Notice also that from the joint convexity of W 2 2 (2.5) we obtain By duality, see [K10], the gradient contractivity property leads to contractivity w.r.t. hence the bound ∇P t g ∞ ≤ ct −1/2 g ∞ for t ∈ (0, 1] and some geometric constant c. Using the representation formula (3.1) and the previous estimate with g = ∆f and g = P t−1 ∆f we obtain (3.3) as In the following lemma we collect some more consequences of the gradient contractivity. Proof. Write G = e −g . Inequality (3.4) follows from the fact that P s is Markov and the inequalities e − max g ≤ G ≤ e − min g . In order to prove (3.5) we use (GC) to get Lemma 3.2 (Viscous Hamilton-Jacobi). Assume that D is a compact Riemannian manifold without boundary. Let σ > 0, f ∈ C(D), and define, for t ≥ 0, Proof. The smoothness of φ σ t for positive times follows by the chain rule and standard (linear) parabolic theory. To check that φ σ solves (3.6), it is sufficient to compare with the terms arising from the application of the diffusion chain rule (3.12) Inequalities (3.7) and (3.8) follow in a straightforward way, respectively from (3.4) and (3.5) of Lemma 3.1, with s = (σt)/2 and g = f /σ. Corollary 3.3 (Dual potential). Assume that D is a compact Riemannian manifold without boundary. For every Lipschitz function Proof. For σ > 0, consider the functions g σ = φ σ 1 solving the initial value problem (3.6) with f replaced by −f . Inequalities (3.4) and (3.5) entail that g σ are uniformly bounded in the space of Lipschitz functions: as σ → 0, we can extract a subsequence (g σ h ) pointwise converging to some bounded Lipschitz function g. Inequality (3.10) gives in the limit the first inequality of the thesis, while (3.11) yields the second one, by dominated convergence. Remark 3.4 (On the equality g = Q 1 (−f )). Recall that the theory of viscosity solutions [CL83], [BC97] is specifically designed to deal with equations, as the Hamilton-Jacobi equations, for which the distributional point of view fails. This theory can be carried out also on manifolds, see [F] for a nice presentation of this subject. Since one can prove (using also apriori estimates on the time derivatives, arguing as in Corollary 3.3) the existence of a function φ t , uniform limit of a subsequence of φ σ t , since classical solutions are viscosity solutions and since locally uniform limits of viscosity solutions are viscosity solutions, the function φ t is a viscosity solution to the HJ equation ∂ t u + 1 2 |∇u| 2 = 0. 
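Lemma 3.2 and Corollary 3.3 can be made concrete on the discretized circle. The sketch below (grid size, test function and viscosity values are arbitrary choices; the time scaling $s = \sigma t/2$ matches the substitution used in the proofs above) computes the Hopf-Cole regularization $\varphi^\sigma_t = -\sigma\log\big(P_{\sigma t/2}\,e^{-f/\sigma}\big)$ at $t = 1$ via Fourier multipliers and compares it with the Hopf-Lax value $Q_1 f$, computed by brute-force inf-convolution as in the formula (2.2); this illustrates the vanishing-viscosity convergence invoked (with the opposite sign convention) in Corollary 3.3 and Remark 3.4.

```python
import numpy as np

N = 1024
x = np.arange(N) / N
f = np.sin(2 * np.pi * x)                    # a smooth test function on T^1

k = np.fft.fftfreq(N, d=1.0 / N)             # integer frequencies
def P(g, s):
    """Heat semigroup P_s on T^1 via the multipliers exp(-4 pi^2 k^2 s)."""
    return np.fft.ifft(np.fft.fft(g) * np.exp(-4 * np.pi ** 2 * k ** 2 * s)).real

def hopf_cole(sigma, t=1.0):
    return -sigma * np.log(P(np.exp(-f / sigma), sigma * t / 2))

d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 1.0 - d)                   # geodesic distance on T^1
Q1f = (f[None, :] + d ** 2 / 2).min(axis=1)  # exact Hopf-Lax inf-convolution at t = 1

for sigma in (0.2, 0.05, 0.02):
    print(sigma, np.abs(hopf_cole(sigma) - Q1f).max())   # error shrinks as sigma -> 0
```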
Then, if the initial condition is −f , the uniqueness theory of first order viscosity solutions applies, and gives that φ t is precisely given by Setting t = 1, this argument proves that actually the function g of Corollary 3.3 coincides with Q 1 (−f ), and that there is full convergence as σ → 0 (see also [C03] for a proof of the convergence, in Euclidean spaces, based on the theory of large deviations). We preferred a more elementary and self-contained presentation, because the weaker statement g ≤ Q 1 (−f ) provided by the Corollary is sufficient for our purposes, and because our argument works also in the more abstract setting described in Section 6 (in which neither large deviations nor theory of viscosity solutions are yet available), emphasizing the role played by the lower Ricci curvature bounds. Density fluctuation bounds Recalling the notation µ n = 1 n n i=1 δ X i , r n = √ n(µ n − m), we now define our regularized empirical measures. Definition 3.5 (Regularized empirical measures). For t ≥ 0 define so that for t > 0 one has The goal of this subsection is to collect apriori estimates on the deviation of r n,t from 0. Lemma 3.6 (Pointwise bound). For y ∈ D and η > 0 one has where F is defined in (2.9). Lemma 3.7 (Deterministic bound). With probability 1 one has Proof. Using (GE) and the fact that the total variation of the measures r n is 2 √ n, we get We shall need another geometric function related to D. Definition 3.8 (Minimal δ-cover). In the sequel, for δ > 0 we denote by N D (δ) be smallest cardinality of a δ-net of D, namely a set whose closed δ-neighbourhood contains D. We pick δ = η 4Cge t 3/2 , so that, by Lemma 3.7, with probability 1 we have (3.17). Let T be a minimal δ-net. Then the condition t 3 ≤ 16C D C 2 ge implies C D δ −2 ≥ 1, where we used also the inequality t ≥ γn −1 log n ≥ η −2/3 /n. From an application of Lemma 3.6 with η/2 instead of η we get Our choice of γ then gives We now report some estimates on the logarithmic mean. Lemma 3.11. For a, b ≥ 0 and q > 0 we have The thesis follows by applying these inequalities to a q and b q . In the following lemma we estimate the logarithmic mean of the densities of µ n,t,c obtained by a further regularization, i.e. by adding to µ n,t a small multiple of m. Proof. Fix x ∈ D and η ∈ (0, 1). By Lemma 3.6 we have where q ∈ (0, 1) depends only on d, C uc , γ and η. In the event {|u n,ε (x) − 1| > η}, using the first inequality in Lemma 3.11 we can estimate the squared difference with the sum of squares to get In the complementary event {|u n,t (x) − 1| ≤ η}, we have |u n,t,c (x) − 1| ≤ (1 − c)η ≤ η and, expanding the squares and using both inequalities in (3.18), we get hence the growth condition on c gives lim sup Letting η → 0 we obtain the result. Energy estimates Retaining Definition 3.5 of r n,t from the previous subsection, here we derive energy bounds for the solutions to the following random PDE: which are uniquely determined up to a (random) additive constant. As we will see (particularly in Section 6), these estimates involve either the trace of ∆ or sums indexed by the spectrum σ(∆) (which contains {0} and, by the spectral gap assumption, satisfies ; it is understood that the eigenvalues in these sums are counted with multiplicity. We recall the so-called trace formula where {u λ } λ∈σ(∆) is an L 2 (m) orthonormal basis of eigenvalues of ∆. The following expansion (3.21) of the trace formula as s → 0 will be useful. In this paper we will only use the leading term in (3.21). 
Proposition 3.13 (Expansion of the trace formula). Let D be a bounded Lipschitz domain in R n with unit volume. Then The same holds if D is a smooth, compact d-dimensional Riemannian manifold with a smooth boundary (possibly empty). Proof. The first statement is proved in [B93]. The second one, also with additional terms in the expansion, in [MKS67]. Lemma 3.14 (Representation formula). Let f n,t be the solution to (3.19). For all t > 0 one has Proof. Using the representation formula g = − ∞ 0 P s ∆g ds with g = f n,t we get f n,t = − ∞ 0 P s r n,t ds, so that The following lemma basically applies only to 1-dimensional domains, in view of the ultracontractivity assumption with d = 1. Lemma 3.15 (Energy estimate and convergence, d = 1). Let f n,t be the solution to If ultracontractivity holds with d = 1 we have also and, in particular, the limit in (3.24) is finite. Proof. The identities (3.24) follow by (3.22) by taking the limit as n → ∞. If ultracontractivity holds with d = 1, we show that the lim sup in (3.24) is finite by splitting the integration in (t, 1) and (1, ∞) in the identity which is a by-product of the intermediate computations made in the proof of Lemma 3.14. In conclusion, for some geometric constant C, one has from which the finiteness of (3.24) readily follows. To show (3.25), we start from (3.23) and estimate with the aid of Lemma 2.6 In order to show that the lim sup of last integral is finite we split the integration in (t, 1) and (1, ∞). For s ∈ (t, 1) we use Putting these estimates together, which is bounded, uniformly in y and t, because P * 1 (δ y − m) 2 ≤ 2 P * 1 M→L 2 . Lemma 3.16 (Renormalized energy estimate and convergence, d = 2). Assume that ultracontractivity holds with d = 2. Let f n,t be the solution to (3.19). If t = t(n) → 0 as n → ∞ and t ≥ C/n for some C > 0, then (3.27) In particular (3.28) Moreover, under the assumptions on D of Proposition 3.13, one has Proof. We will prove first (3.28) as an intermediate step in the proof of (3.27), starting from the representation formula (3.26). For s ∈ (t, 1) we estimate In conclusion, for some geometric constant C, one has E D |∇f n,t | 2 dm ≤ C 1 t s −1 ds + ∞ 1 e −2Csgs ds ≤ C(|log t| + 1), from which (3.28) readily follows. In order to prove (3.29), we notice that the estimates given in the proof of (3.28) show that for any p ∈ (1, ∞) and g with D g dm = 0, we obtain Using the fact that ∂ t P t = ∆P t and that the operators ∆, P t and (−∆) 1/2 commute we have For y ∈ D fixed, consider the operators and notice that In addition, since T t s : L 2 (m) → L 2 (m) is self-adjoint, the kernel K t s is symmetric and Taking the expectation of the integrand, E (−s∆) 1/2 P * s+t r n 2 (y) (−s ′ ∆) 1/2 P * s ′ +t r n 2 (y) = E (T t s r n ) 2 (y)(T t s ′ r n ) 2 (y) Integrating in s and s ′ we obtain Since (P t ) t≥0 is a bounded analytic semigroup, complex interpolation yields that, for p ∈ (1, ∞), (−τ ∆) 1/2 P τ /2 : L p → L p is continuous with norms uniformly bounded for τ ≥ 0 [Y80, Sections X.10-11], hence we have the estimate where in the first equality we used (3.30). We consider Now we split the integrals for s ∈ (t, 2) and s ∈ (2, ∞). In the former interval we use the estimate In the latter interval we use the estimate Putting these estimates together, in the case p = 2 we have, for some geometric constant C. In the case p = 4 we have also This yields In conclusion is uniformly bounded as n → ∞ by the assumptions on t = t(n). Proof of the main result In this section we prove Theorem 1.1. 
In the proof of the upper bound we need only to assume the regularizing properties of P t listed in Section 3; in particular this inequality covers also the case D = [0, 1] 2 and compact 2-dimensional Riemannian manifolds with smooth boundary. In the proof of the lower bound we need also to assume that D has no boundary; by a comparison argument, since the distance in T 2 is smaller than the distance in [0, 1] 2 , we recover also the lower bound for D = [0, 1] 2 . We include also the 1-dimensional case (whose proofs are a bit simpler), which covers the case of the interval and the case of the circle. For brevity we state the result only in the Riemannian case, but the strength of this method relies in the fact that it can be extended to more general 1-dimensional spaces (see also Section 6). In particular, from Euler's formula π 2 = 6 k≥1 k −2 , the limit equals 1/6 for D = [0, 1] and 1/12 for D = T 1 . Remark 4.2. In the case D = [0, 1] and m = L 1 ¬ D we can explicitly compute n E[W 2 2 (µ n , m)] and n E[W 2 2 (µ n , ν n )] as follows (and in particular, the former is identically equal to 1/6). For any fixed n ∈ N, let X (k) and Y (k) denote the order statistics of the random variables ( It is well known that X (k) and Y (k) are distributed according to the beta distribution X (k) ∼ Y (k) ∼ B(k, n + 1 − k). Upper bound Proof. Fix q ∈ (1/2, 1), η ∈ (0, 1) and let t = t(n) = n −2q . For η ∈ (0, 1) consider the event By Proposition 3.9, since W 2 2 (µ n , m) ≤ (diam D) 2 , for n large enough we have Using the Young inequality for products with α > 0 and W 2 2 (µ n , µ n,t ) ≤ C dr t we have To this end, we apply Proposition 2.3 with u 0 = u n,t and u 1 = 1. Since f n,t solves (3.19) from Proposition 2.3 we get In the event A η we have u n,t ≥ 1 − η in D, hence the first inequality in (3.18) gives The previous two inequalities and Lemma 3.15 give In conclusion we have and we obtain the thesis by letting first α → 0 and then η → 0. Proof. Fix γ > 0 and let t(n) = c(n) = γn −1 log n. Let us set µ n,t,c = (1 − c)µ n,t + cm, u n,t,c = (1 − c)u n,t + c as in Lemma 3.12. From the joint convexity of W 2 2 (see (2.4)) we immediately get Using the Young inequality for products with α > 0 and W 2 2 (µ n , µ n,t ) ≤ C dr t, we have We start by estimating the contribution of the first term. To this end we apply Proposition 2.3 with u 0 = u n,t,c and u 1 = 1. Recalling that f n,t solves the PDE ∆f n,t = √ n(u n,t −1) with homogeneous Neumann boundary conditions, In conclusion lim sup and the thesis follows letting first γ → 0 and then α → 0. Theorem 4.6 (Lower bound, d = 2). Assume that ultracontractivity holds with d = 2 and that N D (δ) ≤ Cδ −2 for every δ > 0. Then Proof. By Proposition 3.10, for any η ∈ (0, 1) there is γ > 0 such that, if we let t = t(n) = γn −1 log n, the event A η in (4.2) satisfies P(A c η ) ≤ C/n, for n large enough and some C > 0 independent of n. As in the previous proof, thanks to contractivity it is sufficient to estimate from below lim inf n→∞ n log n E W 2 2 (µ n,t , m)χ Aη . Let f n,t be the solution to (3.19) and define f = −f n,t / √ n, so that ∆f ∞ ≤ η in the event A η . To this function f we associate the potential g given by Corollary 3.3, hence thanks to the duality formula (4.3) we can estimate (in the event A η ) with ω(η) as in (4.4). Since t ≥ C/n for some positive constant C and |log t|/ log n → 1, from Lemma 3.16 we get The bipartite case We prove now the bipartite part of Theorem 1.1. 
It will be convenient to introduce a notation (Ω, P) for the underlying probability space. providing the optimal maps from m to µ n (ω) and ν n (ω) are measurable and independent. Proof. The independence of (X i , Y i ) easily implies that the two measure-valued random variables µ n (ω), ν n (ω) are measurable and independent, where in P(D) we consider the Borel σ-algebra induced by the topology of weak convergence in duality with C(D Proof. If S, T : Ω → L 2 (D, m; R d ) are independent, one has the identity (D, m). By a standard projection argument, and by approximation, we recover the general result. For all ω ∈ Ω the plan (T µ n (ω) , T ν n (ω) ) # m is a coupling between µ n (ω) and ν n (ω). Hence (omitting for simplicity the dependence on ω) and using (4.6) with S = T µn , T = T ν n one has where we used that E W 2 2 (µ n , m) = E W 2 2 (ν n , m) since µ n and ν n have the same law. In particular, combining the inequality in (4.5) (neglecting for a moment the negative term in the right hand side) with the first part of Theorem 1.1, we obtain lim sup (4.7) Next, we deal with lower bounds. It will be sufficient, by a comparison argument, to provide the lower bound only in the flat torus. Proof. Similarly to the proof of Theorem 4.6, for η ∈ (0, 1) we introduce the event whose probability tends to 1 as n → ∞. By the contractivity assumption in W 2 we have W 2 2 (µ n , ν n ) ≥ e 2Kt W 2 2 (µ n,t , ν n,t ), therefore it is sufficient to study the asymptotic behaviour of E W 2 2 (µ n,t , ν n,t )χ Aη . To this end, we let f n,t be the solution to (3.19), g n,t the solution to the same equation with s n,t in place of r n,t and h n,t = f n,t − g n,t . Define h = −h n,t / √ n, so that ∆h = −(r n,t − s n,t )/ √ n and ∆h ∞ ≤ η in the event A η . To this function h we associate the potential k given by Corollary 3.3, hence we can estimate (in the event A η , with ω(η) defined as in (4.4) with f replaced by h) Since h + k ≤ 0, we have therefore, still in the event A η , The proof now concludes as before, noticing that, by independence of µ n and ν n , From the previous result we get which, combined with (4.7), concludes the proof (1.2). By looking at (4.5) we see also that (1.3) holds, and this concludes the proof of Theorem 1.1. A new proof of the AKL lower bound In this section we see how a minor modification of the ansatz of [CLPS14] provides a new proof of the lower bound in [AKT84], written in terms of expectations; the upper bound follows immediately from Theorem 1.1 and Hölder inequality. The following real analysis lemma is well known, we state it for the case of the flat torus. Its proof (see for instance [AF84]) can be obtained by considering the sublevel sets of the maximal function of |∇h|. By the triangle inequality, the same holds for the matching to the reference measure. Proof. As in the proof of the lower bound for p = 2 we can use contractivity, reducing ourselves to estimating from below the Wasserstein distance between the regularized measures µ n,t = u n,t m, ν n,t = v n,t m. Let M > 0 be fixed and set c(n) = M n −1 log n. Let t = t(n) = γn −1 log n with γ sufficiently large and let h n,t be as in the proof of the lower bound in the case p = 2, so that h = h n,t / √ n satisfies where we used the PDE ∆h = u n,t − v n,t solved by h. Now we estimate W 1 (µ n,t , ν n,t ) ≥ 1 c(n) D |∇h| 2 dm − 1 c(n) En ∇h, ∇h − ∇φ dm . By (5.2), the first term is asymptotic to (2πM ) −1 n −1 log n. We will see that, for M sufficiently large, the first term dominates the second one. 
Indeed, we have Open problems and extensions In this section we discuss open problems, the present limitations of our technique, and some potential generalizations. Improvements in the case p = d = 2. In this case, the more demanding prediction of [CLPS14] is lim n→∞ n log n E W 2 2 (µ n , ν n ) − 1 2π log n ∈ R. This is still open, in this connection notice also that our technique for the lower bound requires t = γn −1 log n with γ sufficiently large, while necessarily in the upper bound one is forced to take t = γn −1 log n with γ small. Other open problems regard the distribution of the random variables n log n W 2 2 (µ n , ν n ) and the matching problem involving more general reference measures m (the Gaussian case could be interesting, replacing the heat semigroup with the Ornstein-Uhlenbeck semigroup). Different powers and dimensions. Our proof in the case d = 2 exploits the extra room given by the logarithmic correction to the "natural" scale n −1/d . Let us discuss the difficulties coming from p = 2 and d > 2 separately, of course the problem is even more challenging if both things happen. If d = 2 and p = 1, we have already seen in Section 5 that the proof can be adapted to obtain the tight lower bound of [AKT84]. Via Hölder's inequality, one obtains the tight upper and lower bounds also for 1 < p < 2, and we believe that also the case p > 2 could be covered, by estimating E |∇f n,t | k with k large integer (we did this for k = 2, 4). On the other hand, proving convergence of the renormalized expectations seems to require a more precise scheme, since the gradients of solutions to the Monge-Ampère equation describe the optimal transport map T only when p = 2; in this vein, one could consider (see [ In the case p = 1, an alternative PDE possibility could be given by the construction of the transport density via a q-laplacian approximation in [EG99], q → ∞, which led to the first rigorous proof of the optimal transport map for Monge's problem. If p = 2 and d > 2, the prediction of [CLPS14] is that where c d is not conjectured and the coefficient ξ is explicitly given in terms of the Epstein function. However, our regularization technique seems to fail, even for the purpose of computing c d (namely proving convergence of the renormalized expectations) or getting tight bounds. For instance, in the case d = 3, from (3.21) we get E |∇f n,t | 2 ∼ t −1/2 , and therefore one should choose t ∼ n −4/3 , a regularization time much faster than n −1 , which does not seem to lead to the density bounds on |r n,t |/ √ n needed for the proof of the lower bound. On the other hand, the dispersion estimate used in the proof of the upper bound requires t = o(n −2/3 ), a less demanding condition. A class of abstract metric measure spaces. We already noticed that in our proof the geometry of the domain enters only through the properties of the heat semigroup P t with homogeneous Neumann boundary conditions. As a matter of fact, let us briefly indicate how our proof works, still in the case N = 2, for the class RCD * (K, N ) of "Riemannian" metric measure spaces (X, d, m), extensively studied and characterized in [AGS15], [AMS15], [EKS15]. This class of possibly nonsmooth metric measure spaces, includes for instance all compact Riemannian manifolds without boundary, or "convex" manifolds with boundary, namely manifolds having the property that geodesics between any two points do not touch the boundary (as it happens for compact convex domains in R d ). 
The class $\mathrm{RCD}^*(K,N)$ can be characterized either in terms of suitable $K$-convexity properties w.r.t. $W_2$-geodesics (of the logarithmic entropy for $N = \infty$ [AGS15], of power entropies [EKS15] or of nonlinear diffusion semigroups [AMS15] in the case $N < \infty$), or in terms of Bochner's inequality, very much in the spirit of the Bakry-Émery theory (see [BGL14] for a nice introduction to the subject). In the very recent work [JLZ14], all regularizing properties of $P_t$ needed for our proof to work have been proved in the context of $\mathrm{RCD}^*(K,N)$ spaces. The only missing ingredient in this more abstract framework is the asymptotic expansion of the trace formula provided by Proposition 3.13, but thanks to (3.22) our results can be stated in terms of the limit
$$\lim_{t\to 0^+} \frac{1}{\log t} \sum_{\lambda\in\sigma(\Delta)\setminus\{0\}} \frac{e^{2\lambda t}}{\lambda}$$
whenever it exists.
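In the model case $D = \mathbb{T}^2$, where $\sigma(\Delta) = \{-4\pi^2|k|^2 : k \in \mathbb{Z}^2\}$ with multiplicities, both the leading term of the heat trace and the logarithmic growth behind this limit can be probed numerically. A small sketch (the truncation $K$ and the sampled values of $t$ are arbitrary choices, and the $e^{2\lambda t}$ convention is the one of the display above); since the limit is approached only logarithmically, the sketch estimates its value through difference quotients in $\log t$, which the numerics show to be consistent with $\frac{1}{4\pi}$, i.e. half of the bipartite constant $\frac{1}{2\pi}$:

```python
import numpy as np

K = 600
j = np.arange(-K, K + 1)
JX, JY = np.meshgrid(j, j, indexing="ij")
n2 = (JX ** 2 + JY ** 2).astype(float)   # |k|^2 over the truncated lattice
pos = n2 > 0                             # exclude the zero eigenvalue (k = 0)

def S(t):
    """sum over sigma(Delta)\\{0} of e^{2 lambda t} / lambda on T^2."""
    return -(np.exp(-8 * np.pi ** 2 * n2[pos] * t) / (4 * np.pi ** 2 * n2[pos])).sum()

t = 1e-3
# leading term of the trace: sum_lambda e^{2 lambda t} ~ 1/(8 pi t), so the ratio ~ 1
print(8 * np.pi * t * np.exp(-8 * np.pi ** 2 * n2 * t).sum())

for t in (1e-2, 1e-3, 1e-4):
    # slope of S with respect to log t, approaching 1/(4 pi) ~ 0.0796 slowly
    print((S(t) - S(10 * t)) / np.log(0.1), "->", 1 / (4 * np.pi))
```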
2016-11-15T17:45:12.000Z
2016-11-15T00:00:00.000
{ "year": 2019, "sha1": "fa7bff8ab43b6f7aea9e6b5393d4d40cb583ddb1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1611.04960", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "fa7bff8ab43b6f7aea9e6b5393d4d40cb583ddb1", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
56069640
pes2o/s2orc
v3-fos-license
Autonomous Observations in Antarctica with AMICA

The Antarctic Multiband Infrared Camera (AMICA) is a double-channel camera operating in the 2-28 micron infrared domain (KLMNQ bands) that will allow the characterization and exploitation of the exceptional advantages for astronomy expected from Dome C in Antarctica. The development of the camera control system is at its final stage. After the investigation of appropriate solutions against the critical environment, reliable instrumentation has been developed. It is currently being integrated and tested to ensure the correct execution of automatic operations. Once mounted on the International Robotic Antarctic Infrared Telescope (IRAIT), AMICA and its equipment will contribute to the accomplishment of a fully autonomous observatory.

Introduction

In recent years, great attention has been paid to the new possibilities opened by the exploitation of Antarctica for astronomical observations. A large number of advantages come from the peculiar characteristics of this remote land and, at the same time, new stations are rising in the inner region of the continent, known as the Antarctic Plateau. Among them is Concordia, one of the most advanced and organized permanent bases, built at Dome C within a French-Italian collaboration. Taking advantage of the presence of this station, an ambitious goal is going to be achieved, consisting in the installation of a fully autonomous observatory constituted by IRAIT [1] and the scientific equipment of AMICA [2].

IRAIT is a 0.8 m, F/22 Cassegrain with two Nasmyth foci, built within an Italian-Spanish collaboration among the University of Perugia, the University of Granada (DFTC) and the Institut d'Estudis Espacials de Barcelona (IEEC). Its mechanics was mounted during the 2008-09 summer campaign, while the assembly of the remaining components, the alignment of the mirrors and the integration of the camera system will be accomplished starting from the 2009-10 summer.

The main scientific tasks of AMICA are both the characterization of the Dome C sky for infrared astronomy and the observation of a large variety of astrophysical objects: among them AGB and post-AGB stars and star-forming regions in our and nearby galaxies, but also RR Lyrae stars, nearby brown dwarfs, heavily obscured supernovae and Solar System bodies. Finally, survey-mode observations will be performed for interesting regions of the southern sky (e.g. the Large and the Small Magellanic Clouds).

The robotization of the whole system is a necessary condition due to the extreme climate conditions, as a result of which human activities are reduced and essentially stopped during the Antarctic winter (mainly because of the unavailability of efficient communication means and the total absence of personnel and supply transport). On the other hand, the instrumentation needs reliable solutions to deal with such a peculiar environment, to avoid damage to the components and to minimize any safety risk.

The Antarctic Scenario

In this section we give a brief overview of the pros and cons of the Antarctic environment from both a scientific and a technological point of view. These considerations have led to the development of AMICA and to the adoption of the solutions discussed afterwards.

Sky Properties

Essential requirements for observations at infrared wavelengths are low sky emission and high atmospheric transmission.
As reported in the literature, first estimations of these properties in Antarctica, and in particular in the inner Plateau, are highly promising [3,4]. The extremely low temperatures (with an annual mean of about -55°C) reduce the thermal emission from the sky and prevent the atmosphere from containing a high amount of water vapor. Since most of the absorption is dominated by this component, the near-infrared bands (2-5 μm) are wider than those at a temperate site, and new windows open up in the mid-infrared (beyond 15 μm) which cannot be accessed elsewhere [5]. Other important advantages come from the low level of aerosols and dust, the high stability of the atmosphere, the high percentage of cloud-free time and the elevation of the site (3250 m). For these reasons, a substantial increase in the resulting sensitivity is expected, obtaining, for a given primary mirror size, the same performance of a several times larger telescope, with lower construction and management costs.

Polar Condition

Further important characteristics have to be considered regarding the location of Dome C. Its near-Pole latitude (75°S) allows longer and uninterrupted observations of a maximal field of circumpolar sources (particularly interesting for the presence of peculiar regions such as the Magellanic Clouds and the Galactic Center), even at zenithal angles normally unfavorable for temperate sites. Finally, thanks to the excellent IR properties of the sky, a daily duty cycle of 100% is probably achievable for wavelengths beyond 4 μm, where observations can be carried out even with sunlight.

The Extreme Environment

Unfortunately, the site location and the environmental conditions that are responsible for the great advantages underlined above make it very difficult to operate conventional instrumentation. The main troubles arise from the temperature and pressure values.

Low Temperature

The first problem to face concerns the low temperatures (down to -80°C during wintertime). Although they provide efficient passive cooling (dramatically reducing the instrumental emission, which at infrared wavelengths could exceed the sky flux), the electronic and mechanical systems are dangerously exposed to the risk of damage and malfunction. The investigation of suitable solutions for components that inevitably have to work outdoors, and the insulation of devices with limited operating conditions, are very critical aspects. In fact, they require the development of custom elements accurately tested in a climatic chamber, to simulate the environment in which they will be operated and stored.

Low Pressure

Because of the low pressure (~640 mbar at Dome C), the efficiency of conduction and convection in air is significantly reduced, hence increasing the possible overheating of insulated electronics. This problem has frequently been reported during experiments performed in past Antarctic campaigns [6]. For this reason, a thermal study of the dissipating elements and a careful distribution of hot spots inside the cabinets are fundamental prerequisites for an efficient active conditioning system. Temperature and pressure are also responsible for the limited human activities, in particular during winter, when the difficulty of operating outdoors becomes prohibitive.

Logistic Facilities

The presence of the Concordia base ensures the availability of a large number of facilities for the permanence of the personnel, the visiting researchers and the hosted scientific experiments.
However, some relevant restrictions need to be considered:

- the total amount of electrical supply that can be provided by the station generators is 200 kW (shared among all scientific and logistic needs);
- satellite links are normally used for fax and voice connections; data transfer is very limited in time and speed (~64 kbps) and used mainly for text emails, although some improvements are under study to provide a broadband data connection;
- the site can be reached by plane from the Italian "Mario Zucchelli" coastal station after a 5-hour flight, or by ground from the French coastal station "Dumont d'Urville" through a two-week over-snow traverse (protecting the equipment from jolts, vibrations, thermal shocks and low temperatures);
- the low availability of operators inside the base and the total isolation for about 8-9 months during wintertime.

AMICA Equipment

AMICA is a multiband camera provided with two detectors: an InSb 256x256 CRS-463 Raytheon array and a Si:As 128x128 DRS Tech. array, used for imaging in the NIR (2-5 μm) and MIR (7-28 μm) regions with a set of KLMNQ filters. Dedicated equipment provides all the necessary support to the camera operation. To perform safe and unattended observations in such peculiar conditions, it is necessary to meet all environmental, cryogenic and functional requirements, paying attention to failure recovery and maintenance. After solving a large number of issues, the development of the control system is currently at its final stage. This stage is focused on the integration of the camera with all hardware and software subsystems and on the verification of the correct execution of the automatic procedures that will be cooperatively carried out with IRAIT.

Modularity and Subsystems

The compact configuration allows the whole system to co-rotate with the telescope fork (Fig. 1). All the instrumentation, except for the front-end electronics (bias and clock filters, video signal pre-amplifiers), is thermally controlled in separate insulated boxes. These have been designed to be easily mounted and transported by the cranes used at Dome C. As shown in Fig. 2, the component distribution has been carefully studied (taking into account thermal and size constraints) in order to ensure, as much as possible, easy access and fast replacement in case of malfunction. A schematic description of the AMICA control system (hardware layout and data connections) is shown in Fig. 3: most of the instrumentation is enclosed in thermally conditioned boxes (white), while the cryostat (red) is mounted at the telescope focus. Two small cabinets contain the cryo-cooler cold head and some vacuum system components. The box below encloses the LCU, the read-out electronics, controllers and camera auxiliary devices. The cryo-compressor (blue) is mounted below the rotating floor, while the M2 wobbling system (light grey) is mounted at the IRAIT top ring.

The detector control electronics (ACQ) has been developed cooperatively by INAF departments (Padova, Teramo, Torino) and the manufacturer Skytech Srl [7]. It allows a pixel exposure time down to 0.7 sec (16-bit ADC, up to 8 Mpx s^-1) with a minimum frame time of 2.9 msec for the MIR channel. It is composed of a digital programmable sequencer (PMC) installed in the cPCI Local Control Unit (LCU), optically connected through a 1.2 Gbaud fiber link with a separate rack.
This rack hosts three further subunits: the SPC board provides the detector clocks, while two DCS boards are used for bias generation and for the correlated multisampling of the video signals. Since mid-infrared observations require fast chopping techniques using the wobbling secondary mirror (M2) of IRAIT (with a frequency of 2-10 Hz and a frame rate up to 300 fps for the AMICA+IRAIT configuration), the co-adding and the sky subtraction of raw frames are performed in real time during exposures, directly triggered through TTL lines by the PMC or (for redundancy) through a TCP/IP command interface. The IRAIT M2 subsystem has been manufactured by the Spanish company NTE SA, which has also built a driver for the motion of the tertiary mirror (M3), in order to alternately feed both Nasmyth foci of the telescope. They have been tested to operate successfully down to -80°C, with the M2 driver able to perform fast and accurate pointing during chopping (up to 10 Hz) and off-axis imaging. The M2 driver, in particular, has been fully integrated with the camera control system, in order to evaluate the overall accuracy and repeatability of the instrument, the settling time, and the correct execution of automatic operations (e.g. observing mode configuration, focusing, etc.) during realistic simulations.

A multi-level thermal control ensures the continuous monitoring and conditioning of the instrumentation placed inside the boxes. In fact, besides the Environmental Control System (ECS, a high-level software application), a Programmable Logic Controller (PLC) is devoted to the management of a large number of analog and digital devices (resistors, temperature and humidity probes, fans, and contactors) to provide low-level active control of the conditions inside the cabinets. It can operate both autonomously (during periods of inactivity in which all other devices could be turned off) and cooperatively with the software running on the LCU. Moreover, the PLC is in charge of the boot and shutdown of each component of the system (following well-defined safety procedures). Finally, low-level passive electronics have been distributed inside the boxes to keep the minimum safe temperature in case of failure of all active systems. As discussed above, because of the high air rarefaction, not only the low temperatures but also the low heat dissipation from electrical elements could seriously damage the instrumentation. For this reason, two cooling systems have been installed, composed of pipes passing through the boxes, in which forced ventilation (periodically alternated in direction) allows the internal environment to indirectly exchange the excess heat with the external one, avoiding thermal shocks (which could be induced by direct exposure to the outside air) and the entrance of highly dangerous ice crystals.
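At its core, the PLC's low-level conditioning described above amounts to threshold-based switching of heaters and fans. The sketch below illustrates the idea with a bang-bang controller with hysteresis in Python; the setpoint and dead-band values are illustrative assumptions of ours, not AMICA's actual parameters.

SETPOINT_C = 20.0    # target cabinet temperature (assumed for illustration)
HYSTERESIS_C = 2.0   # dead band to avoid rapid relay switching

def control_step(temp_c: float, heater_on: bool) -> bool:
    """One control cycle of a bang-bang thermostat with hysteresis."""
    if temp_c < SETPOINT_C - HYSTERESIS_C:
        return True          # too cold: switch heating on
    if temp_c > SETPOINT_C + HYSTERESIS_C:
        return False         # too warm: switch heating off, let the cooling pipes work
    return heater_on         # inside the dead band: keep the previous state

# Example: a cabinet whose temperature drifts down and back up
state = False
for t in [21.5, 19.0, 17.5, 18.5, 22.5]:
    state = control_step(t, state)
    print(f"T={t:4.1f} degC -> heater {'ON' if state else 'OFF'}")

The dead band is the essential ingredient: without it, a temperature hovering near the setpoint would make the relays chatter, which is exactly what a PLC driving contactors must avoid.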
Software Layout
The AMICA Control Software (ACSW) is an agent-based cooperative system, modeled under the principles of object-oriented programming (C++, Java) using the graphical notation of the Unified Modeling Language (UML).

Architecture
The observatory operation is managed by the IRAIT OCS (Observatory Control System), which hosts the observation scheduler and the weather control, retrieving information about the activity of all subsystems (e.g. busy status during acquisition, intensive backup, maintenance operations, etc.) as well as alarms due to dangerous events (low temperature inside the boxes, overheating, malfunctions, etc.). The ACSW architecture (Fig. 4) descends directly from the underlying modular hardware, with the aim of reducing the complexity level and better identifying all tasks that have to be assigned to each single- or multi-thread process. This distribution has led to the development of a multi-process system that reduces the possibility of critical failures that could compromise the operation of all software modules (ACS, ECS, DCS, SCS and AAA, described below). Each subsystem is then controlled by one or more processes, which communicate through TCP/IP sockets, pipes, or files. The entry point for the telescope scheduler is the Activity Control System (ACS), a server application that retrieves through a TCP/IP connection the parameters required to perform scheduled observations. It creates and dispatches macro-commands to the cooperative agents, managing and monitoring their activities and restarting them in case of unexpected hangs. Both telemetry and scientific data are periodically stored on a remote server inside the Concordia base, hosting a relational (MySQL) database and the image archive. All data are then preserved until they are shipped to the European partners. Real-time active control of the environment inside the insulated cabinets and the maintenance of the correct cryogenic conditions inside the cryostat are provided by the ECS. Since it operates cooperatively with the PLC (driving its activity), it can monitor the thermal, electrical, and operational status of all the components of the AMICA hardware equipment. Moreover, camera auxiliary devices such as heaters (for detector thermal stability), temperature probes, cryo-motors, the vacuum system, and the cryocooler are indirectly managed by the ECS through their controllers (each of them connected to the LAN by means of Serial/Ethernet MOXA converters). During operation, the ECS collects all telemetry data and notifies the ACS of the status of the system, giving the green light to the observations or asking it to wait until suitable operating conditions are reached. Each dangerous event is therefore signaled to the ACS and, in case of high safety risks, the ECS automatically takes control of the system. If it is not possible to recover its status, all unnecessary operations are stopped, the corresponding devices are turned off, and environmental control is handed over to the PLC. The configuration of the observing modes is operated by the Chopper Control System (CCS), which communicates with the IRAIT M2 subsystem through the LAN. It controls and verifies the correct position of M2 during acquisition (correcting for possible offsets) and supports the automated focusing procedures. It also monitors the subsystem activity, gathering information on its thermal status and logging the ongoing operations. The acquisitions are managed by the Detector Control System (DCS). It verifies the correct setting of the parameters (bias, clocks, T_exp, N_img, etc.), interfacing the read-out electronics through a further application (SCS) dedicated to the accumulation of the incoming raw frames (storing co-added and sky-subtracted images in a real-time process and filtering bad frames acquired during the motion of M2). In addition, raw data can also be saved for an off-line analysis of the M2 driver operation. Moreover, the DCS performs the descrambling of the raw data, generating the final FITS files.
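The ACS behavior of monitoring agents and restarting them after unexpected hangs is, in essence, a process supervisor. A minimal Python sketch of that pattern follows; the agent commands are hypothetical stand-ins (the real ACSW agents are C++/Java applications), so the demo simply supervises two short-lived child processes.

import subprocess
import time

# Hypothetical stand-ins for the real agents, so the demo is self-contained.
AGENTS = {
    "ECS": ["python", "-c", "import time; time.sleep(3)"],
    "DCS": ["python", "-c", "import time; time.sleep(5)"],
}

def supervise(cycles: int = 4, poll_interval_s: float = 2.0) -> None:
    """Start every agent, then restart any that exits (or is killed after a hang)."""
    procs = {name: subprocess.Popen(cmd) for name, cmd in AGENTS.items()}
    for _ in range(cycles):
        for name, proc in procs.items():
            if proc.poll() is not None:  # returncode set -> process no longer running
                print(f"{name} exited (rc={proc.returncode}); restarting")
                procs[name] = subprocess.Popen(AGENTS[name])
        time.sleep(poll_interval_s)
    for proc in procs.values():         # clean shutdown of the demo
        proc.terminate()

supervise()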
A first pre-processing of the resulting images is obtained using the SExtractor package, which estimates the centroids, ellipticities, and FWHMs of the sources detected over the field and provides preliminary pixel statistics, thus allowing the optimization of the observing parameters and of the focus quality, and the detection of bad images. Finally, a remote Java web application (AMICA Activity Analyzer) is under development. Running on the control room workstation inside the base, it will provide statistics based on the stored telemetry data for each component of the system, ensuring full remote access to the instrumentation for the local operators.

Aspects of the AMICA Control System
Generally, different levels of robotization can be distinguished for a system, depending on its capability to perform unattended operations, to resume its status after unexpected errors, and to make use of some sort of intelligence to operate optimally for long-lasting periods (without human interaction). When all these properties coexist in a system, it is usually referred to as "fully autonomous". On the basis of these considerations, and taking into account the Antarctic conditions, a robotic observatory such as that formed by IRAIT and AMICA must necessarily be fully autonomous. Although the requirements on scheduler flexibility are less severe, thanks to the near-Pole condition and the possibility of observing throughout the year with a daily duty-cycle of 100% for wavelengths beyond 4 μm (sec. 2.2), further issues have to be addressed for an "Antarctic" system.

Reliability and Test Activity
The robustness and reliability of the AMICA instrumentation was the first point to be considered during its design. After a careful analysis of the properties of the environment and of the possible risks that could arise from such a climate, great attention has been paid to the choice of suitable devices available on the industrial automation market. In fact, the development of custom Antarctic-proof instrumentation would otherwise have had such strict requirements (more similar to those used in space engineering) that the total cost of the project would not have been sustainable, thus losing all the advantages that had motivated the project itself. For this reason, all components have been selected to ensure maximum resistance to the Antarctic conditions in terms of vibration and shock resistance, and of operating and storage ranges for temperature, pressure, and humidity. In addition, several thermal studies have been carried out on the materials and thickness of the insulating layer of the boxes, to minimize changes of their internal temperature. A thermal configuration has been considered suitable that maintains about +20°C inside the cabinets with an external temperature of -60°C (winter average temperature) and an average thermal input, given by the heat dissipated by the operating electronics, of about 750 W. Thanks to this insulation, a low-level thermal control is able to maintain, with low power consumption, the internal temperature of the boxes within the storage range of the critical components (~0-50°C). This estimate has been confirmed by performing realistic simulations of the behavior of insulated, warmed boxes in a climatic chamber built at INAF-Teramo (ANTARES - Antarctic Environment Simulator). It allows climate conditions worse than those occurring at Dome C (Tmin = -93.4°C, P ~600 mbar, RH ~5%) to be reproduced in a 200 ℓ volume.
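As a quick consistency check of the figures quoted above, a lumped steady-state heat balance (a simplifying assumption of ours, ignoring radiative losses and thermal bridges) relates the required overall thermal resistance of a box to the temperature difference and the dissipated power:

# Steady state: Q_dissipated = (T_in - T_out) / R_box  =>  R_box = dT / Q
T_in = 20.0     # degC, target internal temperature (from the text)
T_out = -60.0   # degC, winter average external temperature (from the text)
Q = 750.0       # W, heat dissipated by the operating electronics (from the text)

R_box = (T_in - T_out) / Q
print(f"Required overall thermal resistance: {R_box:.3f} K/W")  # ~0.107 K/W

Any insulation with a markedly higher overall resistance would push the internal temperature above the setpoint, which is why the active control must be able to alternate heating with the forced-ventilation cooling pipes.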
Thanks to these climatic-chamber simulations, several effects have been studied, for example the wind-chill effect induced by fan activation, the drift of bias voltages with temperature, and the failure of Buna O-ring vacuum seals below -43°C. Constraints on the total available space and on instrumentation accessibility have led to the design of a very compact, complete, and modular system, paying attention to the logistic facilities (e.g. total power consumption, mounting and dismounting procedures). Moreover, redundancy of critical components has been applied where possible, in particular for power supplies and for electrical and data connections (TTL, Ethernet, etc.), so as to continue operating even in case of damage to any system element. Despite the great attention paid to achieving a highly reliable system, experience from past Antarctic campaigns has highlighted the importance of providing each instrument with spares of sensitive components. For this reason, most of the elements constituting the AMICA equipment have been duplicated. Multiple levels of control will ensure the safety of the instrumentation. Hardware devices and software systems will cooperate to achieve and maintain suitable conditions inside the AMICA conditioned boxes. In addition, all activities will be further monitored both through the LAN and through the mutual exchange of "heartbeat" signals among the programmable devices (i.e. LCUs and PLCs) belonging to the camera and telescope subsystems. Finally, several long tests of the control software have been performed, with the aim of preventing Single Points of Failure (SPOF), thus allowing the recovery from possible malfunctions and the successful execution of all system tasks (acquisition and chopping management, image pre-processing, data storage, environment conditioning, remote monitoring, etc.).

Future Work
The development of the whole control system is at its final stage. All hardware subsystems have been tested separately, while the preliminary release of the control software allows the automatic execution of (simulated) scheduled observations. The next steps will consist of the integration of the ECS module and the verification of the reliability of the thermal control, cooling the insulated boxes down to Antarctic temperatures. Improvements in the DCS are required to complete the pre-processing pipeline, while further issues will be addressed to optimize the interaction between IRAIT and AMICA. Finally, the Activity Analyzer will be endowed with graphs and diagrams showing real-time information on the system activity and providing statistics that will be used to detect deviations from the expected behavior and to prevent malfunctions.

Conclusions
The excellent atmospheric properties of Dome C for infrared astronomy allow better observing performance than any temperate observing site. Despite these exceptional advantages, several difficulties arise from its extreme environment. The AMICA project takes up the great challenge of developing highly reliable Antarctic instrumentation. For this reason, suitable and innovative solutions provide the necessary conditions for its robotization. After the conclusion of the integration stage and further test activities, the camera and its equipment will be shipped to Dome C for the completion of the fully autonomous observatory.
DiffPROTACs is a deep learning-based generator for proteolysis targeting chimeras

Abstract
PROteolysis TArgeting Chimeras (PROTACs) have recently emerged as a promising technology. However, the design of rational PROTACs, especially the linker component, remains challenging due to the absence of structure–activity relationships and experimental data. Leveraging the structural characteristics of PROTACs, fragment-based drug design (FBDD) provides a feasible approach for PROTAC research. Concurrently, artificial intelligence–generated content has attracted considerable attention, with diffusion models and Transformers emerging as indispensable tools in this field. In response, we present a new diffusion model, DiffPROTACs, harnessing the power of Transformers to learn and generate new PROTAC linkers based on given ligands. To introduce the essential inductive biases required for molecular generation, we propose the O(3) equivariant graph Transformer module, which augments Transformers with graph neural networks (GNNs), using Transformers to update nodes and GNNs to update the coordinates of PROTAC atoms. DiffPROTACs effectively competes with existing models and achieves comparable performance on two traditional FBDD datasets, ZINC and GEOM. To differentiate the molecular characteristics between PROTACs and traditional small molecules, we fine-tuned the model on our self-built PROTACs dataset, achieving a 93.86% validity rate for generated PROTACs. Additionally, we provide a generated PROTAC database for further research, which can be accessed at https://bailab.siais.shanghaitech.edu.cn/service/DiffPROTACs-generated.tgz. The corresponding code is available at https://github.com/Fenglei104/DiffPROTACs and the server is at https://bailab.siais.shanghaitech.edu.cn/services/diffprotacs.

Introduction
The technology of PROteolysis TArgeting Chimeras (PROTACs) has gained popularity since its first proposal and demonstration by Crews in 2001 [1]. PROTACs are molecules consisting of three components: a ligand for a protein of interest (POI), a linker, and a ligand for recruiting an E3 ubiquitin ligase. By bringing the ubiquitination machinery closer to the POI, a PROTAC promotes the formation of a complex (POI-PROTAC-E3) and drives the transfer of ubiquitin from the E2 enzyme to an exposed lysine on the surface of the target protein. This leads to polyubiquitination and degradation of the POI into small peptide fragments or amino acids after recognition by the 26S proteasome. After completion of the process, the PROTAC is recycled to reach another POI, revealing the catalytic properties of PROTACs.
Compared to traditional drug molecules, PROTACs have a number of superior properties. First, PROTACs are capable of modulating undruggable targets without well-defined binding pockets, albeit with comparatively modest or even weak affinities [2,3,4]. Second, the catalytic properties of PROTACs allow them to function even when their concentration in the cellular environment is low, mitigating potential adverse effects associated with high drug concentrations. Moreover, PROTACs can distinguish highly conserved homologous proteins that have different conformations outside the catalytic core, as the ubiquitin transfer step depends on the relative position of the exposed lysine and ubiquitin [5]. This depends on the conformation of the ternary complex, a parameter that is significantly affected by the linker of the PROTAC. The diversity of conformations makes the task of designing a universally applicable linker almost insurmountable and thus presents a major challenge for PROTAC design [6].

Since PROTACs were proposed, there has been a great effort to move them from academia to industry. The first crystal structure of a ternary complex, BRD4-MZ1-VHL (PDB code: 5T35), was released in 2017 [7]. In 2020, clinical testing of two PROTAC molecules (ARV-110 and ARV-471) provided the first proof-of-concept for the modality against two well-established cancer targets: the androgen receptor and the estrogen receptor. By the end of 2021, approximately 15 PROTACs had successively entered clinical trials [8,9]. However, on account of the lack of experimental crystal structures and the vagueness of the structure-activity relationship, the discovery of PROTACs, and especially of the linker, still depends mainly on the expertise of chemists and on experimental validation technologies such as Western blot and cell-based assays.

In addition to methods based on human design, several computational methods have emerged in recent years to complement them. Molecular dynamics (MD) and docking are two common approaches of traditional computer-aided drug design. MD is an approach for exploring molecular dynamic behaviors in conformational space. Several MD methods [10,11,12] have been developed to simulate ternary structures and gain insight into the mechanisms of PROTACs, facilitating the rational design of novel PROTACs. MD methods have shown some predictive power, but the process is always time consuming. Our group is also very interested in elucidating the reasons for the large changes in degradation efficacy caused by small differences between molecules and has performed studies in different molecular systems, e.g. BTK-PROTACs-CRBN and BCR-ABL-PROTACs-CRBN, using all-atom MD simulation strategies [13,14].
Mai et al. [15] employed coarse-grained MD and alchemical free energy calculation methods to explore PROTAC cooperativity, but at the cost of precision. For docking or scoring-function-based methods, which typically predict the binding of a ligand and a target to form a stable complex, there are currently no established standard protocols for constructing reliable ternary complexes. Nonetheless, various efforts have been made to address this challenge [16,17,18]. PRosettaC [19], a typical protocol based on a scoring function, integrates global docking, local docking, conformational sampling, and clustering to model the 3D structure of the POI-PROTAC-E3 complex. The selection of the most promising PROTAC and its associated structure is determined by the ranking provided by the scoring function. Although these methods prove effective in certain cases, the pursuit of a universally applicable modeling approach is an extensive endeavor that requires a high level of commitment.

In recent years, advances in artificial intelligence technology and the accumulation of PROTAC data, particularly with the release of PROTAC-DB [20], have ushered in a surge of PROTAC research leveraging deep learning methodologies. DeepPROTACs [21] is a deep learning-based model for predicting the degradation activity of PROTACs and provides a way to design or screen PROTACs. Zhang et al. [22] and Nori et al. [23] employed reinforcement learning to generate PROTACs with desired properties and obtained good results in their study cases. However, it is important to note that these models operate primarily on 2D representations of PROTACs (only the atom element types and bond types are predicted), while the functionality of PROTACs is predominantly contingent upon their 3D structures. This is especially true for PROTACs, because the 3D structure determines the stability of the ternary complex, which is the precondition for initiating the degradation process. All of these methods have contributed valuable tools for PROTAC design and offer promising research directions. Nevertheless, there is still a substantial journey ahead in the pursuit of rational PROTAC design.

Most of the PROTACs reported to date have been developed from established and potent small-molecule ligands that bind to known targets. These ligands are usually selected based on the availability of cocrystal structures that can be used to define a suitable initial vector for linker incorporation. Consequently, FBDD methods, which start with fragments (small molecular compounds) and interconnect them to form a ligand, offer an alternative route for PROTAC research, particularly in the domain of PROTAC generation. The two ligands of a PROTAC can be viewed as individual fragments, with the linker serving to connect these two ligands and thereby form a complete PROTAC molecule. Several deep learning-based FBDD methods for linker generation exist, such as DeLinker [24], 3DLinker [25], and DiffLinker [26]. The success of these methods in FBDD indicates the potential applicability of similar approaches in PROTAC research, and the insights gained from FBDD can be transferred to inform and guide PROTAC generation.
In addition, artificial intelligence-generated content has garnered significant attention and has become an important topic. Diffusion models [27], which have emerged as an innovative generative framework, use a noising and a denoising process for training and generating new content. The former gradually adds noise to the data step by step (i.e. diffusion), while the latter attempts to gradually recover the data (i.e. denoising). Usually, diffusion models employ a neural network to predict the added noise and remove it for denoising. During generation, random normal noise is denoised step by step to generate new data. In addition to the vision [28], audio [29], and natural language [30] fields, diffusion models are also used in molecular and material generation [31]. GeoDiff [32] and EDM [33] applied diffusion models to generate molecules while accounting for the equivariance properties of molecules. Based on EDM, DiffLinker [26] and DiffSBDD [34] further added the condition of fragments or target protein pockets to improve fragment-based and structure-based drug design, respectively. Currently, most diffusion models for molecule generation use graph neural networks (GNNs) to learn and predict the noise, as molecules are naturally represented as graphs. In this representation, the atoms and bonds of molecules correspond to the nodes and edges within the graph. Recently, Transformers [35], which play a vital role in recently reported large language models and generative tasks, have also been employed to analyze graph-structured data, yielding promising results [36]. The application of Transformer models to the field of molecular generation could be a way to advance this field. However, Transformers were originally developed for sequential data and lack inductive biases associated with graphs, such as rotation equivariance, that are required to deal with 3D structures. Equiformer [37] addresses this issue by incorporating equivariant graph attention and other operations, achieving satisfactory results. Nevertheless, this adaptation fundamentally alters the basic structure of Transformers, so that some of the conventional techniques and tricks used with Transformers may no longer be applicable. Our goal is to leverage the strengths of both Transformers and GNNs while minimizing the changes to the traditional Transformer architecture.

Therefore, we propose a diffusion model, DiffPROTACs, to generate new PROTACs, and the O(3) equivariant graph Transformer (OEGT) module to learn and predict the noise in the model. OEGT uses Transformers to extract node and edge features from a molecular graph, and then employs GNNs to update the graph's coordinates. The GNN module ensures that the coordinates undergo the same transformations after operations such as molecule rotation or reflection, namely O(3) equivariance. In contrast, Transformers operate on features that inherently contain no spatial information. We trained and tested DiffPROTACs on two traditional FBDD datasets, ZINC and GEOM, and found that DiffPROTACs competes closely with existing models. Furthermore, the incorporation of Transformers into the model extends the potential for large models, opening the door to leveraging the latest advancements and techniques related to Transformers to enhance model performance.
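To make the noising idea concrete, the following minimal NumPy sketch implements a toy forward (diffusion) step and the training target it produces; the linear alpha schedule is a simplifying assumption of ours, not the EDM schedule actually used by DiffPROTACs.

import numpy as np

T = 500
# Toy variance-preserving schedule: alpha decreases from ~1 to ~0
# (assumption; DiffPROTACs follows the EDM noise schedule instead).
alphas = np.linspace(0.999, 0.001, T)

def diffuse(z0: np.ndarray, t: int, rng: np.random.Generator):
    """Forward process: z_t = alpha_t * z0 + sigma_t * eps, eps ~ N(0, I).
    The network is trained to predict eps from (z_t, t)."""
    a = alphas[t]
    sigma = np.sqrt(1.0 - a**2)
    eps = rng.standard_normal(z0.shape)
    return a * z0 + sigma * eps, eps

rng = np.random.default_rng(0)
z0 = rng.standard_normal((12, 3))     # e.g. coordinates of 12 linker atoms
z_T, _ = diffuse(z0, T - 1, rng)
print(np.std(z_T))                    # ~1: z_T is close to pure Gaussian noise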
Interestingly, this work observed significant differences in the distribution of PROTACs compared to the traditional small-molecule datasets. To address this, we fine-tuned our model using a PROTACs dataset, resulting in a validity score of 93.86% for the generated PROTACs. As a culmination of our work, we present a database of generated PROTACs in this paper to facilitate further research in this area.

Data
In our experiments, we used three different datasets: ZINC, GEOM, and PROTACs. ZINC and GEOM are derived from DiffLinker. ZINC contains 438 610 training samples, 400 validation samples, and 400 test samples, while GEOM contains 282 602 training samples, 1250 validation samples, and 1290 test samples. It is important to note that the ZINC and GEOM datasets were computationally generated, decomposed from the ZINC20 [38] and GEOM [39] databases, respectively. The ZINC dataset consists of single linkers and two fragments, while the GEOM dataset typically features at least three fragments. It is also worth noting that ZINC lacks the element P, while the molecules in the other two datasets contain this element.

Weng et al. recently published the dataset PROTAC-DB 2.0 [40], which contains basic information on 3270 PROTACs. In our data collection efforts, we gathered the Simplified Molecular-Input Line-Entry System (SMILES) representations of the E3 ligands, the warheads (ligands that bind to the targets), and the linkers of each PROTAC from the corresponding pages on the PROTAC-DB website. This process resulted in a final dataset of 365 warheads, 82 E3 ligands, 1501 linkers, and 3270 PROTACs.

We analyzed the frequently occurring functional groups of the linkers in PROTAC-DB (3257 PROTACs with linkers out of 3270 PROTACs). The detailed findings are presented in Table 1. Notably, a single linker may encompass multiple functional groups, potentially resulting in some overlap in the data. In addition, we conducted an extensive analysis of several physicochemical properties of the linkers in PROTAC-DB, including molecular weight, AlogP, number of rings, number of rotatable bonds, and the number of hydrogen bond acceptors and donors, as shown in Fig. 1. Our findings indicate that the molecular weights of the linkers predominantly center around 200 Da. Furthermore, most linkers lack rings, and the number of rotatable bonds is typically between 7 and 10. Besides, we calculated the distance between the anchors that connect the linker, which is particularly relevant to protein-protein interactions, as shown in Fig. 1. The anchor distance for most linkers is around 7-10 Å. Our results indicate that existing PROTAC linkers often contain amides or polyethylene glycol (PEG), with more than one-third of these molecules incorporating each of these functional groups. Amides offer several advantages, including good biocompatibility, as peptide bonds naturally occur in organisms, and ease of synthesis. PEGs are relatively flexible and have good solubility as well as chemical stability. Furthermore, amides contain both hydrogen bond donors and acceptors, whereas the ether oxygen in PEG can only act as a hydrogen bond acceptor. Consequently, PROTAC molecules tend to have more hydrogen bond acceptors. These factors should be considered when designing linkers. We hope this analysis provides researchers with a better understanding of the parameters to consider in the design of PROTAC linkers.
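A functional-group census of this kind can be reproduced with RDKit substructure matching. The sketch below counts amide and PEG-like ether motifs in a linker SMILES; the SMARTS patterns are illustrative choices of ours, not necessarily the exact definitions used to build Table 1.

from rdkit import Chem

# Illustrative SMARTS for two of the motifs discussed above (assumed definitions).
PATTERNS = {
    "amide": Chem.MolFromSmarts("[CX3](=O)[NX3]"),
    "peg_ether": Chem.MolFromSmarts("[CX4][OX2][CX4]"),
}

def count_groups(smiles: str) -> dict:
    """Count occurrences of each motif in one linker SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return {}
    return {name: len(mol.GetSubstructMatches(p)) for name, p in PATTERNS.items()}

# Toy PEG-amide linker
print(count_groups("NCCOCCOCCC(=O)NC"))  # e.g. {'amide': 1, 'peg_ether': 2}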
However, it is important to note that these PROTACs exist only in 2D format, lacking experimental 3D structures. To address this limitation, we generated the 3D structures of the PROTACs computationally. For simplicity, and to strike a balance between time and precision, we employed a random structure generation approach using LigPrep in Schrödinger [41]. It is worth acknowledging that this method does not consider protein-protein or protein-ligand interactions and relies only on local minima, potentially introducing some bias.

Another critical challenge we encountered was how to divide the PROTACs into appropriate ligands and linkers. Since PROTAC-DB provides the division, the problem is akin to subgraph matching. To solve it, we employed the subgraph isomorphism [42] module in NetworkX [43]. The linkers and PROTACs were initially converted into graph objects and then iteratively matched with each other. This iterative process resulted in mapping identities. However, due to various issues, such as patterns not found in PROTACs, cases with no reasonable structures, or PROTACs without linkers, we obtained a final set of 2813 samples for further analysis. The samples were randomly divided into training, validation, and test sets at a ratio of 2013:400:400.

We conducted a statistical analysis of the atom number distributions within the ZINC, GEOM, and PROTACs datasets. The result is shown in Fig. 3. In general, the ZINC and GEOM datasets are more similar to each other, while the PROTACs dataset differs significantly. One of the main differences is the average total number of atoms in PROTACs, which is higher than that of ZINC and GEOM. This difference is primarily attributed to the larger number of fragments in PROTACs. As for the distribution of the number of linker atoms, the PROTACs and GEOM datasets show greater variability. Moreover, the modes of these two datasets are larger than that observed in ZINC. These different distributions across the three datasets emphasize the importance of considering their differences during the learning and generation processes.

Diffusion models
Diffusion models are a kind of framework with a diffusion and a denoising process to generate new data. The diffusion process gradually adds noise in T steps to a data point z_0 = [r_i, h_i], i from 1 to n_linker, where n_linker is the number of linker atoms, r_i ∈ R^3 is the coordinate vector of linker atom i, and h_i ∈ R^nf represents the features (atom type) of the i-th linker atom. In each time step we get a noised data point z_t, t = 1, 2, ..., T. The size of z_t is identical to the input size z_0. At time step T, we get approximately normally distributed noise z_T ~ N(z_T; 0, I). Mathematically, the process can be represented as

q(z_t | z_s) = N(z_t; α_{t|s} z_s, σ_{t|s}^2 I), (1)

where q is a probability density, N is a normal distribution, and z_s is the previous state of z_t, i.e. s = t - 1. By the reparameterization trick, we can get

z_t = α_t z_0 + σ_t ε_t, ε_t ~ N(0, I). (2)

According to DDPM [44] and EDM, α_t is obtained from the noise schedule in EDM. The process is restricted to a Markov process, which can be written as

q(z_{1:T} | z_0) = ∏_{t=1}^{T} q(z_t | z_{t-1}). (3)

The denoising process is the reverse process, i.e. it obtains the noise in each step and removes it. The process can be derived as

q(z_s | z_t, z_0) = N(z_s; μ_{t→s}(z_t, z_0), σ_{t→s}^2 I), (4)

where ε_t is the noise added from z_0 to z_t, i.e. z_t = α_t z_0 + σ_t ε_t, by the reparameterization trick for equation (2). We use the simplified objective function from DDPM,

L = E_{t, z_0, ε_t} [ || ε_t - φ_θ(z_t, t) ||^2 ], (5)

to learn ε_t, and finally obtain the estimated denoising process

p_θ(z_s | z_t) = N(z_s; μ_θ(z_t, t), σ_{t→s}^2 I),

where

μ_θ(z_t, t) = (1 / α_{t|s}) ( z_t - (σ_{t|s}^2 / σ_t) φ_θ(z_t, t) ),

where φ_θ is a nonlinear function; here we use the OEGT.
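The generation loop implied by the denoising equations can be sketched as follows, reusing the toy schedule from the earlier snippet. The sampler below re-noises the current estimate of z_0 to the previous step, a simplified ancestral-style scheme (DDPM's exact posterior differs slightly), and the noise predictor is a placeholder for the trained OEGT.

import numpy as np

T = 500
alphas = np.linspace(0.999, 0.001, T)   # same toy schedule as the earlier snippet

def predict_noise(z_t, t):
    """Placeholder for the trained network phi_theta (the OEGT in DiffPROTACs)."""
    return np.zeros_like(z_t)

def sample(shape, rng):
    z = rng.standard_normal(shape)                     # z_T ~ N(0, I)
    for t in range(T - 1, 0, -1):
        a_t, a_s = alphas[t], alphas[t - 1]
        eps_hat = predict_noise(z, t)
        z0_hat = (z - np.sqrt(1 - a_t**2) * eps_hat) / a_t   # invert eq. (2)
        # Re-noise the current z0 estimate down to the previous step s = t - 1
        z = a_s * z0_hat + np.sqrt(1 - a_s**2) * rng.standard_normal(shape)
    return z

coords = sample((12, 3), np.random.default_rng(0))     # e.g. 12 linker atoms in 3D
print(coords.shape)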
Considering FBDD-style generation for PROTACs, the condition u is introduced as the 'fragments' or 'ligands', which contains the coordinates r_i and features h_i of each ligand atom, represented identically to z_0. Thus, the denoising process and the loss function turn to

p_θ(z_s | z_t, u) = N(z_s; μ_θ(z_t, u, t), σ_{t→s}^2 I), (6)

L = E_{t, z_0, ε_t} [ || ε_t - φ_θ(z_t, u, t) ||^2 ], (7)

and the initial noise z_T ~ N(z_T; 0, I) turns to z_T ~ N(z_T; f(u), I). Here we set f(u) as the center of the condition and move it to zero.

In conclusion, for training, given a training sample z_0, we first move the center of u to zero and sample the time step t and the noise ε_t. After obtaining the noised sample z_t (equation (2)) at time step t, with the context u, we employ the OEGT to learn the noise ε_t (equation (7)). After training, we get the learned neural network φ_θ, which can estimate the noise ε_t. Then, for generation, given the context u and the linker size, we first sample a random linker and move the center of u to zero. Starting from time step T and sample z_T, we 'denoise' iteratively over T steps, i.e. obtain z_s for each z_t (equation (6)), and finally get a generated sample z_0.

O(3) equivariant graph Transformer
If a function φ satisfies ρ'(g) φ(x) = φ(ρ(g) x) for all g ∈ G, where ρ(g) and ρ'(g) are two representations of the group element g in group G, the function φ is equivariant to G. For simplicity, the same representations are used for the group, and the O(3) group is used for equivariance in all layers of the neural network, which means φ is equivariant to a rotation or reflection, i.e. R φ(x) = φ(R x). Igashov et al. [26] have proved that, for a Markovian denoising process, if f(u) in the initial state z_T ~ N(z_T; f(u), I) is O(3)-equivariant and the model φ_θ in each denoising step q(z_s | z_t, u, t) is O(3)-equivariant, then the whole generative process is O(3)-equivariant.

We use the OEGT to learn the noise, as shown in Fig. 2b. The OEGT, represented as φ_θ(z_t, u, t) in equation (7), is learnt to estimate the noise ε_t. Since the linker z_0 and the condition u have identical representations for atom coordinates r_i and atom features h_i, we combine them and use the OEGT to process them at the same time. However, only the linker atom coordinates and linker atom features are updated; those of the ligands remain unchanged. One OEGT block computes

H^{l+1} = φ_h(H^l, D), (8)

r_i^{l+1} = φ_u(r_i^l, φ_r(H^{l+1}, r^l)), (9)

where l is the layer index, φ_h is Graphormer [36], a Transformer model on graphs, H = [h_i] ∈ R^{n×nf} is the concatenated feature matrix of the atom element features h_i, D ∈ R^{n×n} is the distance matrix for all atoms in a graph, and r_i ∈ R^3 is the coordinate vector of atom i. n is the total number of atoms, including ligands and linker. h_i ∈ R^nf represents the one-hot encoding of the atom type, and nf is the number of atom features. The ZINC dataset comprises 8 elements, corresponding to the atom types 'C, O, N, F, S, Cl, Br, I'. In the GEOM dataset, however, the one-hot encoding expands to include P, resulting in a length of 9. To be more concrete, equation (8) is Graphormer, but slightly different in the attention part, which is

A = softmax( (H W_Q)(H W_K)^T / d + D ), Attn(H) = A (H W_V),

where W_Q, W_K, W_V ∈ R^{nf×n} are parameters and d = n is the scaling factor. In Transformers, Q, K, and V represent the query, key, and value, respectively, and A stands for the attention score or attention weight. The attention mechanism intuitively involves matching a query with each key and then using an operation (softmax) to enable the query to identify the best-matched value. Here, in the context of self-attention, Q, K, and V are functions of the feature itself.
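A compact PyTorch sketch of one such block is given below. It is our simplified reading of equations (8)-(9): standard multi-head attention stands in for Graphormer (the distance-bias term is omitted for brevity), and an EGNN-style weighted sum of pairwise difference vectors updates the coordinates, applied only to linker atoms.

import torch
import torch.nn as nn

class OEGTBlock(nn.Module):
    """Simplified OEGT block: attention updates invariant node features (cf. eq. 8),
    message passing over pairwise difference vectors updates coordinates (cf. eq. 9)."""
    def __init__(self, nf: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(nf, num_heads=1, batch_first=True)
        self.coord_mlp = nn.Sequential(
            nn.Linear(2 * nf + 1, nf), nn.SiLU(), nn.Linear(nf, 1))

    def forward(self, h, r, linker_mask):
        # h: (1, n, nf) features; r: (n, 3) coordinates; linker_mask: (n,) 0/1 floats
        h_new, _ = self.attn(h, h, h)                      # feature update
        x = h_new.squeeze(0)                               # (n, nf)
        n = x.shape[0]
        diff = r[:, None, :] - r[None, :, :]               # (n, n, 3) r_i - r_j
        dist = diff.norm(dim=-1, keepdim=True)             # (n, n, 1) invariant d_ij
        pair = torch.cat([x[:, None, :].expand(n, n, -1),
                          x[None, :, :].expand(n, n, -1), dist], dim=-1)
        w = self.coord_mlp(pair)                           # scalar message per pair
        delta = (diff / (dist + 1.0) * w).sum(dim=1)       # equivariant update
        r_new = r + delta * linker_mask[:, None]           # only linker atoms move
        return h_new, r_new

# Toy usage: 10 atoms with 16 features, the last 4 atoms being the linker
h = torch.randn(1, 10, 16)
r = torch.randn(10, 3)
mask = torch.zeros(10); mask[6:] = 1.0
h2, r2 = OEGTBlock(16)(h, r, mask)

For brevity the sketch updates all node features; in DiffPROTACs the ligand features are kept fixed as well, just like the ligand coordinates.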
Transformers can be considered a type of GNN. However, traditional Transformers lack O(3) equivariance due to the dot-product attention mechanism. In contrast, the message passing in GNNs inherently maintains O(3) equivariance. Therefore, in our approach, we leverage the dot-product attention of Transformers and enhance it with message passing, as in equation (9), to ensure O(3) equivariance. Equation (9) is a message passing neural network [45], a typical GNN architecture. The concrete expression is

m_ij = ((r_i^l - r_j^l) / (d_ij + 1)) φ_r(h_i^{l+1}, h_j^{l+1}, d_ij), (10)

r_i^{l+1} = r_i^l + Σ_{j≠i} m_ij, (11)

where d_ij is the distance between r_i^l and r_j^l, φ_r represents the aggregation operation (in this context, a multilayer perceptron is used), and φ_u in equation (9) denotes the update operation, for which a summation (equation (11)) is employed. It is essential to emphasize that these operations are deliberately applied only to the linker update, while the condition component, both the atom features and the atom coordinates, remains unchanged.

Under a rotation or reflection of the molecule, the distance between each atom pair is invariant. When the rotation or reflection is applied to the coordinates (R r_i^l), the output of equation (9) undergoes the same transformation (R r_i^{l+1}), which preserves O(3) equivariance. Equation (8) is not directly related to the coordinates; furthermore, thanks to the separation of the features and the coordinates, φ_h can be any function that learns the new atom features from the old features and the distance matrix. The detailed parameters can be found in Supplementary Table S2.

Metrics
Similar to many molecular generation tasks, our evaluation metrics include validity, uniqueness, and recovery. To assess the results, we sample 100 conformations for each input ligand pair in the test set and subsequently calculate the following metrics.

Validity. This metric assesses the reasonableness of the generated molecule, specifically whether it resides within chemical space. In our work, we utilize OpenBabel [46] to compute the bonds within the generated molecules, while RDKit [47] is employed to assess compliance with valency rules. Additionally, validity encompasses the absence of dissociative atoms and the presence of the specified fragments. Given that the fragments serve as conditions and remain unaltered during learning, our assessment focuses on detecting any detached atoms.

Uniqueness. To determine uniqueness, we consider the ratio of non-repeated generated molecules. Since defining whether two molecules are 'repeated' or 'the same' in 3D space can be challenging, we employ the canonical SMILES of each molecule. Canonical SMILES, generated by a specific software package, ensure uniqueness in 2D space. The equation for uniqueness is

Uniqueness = Σ_i n_unique^i / Σ_i n_valid^i,

where n_unique^i and n_valid^i are the number of unique SMILES and the number of valid SMILES of the generated molecules for input i.

Recovery. The recovery metric evaluates the ratio of matched molecules, indicating whether the generated samples contain molecules identical to the original ones. Similar to the uniqueness assessment, canonical SMILES are employed to facilitate this comparison; canonical SMILES play a crucial role in defining the identity of molecules in 2D space. The equation for recovery is

Recovery = n_match / n_input,

where n_match is the number of instances in which the generated samples contain SMILES identical to the input, and n_input is the number of inputs.
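These three metrics are straightforward to compute with RDKit's canonical SMILES; a minimal sketch follows (our own implementation, with a simplified validity check that skips the OpenBabel bond-perception step).

from rdkit import Chem

def canonical(smiles: str):
    """Return canonical SMILES, or None if the molecule fails RDKit sanitization."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

def evaluate(generated: list[str], reference: str):
    canon = [canonical(s) for s in generated]
    valid = [c for c in canon if c is not None]           # simplified validity check
    validity = len(valid) / len(generated)
    uniqueness = len(set(valid)) / len(valid) if valid else 0.0
    recovery = float(canonical(reference) in set(valid))  # matched by canonical SMILES
    return validity, uniqueness, recovery

samples = ["CCOCC", "CCOCC", "C1CC1N", "C(C("]            # the last one is invalid
print(evaluate(samples, "CCOCC"))                         # (0.75, ~0.667, 1.0)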
Framework of DiffPROTACs
As shown in Fig. 2a, the training and generation of DiffPROTACs focus on the diffusion and denoising processes, respectively. In the diffusion process, noise is added stepwise to the sample data of the molecules, specifically to the linker part. After T = 500 steps, the distribution of the linker is almost normally distributed. The noise introduced during this process provides the training signal for the network module, OEGT. Notably, the fragment or ligand part serves as contextual information and remains unchanged throughout the process. The denoising process gradually restores the linker part by means of the learned noise.

To facilitate noise learning, we introduce the OEGT module (Fig. 2b), which integrates both Transformer and GNN components in one block. This module divides the update of the molecules into two different processes: the update of the node features and the update of the coordinates. Namely, the Transformer encoder (Graphormer [36]) is used to extract the node features, and the GNN updates the coordinates of the nodes. n = 6 blocks of OEGT modules are stacked for learning.

The center of mass (CoM [32]) of the fragments is first moved to the origin. The CoM shift and the GNN message passing for the coordinates in OEGT guarantee that the whole process is equivariant to the O(3) group, i.e. to rotations and reflections. DiffPROTACs was then trained and tested on the ZINC and GEOM datasets and further fine-tuned on the PROTACs dataset.

Results of DiffPROTACs on ZINC and GEOM
DiffPROTACs was trained and tested on ZINC and GEOM to evaluate its ability to generate traditional small molecules; the results are shown in Table 2. ZINC and GEOM [26] are two molecular fragment datasets, decomposed computationally from the ZINC20 [38] and GEOM [39] databases, respectively. ZINC comprises two fragments and one linker, whereas GEOM consists of a minimum of three fragments. We found that DeLinker [24] and 3DLinker [25] were not directly compatible with our ZINC dataset (results could not be obtained for some inputs). Therefore, we used their original datasets, which are also subsets of ZINC20, for testing and obtained the results in Table 2. The validity metric demonstrates a comparable level of performance across these methods, with DeLinker and 3DLinker outperforming the other two. However, it is important to note that DeLinker can only generate 2D structures of molecules, and 3DLinker requires additional anchor (exit vector) information for molecule reconstruction. While DeLinker and 3DLinker exhibit greater uniqueness, DiffLinker and DiffPROTACs surpass them in the recovery metric. This indicates that the latter two methods excel in the ability to recover most molecules in our test set from the original fragment information. For the study on the GEOM dataset, DeLinker and 3DLinker can only link two fragments with one generated linker; therefore, only DiffLinker and DiffPROTACs were tested on GEOM. Of these, DiffPROTACs exhibits superior performance in terms of validity and uniqueness, albeit with a slightly lower recovery rate than DiffLinker. These results suggest that DiffPROTACs is indeed a strong contender in the current landscape of methods.
Results of DiffPROTACs on PROTACs
We created the PROTACs dataset using PROTAC-DB 2.0 [40] for testing purposes; however, the size of this dataset is relatively small compared to the other datasets. As a result, we leveraged models pretrained on the other datasets to overcome this limitation. Notably, the ZINC dataset lacks the phosphorus (P) element, whereas GEOM includes it. Observations indicate that the linker distribution in the GEOM dataset closely resembles that in the PROTACs dataset (Fig. 3), suggesting that models pretrained on GEOM are more suitable for PROTAC generation. It is also worth noting that 3DLinker has an atom limit of 48 for fragments, while most PROTACs have ligands with atom numbers that exceed this limit. Therefore, we tested only DiffLinker and DiffPROTACs on the generation task. The results are shown in Table 3. The probability of encountering failed molecules increases with the number of PROTAC atoms, evident from the increasing proportion of invalid molecules in the distribution (see Supplementary Fig. S1). The two models perform effectively on the ZINC and GEOM datasets but exhibit limitations in the context of PROTACs, especially when the total atom number exceeds 50 (Supplementary Fig. S1). This could be a result of almost all ZINC and GEOM data falling below this threshold (Fig. 3). Moreover, PROTACs exhibit distinct characteristics compared to the traditional small molecules in the ZINC and GEOM datasets. We believe that this divergence in characteristics and distribution among the data can be one major reason. We then fine-tuned our model on the PROTACs dataset, resulting in significant performance improvements, as demonstrated in Fig. 4. We conducted a statistical analysis of the generated results for PROTACs, a unique class of molecules that often do not conform to the rule of five. We compared these results with the original training samples, and Fig. 5 clearly illustrates the close relationship between the two distributions. These results underscore the remarkable similarity between the properties of the molecules generated by DiffPROTACs and the real PROTACs (ground truth), highlighting the significant potential of DiffPROTACs.

Case study: BRD4-PROTAC-VHL (PDB codes: 8BEB, 8BDT)
Krieger et al. explored the implications of different VHL binders and exit vectors (anchors) on BRD4 degraders [48]. They provided two recently released structures, 8BEB and 8BDT, in the PDB. It is important to note that our training dataset comprises simulated data, whereas the case studies involve experimental data; even their 2D structures are not present in our training data. These structures share the same VHL ligand and target warhead but differ in their linker patterns. We set the linker length for both and obtained the generation results shown in Fig. 6. Each structure of the results is displayed in Supplementary Figs S2 and S3. The spatial structure of the generated linkers closely resembles the crystal structures (Fig. 6a and c). DiffPROTACs successfully recovered the PROTACs (with the same molecular formula) in the two test PDB entries (PDB codes: 8BEB and 8BDT), and reproduced the conformations in their complex structures with the E3s and targets with RMSD values of 0.25 Å and 0.53 Å for 8BEB and 8BDT, respectively
(Fig. 6b and d). To assess the conformational rationality of the generated linkers, we generated the potential energy landscapes for the linkers of the PROTACs in 8BDT and 8BEB by systematic conformational search simulations (as shown in Supplementary Fig. S4). Our generated conformations and the experimentally determined conformations of these linkers were also projected onto the energy landscape. Both the generated and the experimentally determined linker conformations lie in relatively low-energy states with similar values, indicating that DiffPROTACs is capable of generating native-like conformations of PROTAC linkers. Moreover, for 8BDT, when provided with the crystal structure of the ligand and warhead, DiffPROTACs accurately identified the correct exit vector within 8BDT, as opposed to another exit vector mentioned in the original paper that lacks a PDB structure, presumably due to its relatively weak binding affinity to VHL. These results highlight that DiffPROTACs can effectively predict the approximate structure and linker composition when given accurate structural information for the ligands of PROTACs.

Generated database of DiffPROTACs
DiffPROTACs requires knowledge of the linker size, which is often unknown in practical scenarios. To address this challenge, we generated a range of linkers with varying sizes. Subsequently, we created an extensive dataset comprising the entire set of PROTACs (training, validation, and test) with linker lengths ranging from 5 to 28. This resulted in a dataset containing 2 601 818 PROTACs, organized by PROTAC-DB ID, with invalid entries removed. We computed the physicochemical or drug-like properties of the generated linkers, including molecular weight, AlogP, number of rings, number of rotatable bonds, and the number of hydrogen bond acceptors and donors, as shown in Fig. 7. In addition, we measured the distance between the anchor points for linker evaluation purposes. Concerning diversity, we evaluated the diversity of the linkers generated by DiffPROTACs by calculating their pairwise Tanimoto similarity, as shown in Fig. 8. Specifically, from a total of 2 601 818 linkers, after deduplication we obtained 1 724 424 unique linkers. From these unique linkers, we randomly sampled 10 000 linkers three times and calculated the Tanimoto coefficients of their atom-pair molecular fingerprints using the fingerprint similarity panel in Maestro (Schrödinger Inc.). As illustrated in Fig. 8, the extensive dark blue regions indicate low similarity between the linkers, demonstrating that DiffPROTACs can generate highly diverse linkers. Through this analysis, we aim to provide a general outline of this database, offering insights for future researchers to utilize it better. This dataset, enriched with patterns learned by DiffPROTACs, is a valuable addition to the existing PROTAC dataset, which is limited due to the resource-intensive and time-consuming nature of experimental work. It can also be used as a screening library to facilitate and advance research in this field. The generated database can be downloaded at https://bailab.siais.shanghaitech.edu.cn/service/DiffPROTACs-generated.tgz.
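Such a pairwise-similarity analysis can be approximated with open-source tooling as follows. The sketch uses RDKit Morgan fingerprints as a stand-in for the atom-pair fingerprints computed in Maestro, with a handful of toy linker SMILES (not entries of the released database).

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Toy linker SMILES (illustrative only).
smiles = ["CCOCCOCC", "NCCOCCC(=O)N", "c1ccccc1CCN"]
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]

# Pairwise Tanimoto similarities (the quantity plotted in the Fig. 8 heatmaps).
for i in range(len(fps)):
    for j in range(i + 1, len(fps)):
        sim = DataStructs.TanimotoSimilarity(fps[i], fps[j])
        print(f"{smiles[i]:>14} vs {smiles[j]:<14} Tanimoto = {sim:.2f}")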
Discussion
We present a novel diffusion model called DiffPROTACs, which leverages Transformers and GNNs to learn the noise and generate new PROTAC linkers based on provided ligands. To incorporate the inductive bias of O(3) equivariance in molecular generation, we introduce the OEGT module, combining Transformers with GNNs. In this architecture, Transformers update the nodes, and GNNs update the coordinates of the PROTAC atoms. DiffPROTACs competes effectively with existing models in the field of FBDD, demonstrating comparable performance in generating traditional small molecules when learning from the ZINC and GEOM datasets. To address the distinctions of the PROTACs dataset compared to the existing datasets, we fine-tuned the model on PROTACs data and achieved a remarkable validity rate of 93.86% for the generated PROTACs. We also provide a database of generated PROTACs for further research and investigation.

DiffPROTACs uses only the Euclidean distance as the edge feature in the Transformer component of OEGT, unlike the original Graphormer, which includes additional features such as shortest-path and centrality encodings for the node features. In addition, Graphormer-GD [49] adds the resistance distance to its model. These extra features serve as priors for the neural network and can potentially improve performance. Despite not having these additional priors, DiffPROTACs demonstrates competitive performance, running neck and neck with other methods. This highlights the substantial potential of DiffPROTACs.

The patterns of PROTACs differ from those in the current FBDD datasets. This divergence should be considered in future research. Utilizing more appropriate datasets for pre-training could potentially lead to better results. The case study highlights the importance of the structure of the ligand. The precise positioning of the ligands in binding to the target and the E3 ligase appears to be a critical factor. Therefore, the development and application of an appropriate docking method to accurately predict these interactions is warranted. In conclusion, our DiffPROTACs model introduces an innovative approach that combines Transformers and GNNs, offering a valuable tool for the generation of PROTACs. This model has the potential to advance research in the field and may contribute to accelerating the discovery and development of PROTACs.

Figure 1. Distributions of the physicochemical and drug-like properties of the linkers in PROTAC-DB. From left to right and top to bottom, the properties include anchor distance, molecular weight, AlogP, number of rotatable bonds, number of rings, and the number of hydrogen bond acceptors and donors, respectively.

Figure 2. Overview of DiffPROTACs and OEGT. (a) The framework of DiffPROTACs, which is generally a diffusion model containing the diffusion and denoising processes. In the two processes, the molecules are noised and denoised step by step. (b) The architecture of OEGT, which combines Transformers and graph neural networks (GNNs) for noise learning, using the former to update node features and the latter to update molecular coordinates.

Figure 3. Distributions of the ZINC, GEOM, and PROTACs datasets. The figure displays the distributions of total atom number, fragment atom number, and linker atom number of the three datasets, from left to right.
Figure 4. Distribution of the generated results from DiffLinker and DiffPROTACs-finetuning. The top three panels display the distribution of molecules generated by DiffLinker, while the bottom three display that of DiffPROTACs-finetuning. Each panel presents the relationship between the number of atoms and the number of generated molecules, including the total count and the non-valid molecules. The three columns, from left to right, represent the distributions of PROTAC atom number, fragment atom number, and linker atom number, respectively.

Figure 5. Distributions of rule-of-five properties of the test data and the generated PROTAC data. The figure shows the distributions of molecular weight, AlogP, hydrogen bond acceptors, hydrogen bond donors, and rotatable bonds for true PROTACs in the training samples and for PROTACs generated for the test input ligands. The distributions exhibit a high degree of overlap, highlighting the potential of PROTACs generated by DiffPROTACs to closely resemble true PROTACs.

Figure 6. Case studies. The PROTACs and linkers generated for BRD4-PROTAC-VHL by DiffPROTACs. (a) Generated linkers of PROTACs for the protein target and E3 in PDB 8BDT. (b) The alignment of the generated conformation and the experimentally determined conformation of the linker in PDB 8BDT, with an RMSD of 0.53 Å. (c) Generated linkers of PROTACs for the protein target and E3 in PDB 8BEB. (d) The alignment of the generated conformation and the experimentally determined conformation of the linker in PDB 8BEB, with an RMSD of 0.25 Å.

Figure 7. Distributions of the physicochemical or drug-like properties of the generated linkers. From left to right and top to bottom, the properties include anchor distance, molecular weight, AlogP, number of rotatable bonds, number of rings, and the number of hydrogen bond acceptors and donors, respectively.

Figure 8. Heatmaps of fingerprint similarities (Tanimoto similarity) of linkers generated by DiffPROTACs. After deduplication, all generated linkers were randomly sampled three times, with 10 000 samples taken each time, to obtain the results.

Table 1. Functional group occurrence in PROTAC-DB. (a) Linkers containing only carbon atoms, excluding other elements.

Table 2. Performance metrics of different methods on the ZINC and GEOM datasets.

Table 3. Performance metrics of different methods on the PROTACs dataset.

• DiffPROTACs employs the OEGT module, integrating GNN and Transformer architectures to ensure rotational equivariance within the model.
• DiffPROTACs introduces a novel diffusion model for generating PROTACs, capable of generating unique linkers based on the spatial structure of the warhead and ligand.
• DiffPROTACs is utilized to construct a comprehensive database for PROTAC research, serving as a screening library to facilitate and enhance research efforts in this domain.
• DiffPROTACs demonstrates comparable performance to the current state-of-the-art model on FBDD data, achieving a remarkable 93.86% validity in PROTAC generation.
Caloric Regulation Linked Thermogenesis in Acute Submaximal Intensity Exercise Model as The Effect of Audio Frequency Exposure

Thermogenesis is an essential physiological mechanism in both the body's thermal and energy balance. Thermal balance is closely associated with body heat homeostasis, which links thermogenesis to caloric regulation. Caloric or energy balance has been reported to involve facultative thermogenesis within skeletal muscle stimulated by exercise. Importantly, decreased energy expenditure, imbalanced energy intake, and disturbed energy balance contribute to the development of several types of obesity. Recently, music tempo and frequency have been proposed as a new raw model of exercise treatment against the progression of overweight in the population. Thus, our preliminary pre-post test randomized study aimed to investigate the physical-physiological connection between thermogenesis, caloric regulation, the acute maximal and submaximal intensity exercise model, and musical frequency/tempo with respect to body thermal homeostasis and physiological performance in younger athletes. This study involved 45 participants homogeneous in age, height, weight, heart rate, and physical fitness. Interestingly, co-treatment with high-intensity exercise, moderate-intensity exercise, and moderate-intensity exercise with a middle musical tempo/frequency decreased body temperature without relevant alteration of caloric production. Furthermore, this exercise model significantly induced caloric production and energy expenditure in a pattern similar to the placebo. Also, musical-moderate intensity exercise exposure enhanced muscle thermogenesis without causing an overheated condition during the treatment. The circulating level of the physiological-physical stress marker (cortisol) significantly decreased after musical exposure. Hence, the development of a combined physical therapy for individuals whose obesity progresses toward metabolic syndrome can contribute to the prevention of metabolic disease. This combination model may offer an alternative solution to combating overweight and obesity through musical-exercise co-treatment. However, further studies are needed to establish this model widely for global communities.

... 90 minutes with moderate intensity. This is similar to cycling exercise for 40 minutes at a submaximal intensity of 70% of maximal oxygen volume, which could enhance irisin [18]. Moderate-intensity exercise, for about one hour, increased the oxidative capacity of muscle and adipose tissue owing to irisin expression [12]. This shows that irisin functions to convert white fat to brown fat, which is very important for energy release, converting the energy of ATP synthesis into heat in the form of thermogenesis. Thermogenesis depends on uncoupling protein activity, through the uncoupling of oxidative phosphorylation in the mitochondria [19]. Thermogenesis has been explained in terms of the ratio of glycolytic oxidative enzymes to the efficiency of fatty acid oxidation in muscle, and of cycles that change ATP demand without changing ATP formation (for example, triglyceride hydrolysis and subsequent re-esterification in adipocytes). The ATP demand of each muscle contraction, mitochondrial uncoupling in brown adipose tissue, the lipolysis pathway, and the use of free fatty acids are influenced by genetic factors and hormonal control, such as insulin, thyroid hormone, and the sympathetic nervous system (SNS) [15].
Exercise improves metabolism, thermogenesis, and energy expenditure, and activates irisin to change white fat to brown fat [19]. Exercising while listening to music increases running speed (running economy), improves motivation, and decreases cortisol secretion [20]. Music contributes to a decrease in cortisol and an increase in growth hormone and HPA-axis activity, which enhances the efficiency of metabolism and the use of triglycerides. Listening to music during exercise can improve the body fat profile and physical performance [21]. Exercise is a key determinant of energy expenditure that maintains energy balance [22] by increasing fat metabolism as an energy source through increased free fatty acid oxidation [23]. Based on this, the present research aimed to determine the physiological and physio-psychological responses to maximal and submaximal exercise with music, in terms of energy expenditure and thermogenesis, using the concept of bio-psycho metabolism.

Methods This research was approved under health research ethics certificate number 106/EC/KEPK/04/2018 from the ethics commission of the Faculty of Medicine, Brawijaya University, Malang. The study examined the physiological response to acute exercise of submaximal intensity while listening to middle-rhythm music, using a Randomized Control Group Posttest Design. Three experimental groups were used: acute high-intensity exercise, acute moderate-intensity exercise, and acute moderate-intensity exercise combined with listening to middle-rhythm music. The participants were Sports Science students aged 18-20 years, male, with good VO2 max (maximal oxygen volume), proportional body height and weight, normal hemoglobin (Hb), and relatively low stress levels, and all were willing to sign the informed consent. Based on these criteria, 45 people were randomly selected and then divided into the three treatment groups. Data were collected by asking each participant to run on a treadmill for 20 minutes under high-intensity exercise, moderate-intensity exercise, or moderate-intensity exercise combined with listening to middle-rhythm music, preceded by a five-minute warm-up and followed by a five-minute cool-down of low-intensity treadmill running. High-intensity exercise in this research corresponded to 85% of maximum heart rate, and moderate-intensity exercise to 75% of maximum heart rate, while middle-rhythm music is music with a tempo of 109-120 beats/minute (the target heart rates are sketched in the example below). Participants listened to the music during the running exercise through a headset attached to a multimedia player 5 (MP5). Thermogenesis was assessed from body temperature measured before and after exercise, calorie/energy expenditure was measured during exercise, the chronic physiological response was indicated by the cortisol hormone level, and the acute physio-psychological response was indicated by heart rate per minute. Blood samples of 10 cc for the cortisol test were taken from the cubital vein by a medical officer from the Department of Health of Malang City; the research was conducted from 06.00 to 10.00 am.
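The intensity definitions above lend themselves to a quick worked example. The sketch below is illustrative only: the 85% and 75% thresholds and the 109-120 beats/minute tempo band come from the study, while the age-predicted formula HRmax = 220 - age is a common convention assumed here, not stated in the paper.

```python
# Illustrative sketch (not from the paper): deriving the heart-rate targets
# that define the exercise intensities. HRmax = 220 - age is an assumed
# age-predicted convention; the 85%/75% fractions follow the study.

def target_heart_rate(age: int, intensity: float) -> float:
    """Target heart rate (beats/min) at a given fraction of age-predicted HRmax."""
    hr_max = 220 - age            # assumed age-predicted maximum
    return intensity * hr_max

for age in (18, 19, 20):          # participant age range in the study
    high = target_heart_rate(age, 0.85)       # high-intensity group
    moderate = target_heart_rate(age, 0.75)   # moderate-intensity groups
    print(f"age {age}: high ~{high:.0f} bpm, moderate ~{moderate:.0f} bpm")

# Middle-rhythm music in the study is defined as 109-120 beats/minute.
MIDDLE_TEMPO_BPM = range(109, 121)
```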
Analysis of the blood samples for cortisol used the ELISA kit method, while the data were analyzed using the ANOVA test at the 1% significance level.

Results and Discussion The participants' characteristics, by anthropometric indicators and physiological condition, are shown in Table 1. The anthropometric results showed that body height and body weight tended to be similar across groups, while the physiological aspect was assessed by physical fitness level, using VO2 max measured with the Balke test (high category); the participants' Hb was on average normal. The participants were then treated with acute high-intensity exercise without music, moderate-intensity exercise without music, or moderate-intensity exercise with middle-rhythm music. The results are presented in Table 2. Based on the research results, there were differences in energy expenditure and body temperature (p<0.01). Mean energy expenditure in the high-intensity exercise group (group I) was higher than in groups II and III (Figure 1). Moreover, body temperature during acute high-intensity exercise was higher than in the moderate-intensity groups with and without music (Figure 2).

A growing passive lifestyle causes low energy expenditure, since the primary determinant of energy expenditure is physical activity, including exercise and physical activity during leisure time [24]. Increasing daily activity, including exercise-related physical activity, can increase energy expenditure, support energy balance, and prevent excessive energy reserves [25]. Both acute and chronic exercise produce physiological responses that change body metabolism to meet energy needs, and their intensity and duration can be managed [8]. The body's response to exercise is influenced not only by intensity, duration, and type, but also by individual status, including health status, physical condition, body composition, and physiological and psychological conditions [26]. The research results showed no significant difference in the anthropometric and physiological conditions of participants between groups, as seen in the similar body height and weight of participants, the hemoglobin level, and physical fitness: the mean VO2 max level was at the best performance level. This condition correlates with the level of energy expenditure, since energy expenditure is influenced by training level, exercise, and metabolic adaptation related to free fat mass composition and mitochondrial activity [22]. Also, the physio-psychological parameter may change under different physical treatments, as confirmed by the serum cortisol level. The concentration of circulating cortisol before exercise was relatively low; low cortisol influences metabolism, which affects energy expenditure and thermogenesis [21].
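As a minimal sketch of the group comparison described at the start of this section, a one-way ANOVA across the three groups at the 1% significance level could be run as below; the group values are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the analysis step described above: one-way ANOVA across
# the three treatment groups at the 1% significance level.
# The group data below are hypothetical placeholders, not study values.
from scipy.stats import f_oneway

high_intensity = [310, 295, 322, 301, 315]   # e.g., energy expenditure (kcal)
moderate       = [240, 255, 233, 247, 251]
moderate_music = [238, 249, 244, 236, 252]

f_stat, p_value = f_oneway(high_intensity, moderate, moderate_music)
alpha = 0.01                                  # 1% significance level, as in the paper
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: group means differ (p < 0.01)")
```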
The participants had an average body mass index of 21, categorized as normal; this matters because body proportion, excess body weight, and obesity influence the response to exercise and the level of energy expenditure [27]. Imbalance between energy intake and energy expenditure is a leading cause of the increase in some metabolic syndrome diseases [22]. Based on the research results, there was a significant difference in caloric expenditure between the three groups (p<0.01). These data indicate that high-intensity exercise increased the need for energy, so the metabolic change also increased [28]. A person's metabolic adaptation is also related to changes in energy release [25]. Besides, energy expenditure is influenced by basal energy expenditure or resting metabolic rate (RMR), the energy expended during activity, and the energy expenditure related to adaptive thermogenesis [29]. Energy release also correlates with the intensity and type of exercise. For an individual with a normal body mass index, high-intensity exercise is more beneficial for energy expenditure; by contrast, high-intensity exercise done by an individual with a body mass index ≥ 24 will cause chronic fatigue and suboptimal energy expenditure [24]. Exercise intensity and the music factor are the aspects that caused the difference in caloric expenditure and thermogenesis in the high-intensity exercise group. High-intensity exercise needs more energy than the other conditions because more active skeletal muscle contraction stimulates other metabolic systems, such as the cardiorespiratory system and the energy metabolism system. This is shown by the higher irisin secretion in skeletal muscle during high-intensity acute exercise [12]. Irisin improves mitochondrial biogenesis, which stimulates the use of fat as an energy source, and increased irisin secretion enhances energy expenditure [18]. Irisin in muscle and adipose tissue is a chemokine derivative that activates thermogenesis in white adipose tissue and increases glycemia. Besides, irisin also signals through activation of adenosine monophosphate kinase (AMPK) to enhance glucose metabolism and fatty acid oxidation. This shows that muscle is an endocrine organ with an active role in keeping metabolic balance, communicating with several tissues including adipose tissue and the heart. Exercise increases energy expenditure by about 400 kcal [30]. Irisin secretion enhances energy expenditure by activating brown fat tissue through activation of PPAR co-activator 1 alpha (PGC1-α), which stimulates fibronectin type III domain-containing protein 5 (FNDC5) [31]. Skeletal muscle is a secretory organ that communicates with other organ tissues. Muscle contraction increases peroxisome proliferator-activated receptor gamma coactivator 1 alpha (PGC-1α), which enhances fibronectin type III domain-containing protein 5 (FNDC5) and thus irisin; irisin secretion is influenced by exercise intensity [12]. Irisin secretion is influenced not only by exercise intensity but also by anthropometric factors, age, and gender, as shown by the correlation between age, gender, skeletal muscle mass, body fat mass, caloric intake, and irisin secretion [32].
Meanwhile, irisin secretion is related to increases in caloric expenditure and thermogenesis, which are also influenced by fat metabolism and exercise intensity. However, in a group of individuals with good physical fitness and normal body proportions, exercise intensity up to 85% of VO2 max still uses fat as the energy source [33]. Based on the research results, the differences in energy release, thermogenesis, heart rate, and cortisol secretion were caused by exercise intensity, which produces differences in the activation of muscle contraction, the sympathetic nervous system, and the hypothalamic-pituitary-adrenal (HPA) axis. High-intensity exercise made the muscular system more active, which in turn activated the sympathetic nerves and the HPA axis as an effort to maintain homeostasis related to energy supply and a stable body temperature in the face of the stressor of exercise intensity [16]. In response to exercise, the hypothalamus secretes corticotrophin-releasing hormone, which activates the pituitary gland to release adrenocorticotropic hormone, which stimulates the adrenal cortex to secrete cortisol; the higher the intensity and the longer the exercise, the more cortisol is secreted [34]. High cortisol secretion indicates chronic stress, while catecholamine secretion, which increases heart rate, is an indicator of acute stress; chronic stress activates the HPA axis and acute stress activates the sympathetic nerves, and both stress responses during exercise are influenced by exercise intensity [35]. Increased acute and chronic stress suppresses immunity and increases inflammation in muscle [16], while exercise at 60% of VO2 max in an untrained individual has been shown to raise cortisol and heart rate [34]. Therefore, efforts to inhibit stress during exercise are necessary so that exercise can benefit health and physical performance and optimize body function. Music used during exercise can enhance physical performance and make exercise playful, thereby inhibiting stress, improving motivation, and increasing energy expenditure while reducing perceived fatigue [36]. Without such mitigation, exercise at demanding intensity can itself raise stress, reflected in cortisol secretion and increased blood pressure. It has been shown that listening to music can reduce blood pressure and heart rate, decrease muscle tension, pain, and discomfort, and influence the nervous system [20]. Besides, music during exercise can also enhance physical performance depending on the music rhythm: moderate- and fast-rhythm music can improve work efficiency through reduced anxiety and muscle tension caused by parasympathetic stimulation [37]. Music also increases growth hormone, which is directly related to the HPA axis. Growth hormone, as an anabolic hormone, can stimulate triglyceride breakdown and mediate growth in several tissues by activating the secretion of insulin-like growth factor-1 (IGF-1). Music thus improves the efficiency of metabolism.
A previous study reported that individuals with obesity and stress who listened to classical music showed increased resting energy expenditure. Furthermore, listening to enjoyable music increased the amplitude of gastric myoelectrical activity, which can stimulate gastric motility and gastric emptying [21]. Listening to music does not influence anaerobic exercise and is more useful for submaximal-intensity exercise than for higher-intensity exercise. However, in a Wingate anaerobic test with relevant music, performed by 12 males and three females, the fatigue reaction, fatigue index, mean energy expenditure, and maximal energy were higher when listening to music than without music, and the increase was greater with fast-rhythm music than with slow-rhythm music [36]. The higher heart rate in submaximal-intensity exercise with music compared to submaximal-intensity exercise without music shows that music stimulates the sympathetic nerves, which activate catecholamine hormones to improve heart performance [20]. Thakare et al., in research with participants aged 19-21 years, showed a greater increase in heart rate in high-intensity exercise with slow-rhythm music compared to submaximal-intensity exercise without music; moderate-intensity exercise with a jazz music tempo raised the heart rate more than slow-rhythm music did [38]. Physiologically, music enhances the heart's working system and cardiac stress. Heart rate increases when listening to fast-rhythm music through increased sympathetic nervous system activity, which controls the speed and power of heart performance [39]. On the other hand, submaximal-intensity exercise with music produced lower cortisol secretion than submaximal-intensity exercise without music. Music lowers the response of the HPA axis, causing lower stress as indicated by decreased cortisol secretion [35]. Besides, music influences the central nerves, autonomic nerve fibers, and the hormonal system, resulting in decreased sympathetic activity and increased parasympathetic stimulation so that stress hormone secretion decreases [40]. Listening to music has a positive role in reducing stress and pain, improving heart function, and enhancing relaxation. Music regulates the performance of the HPA axis, the sympathetic nervous system (SNS), the immune system, metabolic regulation, energy balance, and metabolism in recovery, through increased fat metabolism and elimination of fatty acids after exercise [41]. The research results showed that submaximal-intensity exercise with music conserved energy and maintained body temperature, with the lowest cortisol secretion of the three groups. Based on these results, exercise is recommended as a strategy to increase caloric expenditure and produce meaningful changes in thermogenesis to prevent the rise of obesity and metabolic syndrome disease, but exercise intensity should be taken into account. The intensity should match the individual's condition, and the exercise should be interesting and playful; therefore, combining exercise with music at the right tempo is necessary.
Conclusion Based on the research results, it can be concluded that acute maximal/high-intensity exercise without musical treatment stimulates increased energy expenditure, body temperature, cardiovascular tension, and physio-psychological stress. Gradual changes in thermogenesis-related heat production and body temperature were significantly observed in the combination of moderate-intensity exercise with middle-rhythm music. Moreover, the co-treatment of middle-rhythm music and moderate-intensity exercise decreased the psychological stressor hormone without any significant alteration in energy expenditure. Further study is necessary to explore the physio-psychological changes produced by chronic moderate-intensity exercise combined with music as a future model for combating obesity and metabolic syndrome through physical therapy.
2019-05-07T13:28:59.533Z
2019-04-17T00:00:00.000
{ "year": 2019, "sha1": "f1abb7ed2c60cce2b5acb92e215e1ca3842bbb18", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/515/1/012069", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ef8b0622b8f35eb08d917b6724c774b2066a7df8", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
266778494
pes2o/s2orc
v3-fos-license
A cross-sectional assessment of the effects of select training modalities on vaccine cold chain management ABSTRACT Background Vaccines offer arguably the most cost-effective public health intervention. Vaccine supply chain management, a critical building block, faces many human resources challenges, mainly due to the special attributes of vaccines. Objective This study attempted to measure the effect of training on vaccine cold chain handler knowledge and practices. Methods A cross-sectional research design, using predominantly quantitative data collection techniques, was used. Facilities that had offered vaccination services for more than a year and report through the HMIS system were eligible for selection. Observation checklists and structured questionnaires were used. SPSS was used to analyse data. Results Vaccine cold chain management among the study group had an average score of 65.33% (range 31-85%). The average knowledge score among the study respondents was 62.42% (range 45-95%). The knowledge of respondents generally increased with each additional training modality. Conclusions The status of VCCM, at about 65.33%, is below the target of 80% set by the EVM. The trainings have an effect on both the knowledge of handlers and their practice, especially when deployed in a multi-pronged design, and thus these trainings need to be aligned to achieve synergy. ABBREVIATIONS CCE, Cold Chain Equipment; DHIS2, District Health Information Systems 2; DHO, District Health Officer; DPT, Diphtheria, Pertussis, Tetanus; DVS, District Vaccine Stores; EPI, Expanded Program for Immunisation; EVM, Effective Vaccine Management; FEFO, First Expiry First Out; GAVI, Global Alliance for Vaccines and Immunisation; HMIS, Health Information Management Systems; IRC, International Rescue Committee; KII, Key Informant Interview; LIAT, Logistics Indicator Assessment Tool; PATH, Program for Appropriate Technology in Health; PHC, Primary Health Care; QPPU, Quantification and Planning and Procurement Unit; SOPs, Standard Operating Procedures; SPSS, Statistical Package for the Social Sciences; UNEPI, Uganda National Expanded Program for Immunisation; UNICEF, United Nations Children's Fund; VPD, Vaccine Preventable Diseases; VVM, Vaccine Vial Monitors; WHO, World Health Organisation Background Vaccines have a pivotal role in public health, having been described as 'the most cost-effective public health intervention', with a return on investment of 16 USD, accrued from cost savings and reduced illness-induced loss of productivity, for every 1 USD spent on vaccination (Ozawa et al., 2016; Riedmann, 2010; Sillanp, 2015). To obtain the full benefits of vaccination, it is imperative that the vaccines are maintained in appropriate and adequate cold chain conditions throughout the supply chain (Lloyd & Cheyne, 2017). Any break in the cold chain may result in an unrecoverable loss in potency, lowering effective coverage and leading to outbreaks of some vaccine-preventable diseases regardless of reported good coverage (The Maternal and Child Survival Program (MCSP), 2016; Orenstein et al., 1985). At the facility level, all the vaccines are stored within a recommended temperature range of 2-8°C in most cases (Kartoglu & Milstien, 2014). Critical to note is that these vaccines fall into two major categories: those sensitive to heat events and those sensitive to freeze events (Hanson et al., 2017). These special sensitivities determine the section of the cold chain storage equipment where these
categories can be stored. Exposure to heat events can be tracked on the Vaccine Vial Monitor (VVM), a chemical indicator that records cumulative heat events and reports them as a colour change (Eriksson et al., 2017). Freeze events, which damage freeze-sensitive vaccines more quickly, do not have any form of direct tracker and as such need to be monitored using temperature monitoring equipment (Hanson et al., 2017). Any suspicion of a freeze event should be investigated using the shake test (Hanson et al., 2017).

An adequate number of knowledgeable, skilled, and motivated human resources is critical in maintaining an effective and efficient vaccine cold chain (Steele, 2015). Unfortunately, this is not the case on the ground, as seen in the citations below. In Ethiopia, about 45.4% of the vaccine handlers were classified as not having satisfactory knowledge and skills, for example, an inability to read and interpret the readings on the temperature monitoring equipment (Lutukai et al., 2019). Suboptimal data use, analysis, and interpretation into actionable decisions have also been reported among vaccine handlers (Woldemichael et al., 2018). This lack of adequate knowledge has been determined to be a predictor of practice. In Ghana, whereas the knowledge of most respondents was generally considered satisfactory, it was noted that the application of this knowledge was limited. For example, of the 100% who had heard of VVM, 85% could correctly read a VVM but only 19% could correctly state the implication if the colour of the inner square and outer circle of a Vaccine Vial Monitor (VVM) match (Osei et al., 2019). The use of unqualified community vaccinators, who lack formal training or formal contracts with the facility, was observed in Uganda (Karlsson, 2012). These observed gaps affect vaccine management in many ways, most importantly causing breaks in the cold chain and exposing the vaccines to quality compromise, which negatively affects the potency and safety of vaccines at the vaccination site (Zaffran et al., 2013). As an affirmative action to address these gaps, a plethora of human capital development interventions have been deployed to build the capacity of vaccine handlers at all levels. These include the creation of international professional networks (Brown et al., 2017); regional centres of excellence, such as the one for the East African region hosted in Kigali, Rwanda, with a focus on vaccines, immunisation, and the health supply chain (Brown et al., 2017); widely available certifications at certificate, diploma, degree, and postgraduate levels; and supporting toolkits and resource repositories (Brown et al., 2017). The knowledge, skills, morale, and performance of staff can be enhanced through interactive in-service training, usually organised as workshops (Masresha et al., 2020), pre-service and in-service training, and supervision (Adebimpe & Adeoye, 2021; Masresha et al., 2020). Despite this conducive training environment, it was noted that 60% of supply chain roles, including those in vaccine cold chain management, are performed by human resources without supply chain certifications (Kasonde & Steele, 2017). A study in Kenya showed that little or no time was allocated to EPI topics in pre-service training and that lecturers in nursing schools also needed refresher training (Zaffran et al., 2013). These factors jointly call for a focus on on-the-job training; many comprehensive, relevant trainings are known, yet access is limited and there is a general lack of strategy to deliver training.
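The storage rules described above (the 2-8°C band, heat events tracked by the VVM, freeze events checked with the shake test) reduce to a simple classification of each temperature reading. The sketch below is a hedged illustration: the thresholds follow the text, but the function and its follow-up actions are illustrative and not drawn from any cited SOP.

```python
# Hedged sketch: classifying a fridge reading against the 2-8 °C storage
# band described above. Thresholds follow the text; the function name and
# follow-up messages are illustrative, not from any cited guideline.

FREEZE_LIMIT_C = 2.0   # below this, freeze-sensitive vaccines are at risk
HEAT_LIMIT_C = 8.0     # above this, heat-sensitive vaccines are at risk

def classify_reading(temp_c: float) -> str:
    """Flag a single temperature reading as a freeze event, heat event, or in range."""
    if temp_c < FREEZE_LIMIT_C:
        return "freeze event - run shake test on freeze-sensitive vaccines"
    if temp_c > HEAT_LIMIT_C:
        return "heat event - check VVM colour on affected vials"
    return "in range"

# Example readings from a monitoring log (hypothetical values):
for reading in (4.5, 1.2, 9.3):
    print(reading, "->", classify_reading(reading))
```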
The commonest modalities cited include, but are not limited to, offsite training, onsite training, support supervision, provision of reference materials (manuals, guidelines, SOPs, and job aids), and self-paced learning on virtual platforms (Kasonde & Steele, 2017). The outcome of these interventions in knowledge and practice improvement, however, has not been extensively studied in Uganda, particularly in the Lango sub-region. This study set out to determine the effect of the human resources capacity-building interventions on the knowledge and practices of the vaccine cold chain handlers. Study setting and intervention The study was conducted in the Lango sub-region, located in northern Uganda, with a population of close to 2.5 million people. The region has 10 administrative districts, including the recently chartered Lira city, with a mix of public and private facilities across all levels of care offering immunisation services. The city, which is the most urban business hub in the region, is about 344 km from the country's capital, Kampala. The region has a tropical climate characterised by very hot dry seasons, which can prove challenging for cold chain logistics. The facility mix, weather, and poverty levels jointly formed the grounds for choosing the sub-region for the study. Study design A cross-sectional study design was employed to assess the effects of HR interventions on knowledge and practice. Quantitative techniques were deployed to measure interventions, knowledge, and practices. With a cross-sectional study, rich data on exposure and outcome were abstracted quickly and all at a single point in time. Inclusion criteria Health facilities in the Lango sub-region that had offered immunisation services for at least one year before May 2023 were included in the study. Additionally, facilities that granted the investigators permission to take part in the study were included. Exclusion criteria Health facilities in the Lango sub-region that did not have functional cold chain equipment by May 2023 were excluded from the study. The estimated sample size of 57 was stratified by district and then by level of care to obtain a representative sample composition. Within the strata, simple random sampling was used to select the facilities to recruit, as follows: 1 RRH, 3 GH, 6 HCIV, 25 HCIII, and 22 HCII (a sampling sketch follows at the end of this section). Data collection and analysis A structured questionnaire, developed with ideas from multiple data collection tools of similar studies (Mohammed et al., 2021; Woldemichael et al., 2018), was used to collect data on participant biodata, individual and professional characteristics, and knowledge. The tool was tailored to collect sequential, specific, relevant cold chain knowledge areas required of a cold chain handler. Quantitative data on cold chain practices at the facility were obtained using observation checklists, which were used to measure on-the-spot cold chain practices at the facility. The data collection was conducted by the PI to ensure consistency, reliability, and validity. The raw coded data from the tools were entered in Microsoft Excel and exported to the Statistical Package for the Social Sciences (SPSS) version 23 for analysis.
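The sampling described above can be sketched as follows. The 15% LIAT rule (noted later in the text: 375 * 15% = 56.25, rounded up to 57) and the 1/3/6/25/22 allocation come from the study; the per-stratum facility lists below are hypothetical placeholders.

```python
# Sketch of the sampling step described above. The 15% rule and the
# per-stratum allocation are from the study; facility IDs are placeholders.
import math
import random

population = 375
sample_size = math.ceil(population * 0.15)   # 56.25 rounds up to 57 facilities

# Per-stratum allocation as reported in the study (sums to 57).
allocation = {"RRH": 1, "GH": 3, "HCIV": 6, "HCIII": 25, "HCII": 22}
assert sum(allocation.values()) == sample_size

# Hypothetical sampling frames; real frames would list eligible facilities.
frames = {level: [f"{level}-{i:03d}" for i in range(1, 81)] for level in allocation}

random.seed(2023)  # fixed seed so the illustration is reproducible
sample = {level: random.sample(frames[level], k=n) for level, n in allocation.items()}
print(sample_size, {level: len(chosen) for level, chosen in sample.items()})
```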
Descriptive statistical techniques were used to analyse data, and results were presented as frequencies, percentages, averages, ranges, and standard deviations, and later categorised and presented as dichotomous categories based on satisfactory or non-satisfactory grading. The outputs were then summarised using tables and, where applicable, visualisation was enhanced using appropriate figures. Relevant tests, particularly chi-square tests, were run to determine the associations between the dependent and independent variables. A critical value of p < 0.05 was considered the cut-off for statistical significance to assert an association. Health facility characteristics A total of 57 facilities, stratified to represent all levels of care and ownership, were targeted out of 375 health facilities to participate in the study. All 57 facilities were enrolled in the study, yielding a response rate of 100%. The ownership spread was as follows: public 47 (82.46%), private not-for-profit (PNFP) 9 (15.79%), and private-for-profit 1 (1.75%). Respondents' demographics Table 1 describes the respondent demographics. There was a gender balance among the respondents. Professional certificates, 30 (52.6%), followed by professional diplomas, 21 (36.8%), were the most predominant levels of education. The majority of the respondents were EPI focal persons, 52 (91%). The majority of the respondents were nurses and midwives, with 46 (80.7%) having more than 1 year of experience in vaccine cold chain management. The status of vaccine cold chain management practices and performance in participating facilities Table 2 shows that, among the many areas of vaccine cold chain management practice assessed, the areas of best performance included VVM application and vaccine storage unit access control at 100% each, while vaccine storage unit positioning also had a good compliance level at 91%. Temperature excursion incident reporting and wastage recording were the worst performed, with 0 (0%) and 1 (1.75%) correct practice, respectively. Vaccine cold chain management among the study group had an average score of 65.33%, with a maximum score of 85% and a minimum of 31%. On a binary scale with a practice score of 60% or more as the cut-off for a satisfactory level of performance, the majority of facilities, 46 (80.7%), were considered to have a good level of performance, while 11 (19.3%) were considered as having an unsatisfactory level of practice. Training deployed to improve the knowledge and skills of vaccine cold chain handlers Table 3 shows the exposure level to the trainings investigated in this study. Reference materials were the most common modality at 98.2%, while the use of the internet was the least utilised modality for gaining knowledge and skills at 42%. Table 4 shows the multiplicity of the different trainings. Key to note is that all respondents had exposure to more than one modality of training. Knowledge of vaccine cold chain handlers that participated in the study Using a 22-theme-area questionnaire, the average knowledge score among the study respondents was 62.42%, with a minimum score of 45% and a maximum score of 95%. On a binary scale with a knowledge score of 60% or more as the cut-off for a satisfactory level of knowledge, the respondents were almost equally split, with 28 (49.12%) not satisfactory and 29 (50.88%) with a satisfactory knowledge level.
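As a hedged sketch of the chi-square procedure named in the analysis paragraph above, a test of independence between exposure to a training modality and satisfactory knowledge could look like the following; the counts are hypothetical, not the study's contingency tables.

```python
# Minimal sketch of the association tests described above: a chi-square
# test of independence on a 2x2 table of training exposure versus
# satisfactory knowledge. Counts are hypothetical, not study data.
from scipy.stats import chi2_contingency

#                 satisfactory  not satisfactory
table = [[20, 8],    # exposed to a given training modality
         [9, 20]]    # not exposed

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
if p < 0.05:                       # cut-off used in the study
    print("Association is statistically significant")
```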
The effects of the training deployed on the knowledge of respondents Five training modalities were investigated for their effects on the knowledge of handlers. Of these, peer-to-peer learning (learning from a colleague at work) and the use of the internet had the greatest positive impact on knowledge gain, as demonstrated by the mean score difference between the exposed and non-exposed groups. The difference was, however, not statistically significant. Technical support supervision was found to have a negative impact on knowledge, as demonstrated by the reduction of the mean knowledge score from 68.3% in the non-exposed group to 62.09% in the exposed group. Figure 1 shows that the mean knowledge score generally increased with the number of interventions a participant had previously been exposed to, with an average mean increase of 3% for every additional intervention. The effects of the interventions deployed on vaccine cold chain management practices Table 5 shows how the mean vaccine cold chain management practice scores change for a number of background and direct interventions. Particular mention can be made of facility ownership, which had a p-value of 0.007, and respondent qualification, with a p-value of 0.000. Whereas training in itself did not have a statistically significant impact on practice, the duration since the training had a p-value of 0.005, implying that it affected practice. The use of the internet was the final intervention with a significant effect on practice, with a p-value of 0.045. Unlike for knowledge, the number of interventions did not seem to directly impact the mean practice scores. The effect of the knowledge of respondents on vaccine cold chain management Table 6 highlights that the study found a strong association between participant knowledge and practice, with a p-value of 0.000. Discussion The status of vaccine cold chain management practices This current study showed that 95% of the facilities had temperature monitoring equipment. This was higher than in a study in Cameroon, where 76% of the facilities in the study population had functional temperature equipment (Yakum et al., 2015). However, of the 95% with temperature monitoring equipment in this current study, 5 (9%) had non-functional equipment and were unaware of it. Further analysis showed that, of these, 4/5 had temperature readings for the morning of the visit despite having non-functional equipment in the vaccine storage unit; this casts doubt on the reported readings in the temperature charts across the study population. In this study, 77% of the facilities monitored had recorded temperature twice a day; this was higher than a study in Ethiopia, which had only a 51.2% complete temperature monitoring rate (Feyisa, 2021). This current study showed that 95% of the respondents knew that a shake test was necessary to investigate a suspected freeze event; this was not in agreement with a study conducted in Uganda and Senegal, which revealed a challenge in freeze event detection (Luzze et al., 2017). In this study, despite the demonstrated knowledge of shake tests, only 39% knew how to safeguard freeze-sensitive vaccines from freeze events by placing them in the right section of the vaccine storage unit; this agreed with the same study conducted in Uganda and Senegal.
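The roughly 3% rise per additional modality reported for Figure 1 can be recovered with a simple linear fit, as sketched below; the group means used here are hypothetical and chosen only to illustrate the slope, since the study's per-group means are not reproduced in the text.

```python
# Sketch of the trend reported above: mean knowledge score rising by
# roughly 3 percentage points per additional training modality.
# The score values are hypothetical, chosen only to illustrate the fit.
import numpy as np

n_modalities = np.array([2, 3, 4, 5])
mean_knowledge = np.array([57.0, 60.5, 63.0, 66.5])   # hypothetical means (%)

slope, intercept = np.polyfit(n_modalities, mean_knowledge, deg=1)
print(f"~{slope:.1f} percentage points per additional modality")  # ~3.1
```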
This study found an outstanding 100% non-compliance with the recommendation to track wastage; this reiterated the findings of a previous study in Uganda and Senegal, which demonstrated a low level of wastage tracking (Luzze et al., 2017). Training deployed to improve vaccine cold chain management In this study, 5 training modalities were interrogated, and it was shown that 57 (100%) of the participants had been exposed to at least a mix of two different trainings. The study further shows that the average knowledge score increased by about 3% with every additional exposure to a training modality. These findings were in agreement with another study that evaluated the human capital factor of vaccine management in observed practices; according to the researchers, a multi-pronged training deployment yielded better results when aiming to improve practices (Kasonde & Steele, 2017). Knowledge of vaccine handlers This current study found that 50.9% of the vaccine handlers had satisfactory knowledge of vaccine cold chain management principles. This finding is similar to a study in Ethiopia, in which 54.6% of the vaccine handlers were classified as having satisfactory knowledge (Woldemichael et al., 2018). However, it differs from a study in Yemen, which found that 80% of handlers had sufficient knowledge (Sule, 2022). In this current study, the investigators demonstrated that 81% of the respondents could correctly define a VVM; this may be slightly lower than in a study in Ghana, which showed that 100% of the respondents had heard of VVM (Osei et al., 2019). In this current study, however, the investigators went a little deeper than merely having heard of a VVM, asking respondents to define it. Furthermore, in this study, 70% could correctly identify, from a picture of vaccines at different VVM levels, which vaccine should be used first based on the VVM readings. This tested both the ability to read a VVM and the ability to decide based on the reading. Although lower than the 85% who could correctly read a VVM in the Ghana study, it was higher than the 19% who could state the implication of the inner colour matching the outer colour in the same study (Osei et al., 2019). Association between training and practice In this current study, experience was found to be a non-contributory factor for both knowledge and practice. This was contrary to a study in India, which concluded that longevity in service (experience) had an effect on practice (Osei et al., 2019). Also, cadre (referred to as qualification in this current study) was non-impactful on practice, contrary to the assertions of the study in India (Osei et al., 2019). Association between knowledge and practice In this current study, the average knowledge score of all the participants was 62.42%, while the average practice score was 65.33%. This differed from the paradox observed in the Nigerian study, where practice scores lagged behind knowledge at 77.2% and 83.9%, respectively (Adebimpe & Adeoye, 2021). Study limitations Being cross-sectional, the study cannot conclude with certainty that exposure to the trainings and the knowledge and practice outcomes are associated as depicted in the study population. To arrive at such a conclusion, a controlled study would yield more reliable results. The biggest proportion of the study participants were exposed to at least 2 trainings; the outcomes were therefore confounded when each training was analysed in isolation.
The study focused on the lead personnel for vaccine cold chain management at the facilities, under the assumption that their knowledge and practices are the closest proxies to the facility's level of practice. This, however, may not entirely be the case. As such, a follow-up study involving more stakeholders needs to be undertaken to cover the gaps. Conclusions The study found that the level of practice, at 65.33%, is below the 80% WHO target set in the EVM, although all the types of training investigated had been utilised across the board among the facility vaccine cold chain handlers. Each cold chain handler had been exposed to a minimum of two different types of training combined. The knowledge of the vaccine cold chain handlers is above average at 62.42% but below the WHO target of 80% set out in the EVM. The study found that the trainings have an incremental effect on the knowledge of handlers and an advantageous effect on their practices at the facility level. It also found that the deployment of multiple trainings yielded slight but increasingly better results, as demonstrated in the results section above. The study also found that, as would be expected, knowledge impacted practice, highlighting that efforts made to increase knowledge also impact practice in the long run. Notes on contributors Dr. Aguma Daniel is a pharmaceutical supply chain specialist who holds an MSc in Health Supply Chain Management from the EAC Regional Centre of Excellence for Vaccines, Immunization and Health Supply Chain Management, College of Medicine and Health Sciences, University of Rwanda, Kigali, Rwanda. Aguma is passionate about his patients and has focused his energies on improving supply chain efficiency so that his patients get the right medicines in the right quality, quantity, and cost. He is dedicated and committed to making his contribution to the health sector in Uganda, the EAC, Africa, and the world at large. Currently working for the Ministry of Health in the northern region of Lango, Aguma heads the supply chain technical working group of the Lango region and has made strides in synergizing the individual efforts of the vast supply chain stakeholders in the region, seen in the rising trend of most supply chain indicators in the region. Theogene Rizinde has more than 15 years of research and teaching experience in higher learning institutions. Theogene is a Lecturer at the University of Rwanda (UR). He was head of the Department of Applied Statistics at the University of Rwanda in the School of Economics, College of Business and Economics, for 4 years from 2018. He is the author of more than 6 journal articles and 3 book chapters, and he is an international consultant. Theogene holds a master's degree in Mathematical Modelling and Scientific Computing, and he is finalizing his PhD in Data Science applied to Biostatistics.
Dr. Marie Francoise Mukanyangezi is a PhD holder in Medical Science from Gothenburg University, Sweden. Her PhD research focused on a translational study of the immune responses of the uterine cervix in cases of inflammation and infection. Following her graduation, Dr Marie Francoise is determined to contribute to the Rwandan Government's efforts in the fight against cervical cancer. Currently teaching Research Methodology in both the UG and PG Pharmacy programs at the University of Rwanda, Dr Marie Francoise aims not only to promote medical research in the field of women's cancers but also to promote health professionals' education.

Using the USAID | DELIVER PROJECT guidance in the Logistics Indicator Assessment Tool (LIAT), a sample size of 15% of the population is representative for the assessment of pharmaceutical logistics indicators (USAID | DELIVER PROJECT TO 1, 2009): 375 * 15% = 56.25 facilities.

Figure 1. Comparison of means against the number of different types of training participants were exposed to.

Dr. Joseph Oloro is a Lecturer of Pharmacology at the Department of Pharmacology & Therapeutics, Faculty of Medicine, Mbarara University of Science and Technology, Uganda. His major area of interest is Toxicology. He holds a Diploma in Clinical Medicine and Community Health, a BSc in Pharmacology, and an MSc in Pharmacology, and is currently pursuing a PhD in Toxicology. He previously worked as a Clinical Officer at Pope John Hospital Aber, Northern Uganda, and Kampala International University Teaching Hospital, Uganda. He has taught at the university level from 2006 to date at different levels. At Mbarara University, he is the Chairman of the Faculty of Medicine Curriculum Review Committee and a member of the Faculty of Medicine Postgraduate Committee. He had training in Basic Laboratory Animal Science at the University of Utrecht in the Netherlands, with a species-specific specialization in rodents, in 2019. He is also a member of several grants at the local level. Dr Innocent Hahirwa is a Senior Lecturer and a Senior Consultant Pharmacist with a PhD in Biomedical and Pharmaceutical Sciences/Clinical Toxicology from the University of Liege (Belgium). He has extensive experience in teaching, research, administration, and clinics. In addition to his formal academic training, Dr Hahirwa has been trained in different areas of the pharmacy profession including pharmacy profession regulation, drug registration, pharmacovigilance, handling of pharmaceutical (including hazardous) products, and clinical trials. Dr Hahirwa teaches courses related to Toxicology, Pharmacology, Pharmacovigilance, and Clinical Trials at both undergraduate and postgraduate levels, and has supervised a number of research works for UG and PG students. He is also in charge of clinical pharmacy care and training at Kigali University Teaching Hospital. His research areas of interest include mainly Toxicology, Pharmacology, and Clinical Biology, as well as pharmacy practice and regulation. Having occupied different managerial positions, including Chairperson and member of the National Pharmacy Council Board, Head of the Pharmacy Department, and Deputy Dean of the School of Medicine and Pharmacy at the University of Rwanda for several years, Dr Hahirwa has strong leadership and management skills.
Table 2. Facility compliance with set standards of key vaccine cold chain management practices.
Table 4. Number of different types of training participating facilities were exposed to.
Table 5. Comparison of vaccine cold chain management score means against the various interventions.
Table 6. Correlation between knowledge scores and vaccine cold chain management practice scores.
2024-01-06T16:17:42.612Z
2023-12-27T00:00:00.000
{ "year": 2023, "sha1": "9d320dbc5f695b223718cf5ddfd0f2d33d8c7114", "oa_license": "CCBYNC", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/20523211.2023.2292717?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2012347cfbcb6d5018fab1dcc4c5906ad44ba622", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231864058
pes2o/s2orc
v3-fos-license
Online Mental Health Animations for Young People: Qualitative Empirical Thematic Analysis and Knowledge Transfer Background Mental ill-health is one of the most significant health and social issues affecting young people globally. To address the mental health crisis, a number of cross-sectoral research and action priorities have been identified. These include improving mental health literacy, translating research findings into accessible public health outputs, and the use of digital technologies. There are, however, few examples of public health–oriented knowledge transfer activities involving collaborations between researchers, the Arts, and online platforms in the field of youth mental health. Objective The primary aim of this project was to translate qualitative research findings into a series of online public mental health animations targeting young people between the ages of 16 and 25 years. A further aim was to track online social media engagement and viewing data for the animations for a period of 12 months. Methods Qualitative data were collected from a sample of 17 youth in Ireland, aged 18-21 years, as part of the longitudinal population-based Adolescent Brain Development study. Interviews explored the life histories and the emotional and mental health of participants. The narrative analysis revealed 5 thematic findings relating to young people’s emotional and mental health. Through a collaboration between research, the Arts, and the online sector, the empirical thematic findings were translated into 5 public health animations. The animations were hosted and promoted on 3 social media platforms of the Irish youth health website called SpunOut. Viewing data, collected over a 12-month period, were analyzed to determine the reach of the animations. Results Narrative thematic analysis identified anxiety, depression, feeling different, loneliness, and being bullied as common experiences for young people. These thematic findings formed the basis of the animations. During the 12 months following the launch of the animations, they were viewed 15,848 times. A majority of views occurred during the period of the social media ad campaign at a cost of €0.035 (approximately US $0.042) per view. Animations on feeling different and being bullied accounted for the majority of views. Conclusions This project demonstrates that online animations provide an accessible means of translating empirical research findings into meaningful public health outputs. They offer a cost-effective way to provide targeted online information about mental health, coping, and help-seeking to young people. Cross-sectoral collaboration is required to leverage the knowledge and expertise required to maximize the quality and potential reach of any knowledge transfer activities. A high level of engagement is possible by targeting non–help-seeking young people on their native social media platforms. Paid promotion is, therefore, an important consideration when budgeting for online knowledge translation and dissemination activities in health research. Introduction If health is the goal, biomedical interventions are not the only means to it. A broadened perspective expands the range of health-promoting practices and enlists the collective efforts of researchers and practitioners who have much to contribute from a variety of disciplines to the health of a nation [1]. Mental health is one of the most significant health and social issues affecting young people globally [2][3][4]. 
Mental disorders are the leading cause of disability for people aged 10-24 years [5]. Developing mental ill-health during youth places young people at risk of enduring mental health difficulties [6], which are accompanied by a myriad of social, vocational, and relational consequences [2,3]. Thus, promotion, prevention, and early intervention for young people who may be at risk of developing mental ill-health are global health imperatives [7][8][9][10][11][12]. The Lancet Commission on Global Mental Health and Sustainable Development [13] has recommended a broad range of approaches across multiple sectors to address the global mental health crisis. Included in these are health promotion and the need to hear from those who have experienced mental health difficulties. Lack of knowledge about features and signs of mental health difficulties (mental health literacy) and how to access support are both associated with mental health treatment avoidance or delay [14]. However, public health campaigns have been shown to be effective in changing both attitudes and intended behavior, including help-seeking [15][16][17][18]. Mental health campaigns that promote mental health literacy, personalize and normalize the experience of mental health difficulties, and have a recovery orientation have been found to both reduce stigma and promote help-seeking behavior [15,[18][19][20]. The effectiveness of public health campaigns can be enhanced by ensuring that messages are well-designed and target and reach intended audiences [16]. In the field of youth mental health, the use of digital technologies and web-based platforms has been identified as an essential way of reaching young people and delivering both mental health information and support [13,21]. At least 91% of Europeans aged 16 to 29 years use the internet on a daily basis [22], and evidence suggests that young people are turning to web-based platforms to access health and mental health information and advice [23][24][25][26]. Findings from a recent survey of over 19,000 Irish youth suggested that, after family and friends, the internet is where 20% of adolescents and 33% of young adults go to informally seek information or support on mental health [24]. The anonymity, ease of access, absence of financial or educational barriers, and the nonstigmatizing environment offered by web-based mental health platforms have been identified as positive features of web-based mental health information by young people [27]. Thus, the internet is an ideal space for public health knowledge transfer outputs. In their study, Wetterlin et al [23] found that 72.3% of respondents aged 17-24 years rated access to videos explaining mental health issues as highly important on web-based platforms. Among the many multimedia formats that can be used, animations have particular potential for public health communication [28]. They have the potential to provide strong symbolic representation of concepts. Additionally, as they are often short in length, they are considered to be an efficient way to communicate complex issues succinctly, to promote learning [29], and to influence intentions to change health-focused behavior [30]. Importantly, they also offer the potential to communicate health information across all levels of literacy [31,32]. This is particularly the case for spoken animations, which have been found to be the most effective way to communicate complex health information to people with low literacy levels [28,31]. 
In their study, George et al [32] found that people's responses to video animations were overwhelmingly positive, with most perceiving animation to be more engaging and relatable than other information video formats. Pacing, tone, and character rendering were rated as important factors in individuals' responses to animations. Although increasingly a requirement of health research funders [33], there is a dearth of published material documenting knowledge transfer activities in the field of youth mental health research. In this paper, we describe a collaborative knowledge transfer project involving the translation of qualitative research findings on young people's emotional and mental health into online public health animations. The project was conceived in response to a Knowledge Exchange and Dissemination Scheme funding call from the Health Research Board in Ireland. The scheme supports dissemination activities aimed at the general public or specific subgroups of the general public and is open to existing Health Research Board grant-holders. HC, the lead author, was conducting qualitative research on the lives and mental health of young people as part of a Health Research Board grant-funded PhD. Emergent findings from her research had provided compelling insights into young people's lived experiences of emotional and mental health struggles. HC recognized that the reach and impact of her research could be increased significantly if the findings could be meaningfully and creatively translated and shared with other young people. This resulted in a successful application for the Youth Mental Health Animation Creation Project by HC. The aim of the project was to develop engaging and accessible public mental health animations for young people. The project involved a collaboration between research (Royal College of Surgeons in Ireland), the Arts (the Institute of Art, Design, and Technology [IADT]), and the online youth health sector (SpunOut). The IADT Animation department was invited to join as a project partner because of its previous experience in translating complex and emotive material through animation for health and mental health organizations in Ireland. To maximize the potential reach of the animations, Ireland's leading health information website for young people aged 16 to 25 years, SpunOut, also joined as a project partner. SpunOut has over 1.2 million unique users per year with an average of 180,000 individuals accessing content per month [34]. The project was conducted in phases: research data collection and analysis; developing narrative scripts using qualitative data; creating and promoting the animations; and collection and analysis of online engagement and view data. Study Population Qualitative data were collected from 17 young people (10 male, 7 female) aged 18-21 years from the Adolescent Brain Development study [35,36], a longitudinal, epidemiological, population-based study that has been examining mental health and brain development among Irish youth since 2007. At the time of the animation project, 3 waves of data collection had been completed: (1) a baseline clinical interview study of 211 young people aged 11-13 years; (2) a follow-up clinical interview study of 86 individuals aged 14-18 years; and (3) a nested qualitative follow-up study with a subsample of 17 individuals aged 18-21 years.
The aim of the qualitative study was to explore young people's life narratives with a focus on adverse life experiences, interpersonal relationships, mental health, and subjective well-being. Findings from the 17 individuals who took part in the qualitative study at follow-up 2 formed the basis of the animations. Data Collection For the qualitative study, data were collected using in-depth qualitative interviews. These were conducted by HC from May 9 to July 25, 2016. Interviews lasted between 45 minutes and 1 hour 50 minutes. A semistructured interview schedule was used to explore participants' early family life experiences, their experiences of adverse or stressful life events, their mental health, their subjective well-being, their relationships with family and peers, their self-perception, their educational and vocational experiences, and their satisfaction with life. These were explored over each individual's life course. Written consent, which included consent to audio-record study interviews, was obtained from all participants. Participants were compensated for their time with a gift voucher. Audio recordings were transcribed by an external transcription agency and were subject to a nondisclosure agreement. All transcripts were subsequently checked for accuracy by HC. Data Analysis Interview data were analyzed using narrative analysis. Narrative analysis refers to a suite of methods that focus on the interpretation of individuals' lives as told in storied form [37,38]. Narrative analysis recognizes that all knowledge is constructed through multiple subjective interpretations of an individual's lived experiences and involves a dynamic interplay of subjectivity, perception, meaning, and context involving both the individuals who tell their stories and the researchers who listen and interpret those stories [39]. Specifically, as noted by Kirkman [40], narrative theory offers researchers the ability to "both to retain the complexity of the individual lives they study and to investigate multiple interactions among individuals and cultures." As a method, it focuses on the stories people tell about their life experiences across time, each of which is understood to have specific meaning to the person telling their story [41]. For this study, thematic narrative analysis [38] was used to identify themes within and across individuals' life stories. Although some forms of narrative analysis focus on both story content and how people tell their stories, the exclusive focus of thematic narrative analysis is the content of the stories that people tell [38]. However, unlike other thematic methods, such as grounded theory, it focuses on maintaining the integrity of individuals' stories during the analysis rather than on extracting decontextualized themes across cases [38]. Drawing specifically on the work of McCormack [39], the analysis began with the construction of interpretive life story summaries for each participant. This process involved repeated listening to and reading of the qualitative interviews, during which notes and memos were documented. Each individual's life story was then mapped visually (in a mind map format and sequentially, from birth to the time of interview), and a life history summary was written for each individual based on each life story as told by the individual and interpreted by HC. Life story summaries were then examined for key themes within each individual's life story.
The analysis method used to identify themes for each individual was that described by Braun and Clarke [42,43]. It involved the generation of inductive thematic codes for each participant based on both the manifest and latent themes across their life stories. These codes were then examined and combined into broader descriptive themes, which included a number of themes relating to participants' emotional and mental health. Once coding was completed for each individual, findings were compared across all participants to identify any shared themes across the sample as a whole. For the animation project, we focused only on thematic findings relating to the mental health of individuals during their midadolescent and early adult years. The rationale for this was that the midadolescent and early adult phase of the lifespan is a peak period of risk for the onset of mental health difficulties [6,44]. We wanted the animations to reflect mental health experiences reported by young people during this potentially vulnerable phase of their lives. It also fit with the 16-to-25-year-old target age range of SpunOut. Five dominant mental health themes were identified across participants' subjective accounts of issues relating to their emotional and mental health during their midadolescent and early adult years. These were Anxiety, Depression, Feeling Different, Loneliness, and Being Bullied. These formed the basis for each animation.

Developing Narrative Scripts Using Qualitative Data

With evidence that videos of no more than 2 minutes in duration are optimal to maximize viewer attention and engagement [45], we aimed to create 5 animations of between 60 and 120 seconds each. Furthermore, to ensure that the animations reflected the study findings and captured the authentic voices of young people, this phase involved developing composite narrative scripts for each of the 5 animations using verbatim quotes from multiple participants' interview data. All individuals who had attended for interview were recontacted about the animation project. Of the 17 participants who had been interviewed, 7 replied. The project was discussed with each and written consent was sought to use quotes from their interviews to create the scripts. All 7 consented. Interview transcripts for these individuals were examined and any relevant quotes pertaining to the 5 animation themes were extracted. In the small number of instances where relevant quotes were not contained in the interview data from these individuals, quotes from other participants from the study were extracted, edited, and modified for inclusion in the script. Linking phrases were also added by HC to optimize the flow and necessary messaging of each narrative script. Each script was written as a first-person account following a similar narrative arc based on social cognitive theory [1]. From a social cognitive perspective, positive health behaviors and behavior change are only possible when individuals understand health behaviors, have a belief in their capacity to control their health behaviors, and hold expectations about the possible outcomes of their actions [1]. For example, Meyerowitz and Chaiken [46] found that public health communications that enhanced individuals' sense of self-efficacy to take action in relation to their own health behaviors were most effective. Each script begins by describing the experience and how it feels, including the emotional, cognitive, physical, social, and relational aspects of the experience.
Following this, the script incorporates ambivalence on the part of the young person, capturing young people's struggles to accept their own suffering and reach out for support. This is in line with existing evidence of ambivalence reported in the literature [47,48]. Each animation then highlights different actions taken to respond to the theme of the script. These include talking to informal supports, engaging in hobbies or other activities, speaking to a trusted adult, and accessing formal counseling and mental health supports. These actions were all reported by participants in the study. They also complement existing evidence on the protective roles of formal and informal supports for young people [49,50], trusted adults in a young person's life [51], and involvement in meaningful hobbies and social activities [52][53][54]. Each animation ends with a message that combines hope and realism: specifically, that the action taken has enhanced the young person's sense of well-being and connectedness but that attending to emotional and mental health issues is an ongoing process and no single action is a panacea to the existential realities and challenges of the human experience [55]. Thus, in line with evidence on maximizing effectiveness in public health campaigns, the content of each animation incorporated information on mental health literacy and help-seeking, using first-person accounts with a recovery orientation [15,16,18,20]. Scripts (with associated subthemes) can be found in Multimedia Appendix 1. Once each script had been crafted, all scripts were sent to the research participants who had consented to the use of quotes from their interviews. Scripts were marked for each individual to clarify which quotes had been extracted from their interview data. This was to offer participants an opportunity to withdraw their consent or to remove any quotes if they had any concerns about their anonymity. All participants were satisfied with the scripts as written.

Creation

Animations were created in collaboration with the Animation program in the Film and Media Department of the IADT in Dublin. IADT is the only institute of art, design, and technology in Ireland that focuses specifically on the creative cultural and technological sectors. This animation project was integrated into the curriculum of third-year animation students in IADT as part of their applied professional practice learning. HC acted as executive producer and executive director for the animations, working with 5 student animation teams who crafted the scripts into the final animations. Students were overseen by DQ, the animation program lead. The collaborative process was designed to maximize the authenticity and potential impact of each animation, while also protecting the research participants' data. The collaboration combined HC's expertise in mental health and the animation students' expertise in conceptualizing, creating, and producing animations. It also enabled exploration and discussion of issues such as pacing, tone, and character rendering in each animation [32] (Figure 1). A decision was made by HC to ensure the design style was simple and minimalistic and that character rendering was not overly polished. This was to ensure that the style and rendering of the animations were as congruent as possible with the stories being told. Additionally, in line with evidence on how to maximize animation effectiveness [28,31], first-person narrative voice-overs of the scripts were layered onto the animations.
Three of the voice-overs used female actors (depression, loneliness, and feeling different), and 2 used male voice-over actors (anxiety and bullying). The process of cocreating the animations lasted for approximately 4 months. To promote accessibility, subtitled versions were developed for all animations. Subtitles are essential for individuals who are Deaf/deaf [56] and have also been found to enhance multimedia animation learning in people with attention deficit hyperactivity disorder [57]. Adding subtitles also ensured that the animations could be watched without audio, something of particular relevance to young people's use of mobile technology. Furthermore, although over 70,000 people in Ireland speak Irish on a daily basis [58], there is an absence of youth mental health information in the vernacular of young native Irish speakers. To address this deficit, Irish language versions were also developed with support from Conradh na Gaeilge [59], a social and cultural organization that promotes the Irish language in Ireland and worldwide. Once the animations had been completed, they were shown to the research participants who had consented to their verbatim quotes being used. This was to do a final check with those participants that they were satisfied with the animations and to affirm their consent for them to be hosted online. All participants reported being highly satisfied with the completed animations and consented to the launch phase of the project.

Promotion

A multimethod approach was adopted in relation to hosting and promoting the animations. First, new content was developed for SpunOut connected to each of the animations. This new content was embedded into the SpunOut website. All existing SpunOut content was also reviewed to identify relevant sections of the website where young people could access further information. Animations were hosted on a unique webpage [60]. Hyperlinks to this hosting page were included in all social media posts to facilitate young people who wished to view more of the animations. All 3 versions of the animations (nonsubtitled, subtitled, and Irish language versions) were also hosted on the SpunOut YouTube channel. A launch event took place on May 9, 2019, using the hashtag #YMHanimate. Following this, SpunOut engaged in a social media advertising promotion campaign on both Facebook (€300.00, approximately US $362.21) and Twitter (€300.00, approximately US $362.21). The target demographic for the promotion campaign was young people aged 16 to 25 years.

Collection and Analysis of Online Engagement and View Data

To determine online engagement, analytics data from SpunOut Twitter, Facebook, and YouTube accounts were collected and analyzed for the 12-month period following the launch event. A cut-off view length of 75% or above was used to determine viewing figures across all 3 platforms. One reason for choosing this view length was that analytics data on viewing figures can include views of as little as 2 seconds in length, rendering counts of "any views" unreliable. Additionally, the animation credits accounted for between 9% and 15% of the total view time of each animation. This meant that individuals did not have to watch the full duration of each animation to be exposed to the full content. Data on link clicks (where an individual clicked a related SpunOut content link after watching an animation on social media), costs per view, and viewer demographics could only be extracted from Facebook analytics; a minimal sketch of how such engagement metrics can be computed is given below.
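To make the metric definitions concrete, the following is an illustrative sketch of how qualified views and advertising cost per view or per reach can be computed from exported analytics counts. The function names and numbers are placeholders for illustration, not the study's actual pipeline or data.

```python
# Illustrative sketch: computing the engagement metrics described above from
# exported analytics counts. Names and example numbers are placeholders.

def qualified_views(view_lengths, video_length, cutoff=0.75):
    """Count views that reached at least `cutoff` of the video length."""
    return sum(1 for v in view_lengths if v >= cutoff * video_length)

def cost_per(metric_count, spend_eur):
    """Advertising spend divided by a count (qualified views or reach)."""
    return spend_eur / metric_count if metric_count else float("nan")

# Placeholder example: a 90-second animation and a 300 EUR campaign.
views = [12, 70, 88, 90, 45, 85]               # watched seconds per view event
n_qualified = qualified_views(views, video_length=90)
print(n_qualified)                              # views passing the >=75% cut-off
print(round(cost_per(n_qualified, 300.0), 3))   # EUR per qualified view
```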
Available gender variables were restricted to male, female, and unknown.

Views and Link Clicks

Over the period from May 9, 2019 to May 8, 2020, the animations were viewed by 15,848 young people across all social media platforms (based on our criterion of >75% view length). Facebook views accounted for almost two-thirds of all views. Feeling Different was the most viewed animation, followed by Being Bullied and Depression (see Table 1). A majority of views occurred over a period of approximately 2 months following the launch, during the social media ad campaign. There were low rates of link clicks on Facebook (ie, when an individual clicked a link to content hosted on the main SpunOut webpage), with just 240 recorded during the period of the social media ad campaign. Further details on impressions and viewing figures are available in Table 1.

Cost Per View and Reach

The cost per view of the animations on Facebook was €0.035 (approximately US $0.042) per young person. This was calculated based on the >75% view length count of 10,437. The cost per reach was €0.003 (approximately US $0.004) per young person, based on a total Facebook reach of 118,142.

Demographics

Available data from Facebook revealed that 55.1% (5750/10,437) of those who viewed the animations on Facebook were aged 18 to 24 years, 39.9% (4160/10,437) were aged 13 to 17 years, and the remaining 5.0% (527/10,437) were aged 25 years or over. There were higher rates of female viewers than male viewers across all animations and age ranges, with the exception of the Being Bullied animation, where higher rates of male views were observed across all age ranges (see Table 2).

General

This is the first knowledge transfer project we are aware of that has translated qualitative research findings on issues affecting young people's emotional and mental health into a series of bilingual public health online animations. In line with recent recommendations on addressing the global mental health crisis [13,21], the project has given voice to the lived experiences of young people who are struggling with their mental health using a collaborative knowledge transfer process. Our research revealed that experiences of anxiety, depression, loneliness, feeling different, and being bullied were common in the lives of young people during their midadolescent and early adult years. These findings were successfully translated into 5 public health animations through a unique collaboration between the research, Arts, and online sectors. All 5 animations were hosted and promoted by SpunOut [60]. In the 12 months following the launch of the animations, high engagement and viewing numbers were evident across SpunOut social media platforms for all 5 animations, with close to 16,000 views. A majority of engagement occurred during the limited period of the social media ad campaign. The animations exploring Feeling Different and Being Bullied had the highest numbers of views.

Comparison With Existing Research and Knowledge

Our qualitative research findings, highlighting young people's lived experiences of anxiety, depression, loneliness, feeling different, and bullying, are aligned with existing evidence. Epidemiological evidence in Ireland has found that, by the age of 24 years, over 1 in 4 young people in Ireland will have experienced clinical levels of anxiety (26.7%) and depression (28.5%) [61].
More recently, in their study of over 19,000 adolescents and young adults in Ireland, Dooley et al [24] found that 49% of adolescents and 58% of young adults were experiencing anxiety, and 40% of adolescents and 58% of young adults were experiencing depression. Anxiety and depression in youth populations have also been recently identified as a significant health issue internationally [62][63][64]. Our findings on loneliness and feeling different during the adolescent and early adult years complement existing evidence that youth is a key period of risk for loneliness and social disconnection [65][66][67]. Not only did this emerge as a qualitative theme in the research study, but the Feeling Different animation also had the highest number of views. Loneliness and a sense of feeling different are associated with individuals' needs to explore and find their own identities during adolescence and early adulthood [65]. However, other factors such as culture, environment, personality factors, and gender are also implicated in experiences of loneliness [67]. In the case of gender, females are more likely to report feelings of loneliness and social disconnection, supporting our use of a female voice-over for the Loneliness and Feeling Different animations. The female character representations used in these animations may also have been more congruent with the experiences of females, as reflected in the high prevalence of female views of these animations. Our finding that many participants in the study had experienced bullying, and the high view rate of the Being Bullied animation, is consistent with recent Irish data in which 39% of adolescents and 58% of young adults reported being the victim of bullying [24]. Rates among adolescents are similar to those reported internationally. In their meta-analysis on bullying, Modecki et al [68] found a mean prevalence rate of 35% for traditional bullying and 15% for cyberbullying across the 80 studies in their review. Additionally, our finding that higher numbers of males aged 18 years or older viewed the Being Bullied animation reflects gender trends in the national My World survey [24] of Irish youth, where rates of bullying in males increased over time. Specifically, fewer males than females reported being bullied during adolescence (male: 40%, female: 45%) but a higher proportion reported being bullied during their young adult years (male: 61%, female: 57%). The finding that the animations were viewed almost 16,000 times following their launch demonstrates the potential reach that animations can have within the youth mental health arena. To achieve this, only a low-budget social media ad campaign was required, and a majority of views occurred in response to this campaign over the campaign period of approximately 2 months. This highlights the value of animations as a medium for knowledge transfer [28,29,31,32] and the importance of budgeting for paid social media promotion to maximize the reach of multimedia knowledge transfer outputs. When proactively seeking mental health information, evidence suggests that young people prefer seeking information from information-based rather than social media websites [23,27,69]. However, for young people who are not proactively seeking mental health information, our high viewing figures during the promotion campaign indicate that social media campaigns may be a particularly effective method to engage non-help-seeking young people with public mental health information.
Key to this is following social media trends in order to target those web-based and social media platforms that young people are already using [70]. We anticipated that the animations would enhance mental health literacy in young people, promote disclosure of mental health difficulties, and ensure young people understood how to access informal and formal mental health support. This was based on existing evidence that has shown that public awareness mental health campaigns are effective in achieving both attitude and behavior changes [14,15,18]. For example, an evaluation of the Time-to-Change public mental health awareness campaign in the United Kingdom found that individuals who were simply aware of the campaign reported increased comfort in disclosing mental health difficulties to family and friends and were more likely to seek professional help [18]. Similarly, in their review, Kauer et al [69] found that an increase in mental health literacy was a facilitator of help-seeking among young people accessing information online. A review by Pretorius et al [27] also found that young people used online information and resources to facilitate personal coping responses or as a means to promote informal support-seeking behaviors, and that the process of help-seeking online could act as a gateway to further help-seeking by connecting young people with information and additional supports. Based on this existing evidence, it is reasonable to hypothesize that, for a proportion of young people, viewing and being exposed to the messaging within each animation will have positively impacted their attitudes toward mental health difficulties, their mental health literacy (for those with low levels of mental health literacy), their willingness to share any current or future mental health concerns, and their willingness to reach out for information and support if they need to. A final and important finding from this project was the low-cost, high-yield relationship between what was spent on social media promotion and the level of user engagement and views of the animations. In their systematic review of the use of social networking sites for public health practice and research, Capurro and colleagues [70] highlight that social media and social networking sites offer researchers fast, easy, and low-cost access to a range of populations, making them an ideal platform for conducting research. Our cost-per-view findings, at a cost of just €0.003 (approximately US $0.004) to reach a young person with one of the videos and €0.035 (approximately US $0.042) to have a young person in our target demographic view a video to completion, highlight the potential that social media promotion can offer in supporting impactful knowledge transfer activities targeting known populations at a very low cost. Moreover, our use of this method addresses a key factor that has been identified in maximizing the potential for public health messaging to change behavior: ensuring that messages are delivered to their intended audience with sufficient reach [16]. Additionally, it facilitated access to demographic and engagement data, an oftentimes underused data resource in the field of health-related research [70].

Limitations

A key limitation of this knowledge transfer project is that, while we were able to identify our target demographic for our Facebook and Twitter promotion campaigns, such promotion activity also relies on algorithms and models that are controlled by each social media or networking site.
Thus, our demographic findings relating to the age and gender of user engagement may reflect aspects of the advertising algorithms used rather than solely reflecting gender trends related to the animations. Additionally, the idea to develop the animations was a response to the findings emerging from the research. This meant that the focus of the animation project was on ensuring the animations were embedded in, and accessible to young people as part of, an existing and reputable youth mental health website, rather than on collecting data on young people's responses to the animations or their impact on attitudes, health, or help-seeking. This limited our analysis to the reach of the animations. We were therefore unable to examine the impact of the animations on those 15,000 or more young people who watched them or to interpret the low link click rate in our analytics data. However, in relation to the latter, it is important to note that link-click data were only collected during the short social media promotion period. The limitations of the reach data in this study highlight the importance of integrating research to evaluate impact and effectiveness when designing public health campaigns such as this. Future research is needed to examine both the impact of outputs such as our animations and the effectiveness of targeting young people on social media platforms.

Conclusions

In line with recommendations for tackling the global mental health crisis [13,21], this knowledge transfer project provides an example of how the mental health research community can engage in meaningful knowledge transfer activities targeting young people on their native social media platforms. By adopting this type of knowledge transfer activity, researchers have the potential to use and translate their findings to make a tangible difference to both individual lives and to overall societal health, beyond what is possible within the confines of traditional dissemination arenas and institutions.
2021-02-11T06:18:18.179Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "a8bdf12e692c0d33c1a54f0119c03702a9aa4a85", "oa_license": "CCBY", "oa_url": "https://jmir.org/api/download?alt_name=jmir_v23i2e21338_app1.pdf&filename=34a364d361567d89a354295aa112adfe.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "51418a69a93ebc9ff2ee299fc3c2a5f626461689", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
119611243
pes2o/s2orc
v3-fos-license
On the Beauville form of the known irreducible symplectic varieties

We study the global geometry of the ten dimensional O'Grady irreducible symplectic variety. We determine its second Betti number, its Beauville form and its Fujiki constant.

Introduction

Irreducible symplectic varieties are simply connected compact Kähler manifolds with a unique, up to $\mathbb{C}^*$, global holomorphic two form and such that this two form is nondegenerate at each point. By Bogomolov's decomposition theorem they are, together with complex tori and Calabi-Yau manifolds, the building blocks of compact Kähler manifolds with torsion first Chern class. In this paper we deal with the remaining case: we determine the second Betti number of the ten dimensional O'Grady example $M$ and then compute $B_M$ and $c_M$. In the following table we give the complete list of the Beauville forms and the Fujiki constants of all known irreducible symplectic varieties.

variety | dimension | $b_2$ | Fujiki constant $c$ | Beauville form $B$
$K3^{[n]}$ | $2n$ | $23$ | $\frac{(2n)!}{n!\,2^n}$ | $H^{\oplus 3}\oplus^{\perp}(-E_8)^{\oplus 2}\oplus^{\perp}(-2(n-1))$
generalized Kummer | $2n$ | $7$ | $\frac{(2n)!}{n!\,2^n}(n+1)$ | $H^{\oplus 3}\oplus^{\perp}(-2(n+1))$
$M_6$ | $6$ | $8$ | $60$ | $H^{\oplus 3}\oplus^{\perp}(-2)^{\oplus 2}$
$M$ | $10$ | $24$ | $945$ | $H^{\oplus 3}\oplus^{\perp}(-E_8)^{\oplus 2}\oplus^{\perp}\Lambda$

In this table the lattice $H$ is the standard hyperbolic plane, the lattice $-E_8$ is the unique negative definite even unimodular lattice of rank eight and $(i)$ is the rank 1 lattice generated by an element whose square is $i$. Finally $\Lambda$ is a rank 2 lattice whose associated matrix in a suitable basis is
$$\begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}.$$

Acknowledgements. I would like to thank Kieran O'Grady for useful conversations and Donatella Iacono and Francesco Esposito for their helpful support.

1. $b_2(M) = 24$

In this paper $X$ is a K3 surface such that $\mathrm{Pic}(X) = \mathbb{Z}\langle H\rangle$ and $H^2 = 2$. We denote by $M_{(a,b,c)}$ the Simpson moduli space of semistable sheaves on $X$ with Mukai vector $(a,\, b\,c_1(H),\, c\,\eta) \in H^0(X,\mathbb{Z})\oplus H^2(X,\mathbb{Z})\oplus H^4(X,\mathbb{Z})$, where $\eta$ is the fundamental form in $H^4(X,\mathbb{Z})$. This theorem is a consequence of the following three propositions. The first proposition compares open subsets of $M_{(0,2,2)}$ and $M_{(0,2,1)}$. In the remaining part of this section, coefficients of singular cohomology groups are always rational and, for simplicity, we often omit them in the notation.

Proof of Proposition 1.0.2. Denote by $U_0 \subset U$ the open subset parametrizing curves whose singular loci are empty or consist of a unique nodal point. Since the complement of $U_0$ in $U$ has codimension two and the fibers of $\Psi$ and $\Phi$ are equidimensional (see [Ma 00]), the same property holds for the complements of $\Psi^{-1}(U_0)$ and $\Phi^{-1}(U_0)$ in $\Psi^{-1}(U)$ and $\Phi^{-1}(U)$ respectively. Hence Proposition 1.0.2 follows from the corresponding statement for the restrictions $\Psi^0 : \Psi^{-1}(U_0) \to U_0$ and $\Phi^0 : \Phi^{-1}(U_0) \to U_0$. Since the abutment of the Leray spectral sequence is the cohomology of the domain, Proposition 1.0.2 follows if we prove: a) that the low degree $E_2$ terms of the two Leray spectral sequences are isomorphic; b) that these terms of both the spectral sequences survive to the $E_\infty$ pages. Let's prove a). Let $U_s \subset |2H|$ be the locus parametrizing smooth curves and let $i : U_s \to U_0$ be the open inclusion. Statement a) is an obvious consequence of:
1. For $q \le 2$ the sheaves $i_*R^q\Psi^0_*(\mathbb{Q})$ and $i_*R^q\Phi^0_*(\mathbb{Q})$ are isomorphic.
2. For $q \le 2$ the natural attachment maps $R^q\Psi^0_*(\mathbb{Q}) \to i_*i^*R^q\Psi^0_*(\mathbb{Q})$ and $R^q\Phi^0_*(\mathbb{Q}) \to i_*i^*R^q\Phi^0_*(\mathbb{Q})$ are isomorphisms.
In order to prove 1, we denote by $\Psi^s : \Psi^{-1}(U_s) \to U_s$ and $\Phi^s : \Phi^{-1}(U_s) \to U_s$ the restrictions of $\Psi$ and $\Phi$: we are reduced to showing that there exists an isomorphism between $R^q\Psi^s_*(\mathbb{Q})$ and $R^q\Phi^s_*(\mathbb{Q})$. Let $l \subset |2H| \simeq \mathbb{P}^5$ be a general line; by the Zariski theorem the inclusion induces a surjection on fundamental groups $\pi_1(l \cap U_s) \to \pi_1(U_s)$. Since $\Psi^s$ and $\Phi^s$ are smooth, $R^q\Psi^s_*(\mathbb{Q})$ and $R^q\Phi^s_*(\mathbb{Q})$ are local systems, hence they are isomorphic if and only if their restrictions $R^q\Psi^s_*(\mathbb{Q})_{|l\cap U_s}$ and $R^q\Phi^s_*(\mathbb{Q})_{|l\cap U_s}$ to $l \cap U_s$ are isomorphic. Hence, denoting by $\Psi^l : \Psi^{-1}(l \cap U_s) \to l \cap U_s$ and $\Phi^l : \Phi^{-1}(l \cap U_s) \to l \cap U_s$ the restrictions of $\Psi$ and $\Phi$, we want an isomorphism between $R^q\Psi^l_*(\mathbb{Q})$ and $R^q\Phi^l_*(\mathbb{Q})$.
Since $l \cap U_s$ parametrizes smooth curves, the family $\Psi^l : \Psi^{-1}(l \cap U_s) \to l \cap U_s$ is isomorphic to the degree 6 relative Picard group $\mathrm{Pic}^6(l \cap U_s) \to l \cap U_s$ of the family of curves parametrized by $l \cap U_s$, and analogously $\Phi^l$ can be identified with the degree 5 relative Picard group $\mathrm{Pic}^5(l \cap U_s) \to l \cap U_s$ of the same family of curves. The wanted isomorphism exists since $\mathrm{Pic}^6(l \cap U_s) \simeq \mathrm{Pic}^5(l \cap U_s)$ over $l \cap U_s$. This follows since the family of curves parametrized by $l \cap U_s$ admits sections: any point in the base locus of $l$ gives a section.

In order to prove 2 we need a general lemma.

Lemma 1.0.6. Let $g : \mathcal{X} \to \Delta$ be a proper map with irreducible fibers, from a complex smooth surface $\mathcal{X}$ onto the open unit disk $\Delta \subset \mathbb{C}$. Suppose that $g$ has a unique critical point $p$, suppose that $p$ is non degenerate and $g(p) = 0$. Let $\hat g : \overline{\mathrm{Pic}}^i(\mathcal{X}) \to \Delta$ be the compactification, by torsion free sheaves, of the degree $i$ relative Picard group of $g$.

Proof. We only have to check the surjectivity of the map $\Gamma(\alpha) : H^q(\hat g^{-1}(0)) \to H^q(\hat g^{-1}(\tfrac12))^{\pi_1(\Delta^*)}$, whose target is the space of the $q$-cocycles of the general fiber of $\hat g$ that are invariant under the monodromy action of $\pi_1(\Delta^*)$. Finally, using these identifications, the map $\Gamma(\alpha)$ is just the map induced in cohomology by the inclusion $\hat g^{-1}(\tfrac12) \hookrightarrow \overline{\mathrm{Pic}}^i(\mathcal{X})$. The central fiber of $\hat g$ is a normal crossings divisor (see [Se 00]) and $\hat g$ is a semistable degeneration, hence the surjectivity of the map $\Gamma(\alpha)$ is a consequence of the Clemens local invariant cycle theorem (see [Cl 77]). Since by retraction $H^q(\hat g^{-1}(0)) \simeq H^q(\overline{\mathrm{Pic}}^i(\mathcal{X}))$, it remains to compare the dimensions
$$\dim H^q(\hat g^{-1}(0)) \le \dim H^q(\hat g^{-1}(\tfrac12))^{\pi_1(\Delta^*)}. \qquad (1)$$
The second term of this inequality can be computed by the Picard-Lefschetz formula. In fact, denoting by $\delta$ the vanishing cycle, the Picard-Lefschetz formula says that the generator $\gamma$ of $\pi_1(\Delta^*)$ acts on $H^1(g^{-1}(\tfrac12))$ sending $\beta$ to $A_\gamma(\beta) = \beta + \langle \delta \cap \beta\rangle\,\delta$. Since the action of $\gamma$ on $H^q(\hat g^{-1}(\tfrac12))$ is given by $\wedge^q A_\gamma^{\vee}$ and the matrix of $A_\gamma$ in a suitable basis is given by
$$A_\gamma = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \oplus \mathrm{Id},$$
the dimension of the invariant subspace can be computed directly. The first term of (1) is determined by using the known description of the compactified Picard group of a curve with a node. The variety $\hat g^{-1}(0)$ is obtained starting from a $\mathbb{P}^1$-bundle over the Jacobian $J$ of the normalization of $g^{-1}(0)$. This $\mathbb{P}^1$-bundle has two preferred sections and $\hat g^{-1}(0)$ is obtained identifying the two sections by a translation on $J$. It follows that $\hat g^{-1}(0)$ is homeomorphic to a topologically locally trivial bundle over $J$ whose fiber $F$ is obtained from a sphere by identifying two points. By the Leray spectral sequence we deduce the dimensions of $H^q(\hat g^{-1}(0))$ for $q \le 2$.

Since locally over small neighborhoods of points in $U_0 \setminus U_s$ the families $\Psi$ and $\Phi$ are homeomorphic to families of the form $\hat g \times \mathrm{id} : \overline{\mathrm{Pic}}^i(\mathcal{X}) \times \Delta^4 \to \Delta^5$, where $\hat g : \overline{\mathrm{Pic}}^i(\mathcal{X}) \to \Delta$ is as in the previous lemma and $\mathrm{id} : \Delta^4 \to \Delta^4$ is the identity, statement 2 follows from Lemma 1.0.6.

It remains to prove statement b). Its proof is a slight modification of the proof of the degeneration of the Leray spectral sequence of a smooth projective fibration. Since the same proof works for both $\Psi^0$ and $\Phi^0$, we deal explicitly only with the first case. In order to define $V$ we denote by $R_0 \subset R$ the locus parametrizing curves of the form $C = C_1 \cup C_2$, where $C_1 \neq C_2$, the singular locus of $C_i$ consists of at most a nodal point and $C_1 \cap C_2$ is included in the smooth locus of both $C_1$ and $C_2$. A concrete description of the inclusion $R_0 \subset R$ is given by means of the map $f : X \to \mathbb{P}^2$ (see Remark 1.0.5).
This map identifies $R_0$ with the set of pairs of distinct lines in $\mathbb{P}^2$ such that the intersection of each line with the branch locus $S$ of $f$ is either reduced or contains at most a unique double point and, in this second case, the support of this double point does not belong to the intersection of the two lines. The subvariety $V \subset \Phi^{-1}(R_0)$ is defined as the locus parametrizing sheaves of the form $F = i_*L$, where $i : C \to X$ is the inclusion of a curve of $R_0$ and $L$ is a line bundle on $C$. Before giving a global description of $V$, we study the locus of $M_{(0,2,1)}$ parametrizing sheaves supported on a fixed curve in $R_0$.

Lemma 1.0.7. Let $C = C_1 \cup C_2$ be a curve in $R_0$. Denote by $i : C \to X$ the inclusion of $C$ and by $i_1 : C_1 \to X$ and $i_2 : C_2 \to X$ the inclusions of its components. Let $F$ be a torsion free sheaf on $C$ and suppose $F = i_*G \in \Phi^{-1}(C)$. Then:
1) Up to exchange of $C_1$ and $C_2$, the sheaf $F$ fits in an exact sequence of the form
$$0 \to i_{1*}L_1 \to F \to i_{2*}L_2 \to 0, \qquad (2)$$
where $L_1$ and $L_2$ are rank 1 torsion free sheaves whose degrees are one and two respectively.
2) For any non trivial extension of the form (2) the middle term $F$ is a stable sheaf.
3) Fixing $L_1$ and $L_2$, two non trivial extensions of the form (2) have isomorphic middle terms if and only if they differ by a scalar multiplication.
4) Fixing $L_1$ and $L_2$, for any point $p$ in $C_1 \cap C_2$ there exists a unique, up to $\mathbb{C}^*$, non trivial extension of the form (2) such that the restriction $G$ of $F$ to $C$ is not locally free at $p$.

Proof. 1) Let $G_1$ and $G_2$ be the torsion free parts of the restrictions of $G$ to $C_1$ and $C_2$; then $F$ fits in an exact sequence of the form
$$0 \to F \to i_{1*}G_1 \oplus i_{2*}G_2 \to Q \to 0,$$
where $Q$ is a quotient of the schematic intersection of $C_1$ and $C_2$. Stability and $\mathrm{ch}_2(F) = 1$ imply that either $\deg(G_1) = \deg(G_2) = 2$ and $\mathrm{length}(Q) = 1$, or $\{\deg(G_1), \deg(G_2)\} = \{2, 3\}$ and $\mathrm{length}(Q) = 2$. Supposing $\deg(G_2) = 2$ and setting $G_2 = L_2$ and $L_1 := \mathrm{Ker}(F \to i_{2*}L_2)$ we get the sequence (2). 2) If $F$ were unstable, there would be a sheaf of the form $i_{j*}M$ with $\deg(M) = 2$ injecting into $F$. If $j = 1$ this would imply that $M$ is a subsheaf of $L_1$: absurd. If $j = 2$ then $M = L_2$ and the sequence splits. 3) By 2) $\mathrm{End}(F) = \mathbb{C}$.

It remains to prove that $V$ is an open, dense, smooth, irreducible subset of $\Phi^{-1}(R)$ and $H^1(V, \mathbb{Q}) = 0$. Openness is obvious. By the previous lemma $V$ is dense in $\Phi^{-1}(R_0)$ and, since $R_0$ is dense in $R$ and the fibers of $\Phi$ are equidimensional (see [Ma 00]), the open subvariety $V$ is dense in the divisor $\Phi^{-1}(R)$. Smoothness holds because $R_0$ is smooth and the differential of $\Phi$ at any point $p = F$ of $V$ is surjective: in fact, since the restriction of $F$ to its support $C$ is a line bundle, such a differential is identified with the natural map $d : \mathrm{Ext}^1(F, F) \to H^0(N_{C|X})$, and this map is surjective because its cokernel is always included in $H^2(\mathcal{H}om(F, F))$, which is zero since $F$ is supported on a curve.

Any point $p \in M_{(0,1,0)} \times M_{(0,1,1)}$ has an open neighborhood $U_p$ in the classical topology of the form $U_1 \times U_2$ such that each $X \times U_i$ is endowed with a tautological family $\mathcal{F}_i$. Let $q_i : X \times U_1 \times U_2 \to X \times U_i$ and $q : X \times U_1 \times U_2 \to U_1 \times U_2$ be the projections. For any $U_p \subset g^{-1}(T)$ the sheaf $\mathcal{E}xt^1_q(q_2^*\mathcal{F}_2, q_1^*\mathcal{F}_1)$ is a rank 2 vector bundle. By 3) of Lemma 1.0.7 the fibers of the associated projective bundle $b_{U_p} : \mathbb{P}(\mathcal{E}xt^1_q(q_2^*\mathcal{F}_2, q_1^*\mathcal{F}_1)) \to U_p$ parametrize isomorphism classes of sheaves $F$ fitting in a non trivial extension of the form (2): it follows that the bundles $b_{U_p}$ can be glued to form a global $\mathbb{P}^1$-bundle $b : P \to N$.
By 1) and 2) of Lemma 1.0.7 the natural modular map $\varphi : P \to M_{(0,2,1)}$ surjects onto $\Phi^{-1}(R_0)$: hence the open subset $V \subset \Phi^{-1}(R_0)$ is irreducible. Let $N_0 \subset M_{(0,1,0)} \times M_{(0,1,1)}$ be the open subset parametrizing pairs of sheaves whose restrictions to their supports are line bundles, set $P_0 := b^{-1}(N_0)$ and denote by $b_0 : P_0 \to N_0$ and by $\varphi_0 : P_0 \to M_{(0,2,1)}$ the restrictions of $b$ and $\varphi$. By 4) of Lemma 1.0.7 the locus $W \subset P_0$, parametrizing extensions of the form (2) whose middle terms have locally free restrictions to their supports, is the complement of a two-section $D$. The map $\varphi_0$ induces a bijection, hence an isomorphism, between the smooth varieties $W$ and $V$. In fact, by 1) of Lemma 1.0.7, if $F = i_*G \in V$, then $F$ is the middle term in an exact sequence of the form (2): hence $\varphi_0(W) = V$. Moreover, since $G$ is a line bundle of degree 5, its restrictions to the components of $C$ have degrees 2 and 3: hence the sheaf $i_{2*}L_2$ in the sequence (2) is the unique quotient of $F$ belonging to $M_{(0,1,1)}$ and $i_{1*}L_1 \in M_{(0,1,0)}$ is the associated kernel: it follows that $\varphi_0^{-1}(F) \subset b_0^{-1}(i_{1*}L_1, i_{2*}L_2)$. Since, by 3) of Lemma 1.0.7, the map $\varphi_0$ is injective on the fibers of $b_0$, the restriction of $\varphi_0$ to $W$ is injective too.

We now show that $H^1(W) = H^1(V)$ is zero. Let $D_s$ be the smooth locus of $D$. The couple $(W \cup D_s, W)$ induces the long exact sequence
$$\cdots \to H^1(W \cup D_s) \to H^1(W) \xrightarrow{\ a\ } H^2(W \cup D_s, W) \to \cdots$$
The vector space $H^1(W \cup D_s)$ is zero. In fact the complement of $W \cup D_s$ in $P_0$ has codimension two, hence $H^1(W \cup D_s) = H^1(P_0)$, and since $P_0$ is a $\mathbb{P}^1$-bundle over $N_0$ its first cohomology group is trivial if $H^1(N_0) = 0$. The last equality holds since the complement of $N_0$ in the simply connected manifold $M_{(0,1,0)} \times M_{(0,1,1)}$ has codimension two. Indeed it is the union of $g^{-1}(|H|^2 \setminus T)$ and the locus $Y \subset M_{(0,1,0)} \times M_{(0,1,1)}$ parametrizing pairs of sheaves of the form $(i_{1*}L_1, i_{2*}L_2)$ where either $L_1$ or $L_2$ is not a line bundle. The subvariety $g^{-1}(|H|^2 \setminus T) \subset M_{(0,1,0)} \times M_{(0,1,1)}$ has codimension two since $|H|^2 \setminus T$ has codimension two in $|H|^2$ and by [Ma 00] the fibers of $g$ are equidimensional. Using the previous exact sequence, it remains to show the injectivity of $a$. The image of $a$ is the Chern class of the line bundle associated with the divisor $D_s$ and it is not zero since $D$ has degree 2 on the fibers of $b_0$. Since by excision and the Thom isomorphism the dimension of $H^2(W \cup D_s, W)$ is the number of connected components of $D_s$, we need to prove that $D_s$ is connected or, equivalently, that $D$ is irreducible. Denote by $Z \subset X \times T \subset X \times |H|^2$ the incidence subvariety parametrizing triplets of the form $(p, C_1, C_2)$ where $p \in C_1 \cap C_2$. There exists a regular morphism $m : D \to Z$ given by sending a non trivial extension of the form (2), where $F = i_*G$ and $G$ is not locally free, to the triplet $(p, C_1, C_2)$ where $p \in C_1 \cap C_2$ is the unique point at which $G$ is not locally free. By 4) of Lemma 1.0.7, the fiber of $m$ over $(p, C_1, C_2)$ is isomorphic to $\mathrm{Pic}^1(C_1) \times \mathrm{Pic}^2(C_2)$, hence it is irreducible and of constant dimension 4: therefore the irreducibility of $D$ follows from that of $Z$. Finally, $Z$ is irreducible since the projection $p : Z \to T$ is a double covering, obtained from the double covering $f : X \to \mathbb{P}^2$ by base change with the map $q : T \to \mathbb{P}^2$ sending $(C_1, C_2)$ to $f(C_1 \cap C_2)$: since $X$ and $T$ are irreducible, $Z$ is irreducible as well.
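Recall, for the reader's convenience, the standard Fujiki relation tying together the two invariants computed in this paper: for any irreducible symplectic variety $X$ of dimension $2n$ and any $\alpha \in H^2(X, \mathbb{Q})$,
$$\int_X \alpha^{2n} = c_X \, B_X(\alpha, \alpha)^n,$$
so that, once $b_2(M)$ and the Beauville form $B_M$ are determined, the Fujiki constant $c_M$ is fixed by a single top intersection number on $M$.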
Proof of Proposition 1.0.4. Since the fibers of $\Psi$ are equidimensional and the exceptional divisor of O'Grady's desingularization is irreducible, we need to prove the irreducibility of the stable locus of $\Phi_{(0,2,2)}^{-1}(R_{00})$, where $R_{00}$ is an open dense subset of $R$. It is irreducible since $R$ is irreducible and, for general $C = C_1 \cup C_2 \in R$, the stable locus of $\Phi_{(0,2,2)}^{-1}(C)$ is a $\mathbb{C}^*$-bundle over $\mathrm{Pic}^3(C_1) \times \mathrm{Pic}^3(C_2)$ (this can be proved as in the case $C \in R(1) \cup R(2)$ in Proposition 2.1.4 of [Ra 04]).

We now prove that $\mu(H^2(X, \mathbb{Z})) \oplus \mathbb{Z}c_1(\widetilde{\Sigma}) \oplus \mathbb{Z}c_1(\widetilde{B})$ is saturated by evaluating a basis on suitable homology classes. By Proposition 3.0.5 of [OG 99], there exists an open dense subset $B_0 \subset B$ which is a $\mathbb{P}^1$-bundle over the smooth locus of the symmetric product $X^{(4)}$, and by the proof of Lemma 3.0.13 of [OG 99] the intersection $\Sigma \cap B_0$ is a three-section of this $\mathbb{P}^1$-bundle. Since by [LS 05] the smooth variety $M$ is the blow up of $M_{(2,0,-2)}$ along $\Sigma$, the strict transform $\widetilde{B}$ has an open subset which is a $\mathbb{P}^1$-bundle and, denoting by $\gamma$ its fiber, we have $\gamma \cdot \widetilde{\Sigma} = 3$. On the other hand $\widetilde{\Sigma}$ has an open dense subset $\widetilde{\Sigma}_0$ which is a $\mathbb{P}^1$-bundle over the smooth locus of $\Sigma$ and, denoting by $\delta$ a fiber, we get $\delta \cdot \widetilde{B} = 1$. Finally, since $M$ has trivial canonical bundle, we also get $\gamma \cdot \widetilde{B} = -2 = \delta \cdot \widetilde{\Sigma}$. In the case of $M$ we have the following theorem.
2014-10-01T00:00:00.000Z
2006-06-16T00:00:00.000
{ "year": 2006, "sha1": "189e2214b82252920d7b2e274000ab6fa77b3141", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math/0606409", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c5918f6bd8bcd9c8c83136ef6b41a6062676636f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
13912824
pes2o/s2orc
v3-fos-license
New Predictions for Multiquark Hadron Masses

The recently reported charmed-strange resonance at 2.32 GeV/c² suggests a possible multiquark state. Three types of multiquark bound states are reviewed. A previous model-independent variational approach considers a tetraquark with two heavy antiquarks and two light quarks as a heavy antidiquark with the color field of a quark bound to the two light quarks with a wave function like that of a heavy baryon. Results indicate that a charmed-strange tetraquark $\bar c \bar s u d$ or a bottom-strange tetraquark $\bar b \bar s u d$ with this "diquark-heavy-baryon" wave function is not bound, in contrast to "molecular-type" $D-K$ and $B-K$ wave functions. However, a charmed-bottom tetraquark $\bar c \bar b u d$ might be bound, with a very narrow weak decay mode. A "molecular-type" $D-B$ state can have an interesting $B_c \pi$ decay with a high energy pion.

Three different mass scales are relevant to the description of multiquark hadrons: the nuclear-molecular scale, the hyperfine or color-magnetic scale, and the diquark scale. The nuclear scale is characterized by the deuteron, a bound state of two color singlet hadrons with a reduced mass of 500 MeV, a binding energy of several MeV and a radius of $\approx M_\pi^{-1}$. The underlying quark structure of the hadrons plays no role. The kinetic energy of the state confined to this radius is
$$T_d \approx M_\pi^2/2\mu_d \approx 20 \text{ MeV}. \qquad (1.1)$$
No two-meson bound state containing a pion has been found. The reduced mass of any such state is too small for it to be bound in a radius of $\approx M_\pi^{-1}$ by a similar interaction; its kinetic energy would be too high,
$$T_\pi \approx M_\pi^2/M_\pi \approx 140 \text{ MeV}. \qquad (1.2)$$
The two-kaon system with a reduced mass of 250 MeV seems to be on the borderline. Suggestions that the $f_0$ and $a_0$ mesons are deuteron-like $K\bar K$ states or molecules are interesting, but controversial. There is no unambiguous signature because $K\bar K$ couples to $\pi\pi$ and $\eta\pi$ and both states break up strongly. The $D-K$ system, with a kinetic energy $T_{DK} \approx M_\pi^2/2\mu_{DK} \approx 25$ MeV, is therefore an attractive candidate for such a state [3][4][5][6][7][8]. The transition from the $I = 0$ $DK$ state to $D_s \pi$ is isospin forbidden, thereby suggesting a narrow width. The color-magnetic scale is characterized by a mass splitting of the order of 400 MeV; e.g. the $K^* - K$ splitting. Recoupling the colors and spins of a system of two color-singlet hadrons has been shown to produce a gain in color-magnetic energy [2][3][4]. However, whether this gain in potential energy is sufficient to overcome the added kinetic energy required for a bound state is not clear without a specific model. The diquark scale arises when two quarks are sufficiently heavy to be bound in the well of the Coulomb-like short-range potential required by QCD. A heavy antidiquark in a triplet of color SU(3) has the color field of a quark and can be bound to two light quarks with a wave function like that of a heavy baryon. Since the binding energy of two particles in a Coulomb field is proportional to their reduced mass and all other interactions are mass independent, this diquark binding must become dominant at sufficiently high quark masses.

II. THE DIQUARK-HEAVY-BARYON MODEL FOR TETRAQUARKS

We now examine the diquark-heavy-baryon model for states containing heavy quarks. Our "model-independent" approach assumes that nature has already solved the problem of a heavy color triplet interacting with two light quarks and given us the answers; namely the experimental masses of the $\Lambda$, $\Lambda_c$ and $\Lambda_b$.
These answers provided by nature can now be used without understanding the details of the underlying theoretical QCD model. This approach was first used by Sakharov and Zeldovich [9] and has been successfully extended to heavy flavors [10][11][12]. The calculated mass can be interpreted as obtained from a variational principle with a particular form of trial wave function [5]. This model neglects the color-magnetic interactions of the heavy quarks, important for the charmed-strange four-quark system at the color-magnetic scale [2][3][4], and is expected to overestimate the mass of a $\bar c \bar s ud$ state. Thus obtaining a model mass value above the relevant threshold shows only that this type of diquark-heavy-baryon wave function does not produce a bound state; i.e., that the heavy quark masses are not at the diquark scale. The previous results [3,4] at the color-magnetic or nuclear-molecular scale should be better. However, the $bc$ system may already be sufficiently massive to lead to stable diquarks, and the model predictions for the $\bar c \bar b ud$ state may suggest binding.

We first apply this model to a $\bar c \bar s ud$ state with a light $ud$ pair seeing the color field of the $\bar c \bar s$ antidiquark like the field of a heavy quark in a heavy baryon. The $\bar c \bar s$ antidiquark differs from the $c \bar s$ in the $D_s$ by having a $\bar Q \bar Q$ potential which QCD color algebra requires [5] to have half the strength of the $Q\bar Q$ potential in the $D_s$. The tetraquark mass is estimated by using the known experimental masses of the heavy baryons and heavy mesons with the same flavors and introducing corrections for the difference between the heavy meson and the heavy diquark, where $H_{ud}$ and $H_{udQ}$ respectively denote the Hamiltonians describing the internal motions of the $ud$ pair and of the three-body system of the $ud$ pair and the antidiquark which behaves like a heavy quark, and $T_{cs}$ and $V_{cs}$ denote the kinetic and potential energy operators for the internal motion of a $cs$ diquark, which is the same as that for a $\bar c \bar s$ antidiquark. The expectation values are taken with the "exact" wave function for the model, with the subscript $cs$ indicating that it is taken with the wave function of a diquark and not of the $D_s$. The kinetic energy operator $T_{cs}$ is the same for the $cs$ diquark and the $D_s$, but the potential energy operators $V_{cs}$ and $V_{\bar c s} = 2V_{cs}$ differ by the QCD factor 2. This difference between $cs$ diquark and $D_s$ wave functions is crucial to our analysis. The quark masses $m_q$ are effective constituent quark masses and not current quark masses. We follow the approach begun by Sakharov and Zeldovich [9],
$$\langle m_s - m_u \rangle_{\mathrm{Bar}} = M_\Lambda - M_N \approx 177 \text{ MeV}, \qquad \langle m_s - m_u \rangle_{\mathrm{Mes}} = \frac{3M_{K^*} + M_K}{4} - \frac{3M_\rho + M_\pi}{4} \approx 178 \text{ MeV},$$
where the "Bar" and "Mes" subscripts denote values obtained from baryons and mesons, respectively. Similar results have since been found for hadrons containing heavy quarks, along with many more relations using these same effective quark mass values for baryon magnetic moments and hadron hyperfine splittings [10][11][12]. We therefore assume that the same effective quark masses can be used for the states considered here. To evaluate $\delta H_{cs}$ we use the Feynman-Hellmann theorem and the virial theorem to obtain the required expectation values. The resulting expression can be simplified by using the Quigg-Rosner logarithmic potential [13] with its parameter $V_o$ determined by fitting the charmonium spectrum. In the limit of very high heavy quark masses this model must give a stable bound state. The $cs$ diquark is evidently not heavy enough to produce a bound diquark-heavy-baryon state. A similar calculation for $\bar b \bar s ud$ indicates that the $bs$ diquark is also not heavy enough. In any case this is a striking signal which cannot be confused with a $q\bar q$ state.
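The force of this remark can be seen from simple two-body kinematics; the numbers below are an illustrative estimate with assumed masses ($m_D \approx 1.87$ GeV, $m_B \approx 5.28$ GeV, $M_{B_c} \approx 6.3$ GeV), not figures from the original analysis. A molecular $D-B$ state lying near threshold, with mass $M \approx m_D + m_B \approx 7.15$ GeV, decaying to $B_c \pi$ emits a pion with momentum
$$p_\pi = \frac{1}{2M}\sqrt{\left[M^2 - (M_{B_c} + M_\pi)^2\right]\left[M^2 - (M_{B_c} - M_\pi)^2\right]} \approx 0.8 \ \mathrm{GeV}/c,$$
far harder than pions from typical strong decays, which is what makes the $B_c\pi$ signature so distinctive.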
Experiments can look for a resonance with a pion accompanying any of the $B_c$ states. It is a pleasure to thank E. L. Berger, T. Barnes, F. E. Close, M. Karliner, J. Napolitano and V. Papadimitriou for helpful discussions.
2014-10-01T00:00:00.000Z
2003-06-22T00:00:00.000
{ "year": 2003, "sha1": "45f35bb1c204d506f529651eda2e8f100574162b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.physletb.2003.10.117", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "1ef60f630c98a41e649f6eb94734fa9585e34759", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
52878206
pes2o/s2orc
v3-fos-license
A Decoding Scheme for Incomplete Motor Imagery EEG With Deep Belief Network

High accuracy decoding of the electroencephalogram (EEG) signal is still a major challenge that can hardly be solved in the design of an effective motor imagery-based brain-computer interface (BCI), especially when the signal contains various extreme artifacts and outliers arising from data loss. The conventional process to avoid such cases is to directly reject the entire severely contaminated EEG segments, which leads to the drawback that the BCI has no decoding results during that period. In this study, a novel decoding scheme based on the combination of the Lomb-Scargle periodogram (LSP) and a deep belief network (DBN) is proposed to recognize incomplete motor imagery EEG. In particular, instead of discarding the entire segment, two forms of data removal are adopted to eliminate the EEG portions with extreme artifacts and data loss. The LSP is utilized to steadily extract the power spectral density (PSD) features from the incomplete EEG constructed by the remaining portions. A DBN structure based on the restricted Boltzmann machine (RBM) is exploited and optimized to perform the classification task. Various comparative experiments were conducted and evaluated on simulated signals and real incomplete motor imagery EEG, including the comparison of three PSD extraction methods (fast Fourier transform, Welch, and LSP) and two classifiers (DBN and support vector machine, SVM). The results demonstrate that the LSP can estimate relatively robust PSD features and that the proposed scheme can significantly improve the decoding performance for incomplete motor imagery EEG. This scheme can provide an alternative decoding solution for motor imagery EEG contaminated by extreme artifacts and data loss. It can be beneficial for promoting the stability and smoothness of a BCI system and for maintaining consecutive outputs without interruption, making it suitable for online and long-term application.

INTRODUCTION

The emergent brain-computer interface (BCI) technology allows individuals with severe neuromuscular related locomotive disabilities to directly use their brain to operate or communicate with external peripherals and environments (Daly and Wolpaw, 2008; McFarland and Wolpaw, 2011). Namely, the BCI system provides an alternative interface bridge which can bypass the conventional motor neural pathways and map brain intentions to corresponding control commands (Ortiz-Rosario and Adeli, 2013). Brain activity can be characterized by various signal modalities, such as invasive electrocorticography (ECoG) (Miller et al., 2010; Hiremath et al., 2015), non-invasive electroencephalogram (EEG) (Lazarou et al., 2018), functional magnetic resonance imaging (fMRI) (Cohen et al., 2014), and functional near-infrared spectroscopy (fNIRS) (Naseer and Hong, 2015). Due to its manageability, easy capture, high time resolution, and relative cost effectiveness, the EEG signal has been widely adopted for substantial BCI applications, such as remote quadcopter control (Lin and Jiang, 2015), motion rehabilitation (Xu et al., 2011; Zhao et al., 2016), biometric authentication (Palaniappan, 2008), and emotion prediction (Padilla-Buritica et al., 2016).
Currently, the electrophysiological brain patterns used in EEG-based BCI systems are mainly steady-state visual evoked potentials (SSVEPs) (Zhang et al., 2015; Zhao et al., 2016; Nakanishi et al., 2018), P300 (Cavrini et al., 2016), sensorimotor rhythms (SMRs) (Yuan and He, 2014; He et al., 2015), and the motion-related cortical potential (MRCP, one kind of slow cortical potential) (Karimi et al., 2017). Compared to other patterns, the SMRs-based BCI is more flexible and suitable for practical applications because of its spontaneous EEG signals, which are generated by individuals voluntarily without any external stimuli. The SMRs are derived from the motor imagery EEG, which is evoked by mentally imagining the movements of limbs without actual actions (Yuan and He, 2014). The underlying neurophysiological phenomena are event-related synchronization (ERS) and event-related desynchronization (ERD) in the SMRs, which are induced simultaneously by an exogenous event. The variability of the ERS/ERD intensity or power in particular frequency bands can be utilized to distinguish different motor imagery EEG signals (Pfurtscheller et al., 2006; Koo et al., 2015). Some remarkable SMRs-based BCI systems for motor imagery classification have been created and applied in wheelchair control (Li et al., 2013), object control in 2D (Ma et al., 2017) or 3D space (LaFleur et al., 2013), and robotic arm control (Xu et al., 2011; Meng et al., 2016). However, there are still various challenges in the establishment of efficient SMRs-based BCI systems, such as few recognizable motor types or states, relatively low recognition rates, and long training times (Yuan and He, 2014; He et al., 2015). In addition, due to the volume conduction effect of the scalp and skull, the EEG is a non-stationary and non-linear dynamic signal with a low signal-to-noise ratio, vulnerable to being distorted or submerged by complex background artifacts, which makes it challenging to accurately decode various motor imagery tasks (Blankertz et al., 2011). Consequently, the crucial issue that needs to be solved is how to improve the decoding performance of the SMRs-based BCI in the presence of various artifacts. The artifacts affecting the quality of motor imagery EEG mainly comprise electrooculography (EOG), electromyography (EMG), and electrical line interference. Traditionally, a variety of filters are available to alleviate or even eliminate electrical line interference and some high-frequency noise, such as EMG (above 35 Hz). In past research, many typical methods have been proposed to reduce EOG, such as filter-based methods (Shoker et al., 2005), independent component analysis (ICA) (Lindsen and Bhattacharya, 2010), and the discrete wavelet transform (DWT) (Peng et al., 2013). However, these methods can cause the loss of some useful EEG components, and manual parameter tuning is needed to obtain their optimal performance. Moreover, they generally fail when the EEG contains extreme noise. The EEG signals could also be accidentally overwritten or lost due to hardware or system malfunctions during recording periods. In the above cases, good decoding performance for SMRs-based BCI systems can still hardly be achieved. One intuitive but unsatisfying solution to avoid such extreme artifacts and data loss is to reject the entire severely disturbed EEG segments.
Consequently, this raises some defects, including no decoding results during a certain period, an additional EEG rejection process, and increased BCI training time. Furthermore, from a practical perspective, consecutive and smooth recognition by SMRs-based BCI systems is extremely necessary for online and long-term application. This requires that the BCI system can continuously decode brain signals without any interruption. If entire EEG segments are discarded due to extreme artifacts or data loss, the BCI system cannot obtain the decoding results during the corresponding time slice. Hence, it is very important to decode incomplete motor imagery EEG for SMRs-based BCI systems under the condition of extreme artifacts and data loss. Currently, only a few studies have been conducted to address the decoding of incomplete EEG signals. Zhang et al. applied a Bayesian tensor factorization based method to find the underlying low-rank EEG tensor from incomplete EEG signals and improve the decoding accuracy with robustness after artifact and outlier removal. Cui et al. used a fully Bayesian CP factorization method for incomplete tensors to analyze and classify incomplete EEG signals with different data missing ratios (Cui et al., 2016). However, such decoding methods for incomplete EEG need complicated matrix and tensor computations, which are not efficient for an online BCI application. Moreover, the classification accuracies obtained by these methods need further improvement. In this paper, to improve the decoding performance for incomplete motor imagery EEG and to satisfy the need for smooth operation of the BCI system, a novel decoding scheme composed of the Lomb-Scargle periodogram (LSP) for feature extraction and a deep belief network (DBN) for classification is proposed. Instead of rejecting the entire EEG segment, the portions affected by extreme artifacts or data loss were directly removed and the remaining portions were used to construct the incomplete motor imagery EEG signals in this study. Generally, the most robust and representative feature for the contents of different motor imageries is the spectral power in particular bands of ERS/ERD (Pfurtscheller et al., 2006). The conventional fast Fourier transform (FFT) or Welch periodogram can be used to estimate the spectral power features for intact motor imagery EEG. Nevertheless, these spectral analysis methods cannot work well for non-uniformly sampled signals (Stoica et al., 2009), such as incomplete motor imagery EEG signals. The LSP method can handle signals that have been sampled non-uniformly or have missing data points (Stoica et al., 2009; Stankovic et al., 2014) and is suitable for processing incomplete signals. Hence, the LSP method was adopted to extract the major spectral power features from the incomplete motor imagery EEG signals in this study. A DBN structure based on restricted Boltzmann machines (RBMs) was exploited and optimized to learn the different motor imagery EEG classes. The proposed scheme may offer the following advantages: (a) it can provide comparable decoding performance for incomplete motor imagery EEG with different proportions of data removal; (b) the extracted spectral power features are more robust for the representation of the incomplete motor imagery EEG; (c) it is applicable to consecutive and smooth operation without any disruption for the online BCI system. The remaining parts of this paper are organized as follows.
The overall systematic framework of the decoding scheme for incomplete motor imagery EEG is introduced in section Overall Decoding Scheme Framework. Accordingly, section EEG Processing Pipeline describes the EEG signal processing pipeline in detail, including artifacts and data loss preprocessing, spectral feature extraction, and DBN classifier construction. The motor imagery experiments and datasets are presented in section Motor Imagery Experimental Paradigm and Datasets. Some experimental comparison results and discussions are given in section Experimental Results and Discussions. Finally, section Conclusions and Future Works gives the conclusions and ideas for future works.

OVERALL DECODING SCHEME FRAMEWORK

The objective of our study is to improve the recognition accuracy and stability associated with different motor imagery tasks for incomplete EEG signals. The schematic diagram of the overall decoding system is illustrated in Figure 1; it primarily combines three procedures: preprocessing of raw EEG, spectral power feature extraction, and motor imagery recognition. Specifically, the raw EEG signals were captured by means of non-invasive wet electrodes arranged on the scalp while individuals performed diverse motor imagery tasks, such as imagining limb movements. The preprocessing procedure was devoted to constructing incomplete motor imagery EEG datasets, and covered band-pass filtering, sliding-window segmentation, and data loss or noise removal. The deep belief network was composed of three layers of pre-trained stacked RBMs along with an output layer of softmax regression. The spectral power features within specific frequency bands, extracted through the Lomb-Scargle periodogram, were normalized to pre-train each layer of the RBMs and fine-tune the weights of the DBN. Stochastic binary units were utilized in the pre-training stage to initialize the deep neural network. Deterministic real-valued probabilities were then used to adjust the connection weights of each layer by the error backpropagation algorithm. After the fine-tuning stage, the trained DBN was employed to decode the corresponding classes of motor imagery from incomplete EEG, such as movement intention of the left hand, right hand, or foot. The structure of each layer in the DBN was optimized and determined by various group experiments. Moreover, extensive experiments on simulated signals and multiple subjects, with different feature extraction methods (FFT or Welch) and classifiers (supervised support vector machines, SVMs), were conducted to verify the viability and effectiveness of the proposed decoding scheme for incomplete motor imagery EEG signals.

EEG PROCESSING PIPELINE

Preprocessing

In order to exclude the unwanted components of the EEG segments of interest, the preprocessing procedure was designed to transform the intact EEG with complex artifacts or data loss into incomplete EEG segments. Essentially, the preprocessing pipeline consists of three sub-parts: (a) signal filtering, (b) sliding-window segmentation, and (c) artifact or data loss removal. More explicitly, the signal filtering was dedicated to alleviating the background noise arising from experimental, instrumental, and electrical or physiological sources. The sliding windows were mainly responsible for segmenting the expected motor imagery fragments from the continuous EEG signals.
For the motor imagery EEG segments, the portions with extreme artifacts or data loss were directly discarded and the remaining portions were utilized to form incomplete signals. Signal Filtering Because EEG signals contain useful information below 100 Hz, noise elements above this frequency can be directly excluded by low-pass filters. For motor imagery EEG, the ERS/ERD phenomenon appears clearly in the frequency range of the mu (8-12 Hz) and beta (18-26 Hz) rhythm bands (Pfurtscheller et al., 2006). In other words, the 8-30 Hz band carries the most discriminative information associated with different motor imagery tasks. In this study, a fifth-order Butterworth band-pass filter with gain 1.5 and cutoff frequencies of [8, 35] Hz was applied to attenuate the specific noise components while amplifying the frequency band of interest for motor imagery classification. After signal filtering, a large part of the noise can be removed, such as EMG (high-frequency noise above 35 Hz), the low-frequency component of EOG (below 8 Hz), and electrical line interference (50 or 60 Hz). In addition, the baseline drift caused by head or limb motion can also be alleviated, reducing its impact on the raw EEG signals. Sliding Window Segmentation For a continuously recorded EEG signal, we focus only on the motor imagery segments. The band-filtered continuous EEG signals were therefore segmented by a time window corresponding to one trial of a motor imagery task. A motor imagery trial requires repeatedly imagining limb movements for a certain time in order to generate stable and effective brain activity. In existing motor imagery EEG studies, features can be extracted either from the whole EEG trial or by dividing the trial into a number of overlapping/non-overlapping time segments (Asensio-Cubero et al., 2011, 2013; Aydemir, 2016). To improve the temporal resolution of the EEG and obtain better classifier performance, a sliding window is commonly adopted to split the targeted motor imagery trial into overlapping segments, which can then be used for multiple classifications with a voting strategy (Herman et al., 2008; Shahid and Prasad, 2011; Choi, 2012). In this study, instead of using the whole EEG trial, each four-second trial was divided into 16 segments of 1 s length with a 0.2 s step size, i.e., a 1 s sliding window with 80% overlap. Artifact or Data Loss Removal Even after filtering, some artifacts may still exist in the EEG segments. Furthermore, residual elements stemming from artifacts may overlap the effective frequency band correlated with motor imagery EEG. For instance, EOG artifacts resulting from eye blinks are usually present in the 0-10 Hz band; their high-frequency elements, which overlap with the ERS/ERD bands, cannot be readily excluded by band-pass filters. On the other hand, filters are in general ineffective for signals with data loss. Instead of rejecting entire motor imagery EEG segments, an additional preprocessing step was proposed to address artifacts and data loss. For an EEG segment contaminated by extreme artifacts, the entire segment was divided into data chunks of different widths. The width, which represents the number of data points in each chunk, was generated according to a normal distribution with a mean of 10 and a standard deviation of 2.
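To make the pipeline concrete, the following is a minimal sketch of the three preprocessing sub-parts, assuming a 1 kHz sampling rate. The paper's extra pass-band gain of 1.5 is omitted here (a standard Butterworth design has unity pass-band gain), and all function names are illustrative rather than taken from the authors' code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # sampling rate in Hz

def bandpass(eeg, low=8.0, high=35.0, order=5):
    """Zero-phase fifth-order Butterworth band-pass filter."""
    b, a = butter(order, [low, high], btype="bandpass", fs=FS)
    return filtfilt(b, a, eeg, axis=-1)           # eeg: (channels, samples)

def sliding_segments(trial, win_s=1.0, step_s=0.2):
    """Split a (channels, samples) trial into overlapping 1 s segments."""
    win, step = int(win_s * FS), int(step_s * FS)
    starts = range(0, trial.shape[-1] - win + 1, step)
    return np.stack([trial[:, s:s + win] for s in starts])

def chunk_boundaries(n_points, rng):
    """Partition a segment into chunks with widths drawn from N(10, 2)."""
    widths = []
    while sum(widths) < n_points:
        widths.append(max(1, int(round(rng.normal(10, 2)))))
    widths[-1] -= sum(widths) - n_points          # trim the final chunk
    return np.cumsum(widths)

rng = np.random.default_rng(0)
trial = bandpass(np.random.randn(64, 4 * FS))     # one 4 s, 64-channel trial
segments = sliding_segments(trial)                # shape: (16, 64, 1000)
# Chunks of the first segment, ready for the removal step described next.
chunks = np.split(segments[0],
                  chunk_boundaries(segments.shape[-1], rng)[:-1], axis=-1)
```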
A form of data chunk removal was applied to directly discard the data chunks containing severe artifacts. In addition, for the case of data loss within an EEG segment, a form of data point removal was employed to eliminate acquisition outliers. For both forms of data removal, the EEG portions contaminated by extreme artifacts or data loss within a segment were directly discarded at proportions from 10% to 80% in this study. For example, in the case of 10% data chunk removal, 10% of the data chunks in a 1 s EEG segment were randomly discarded. In the case of 10% data point removal, 10% of the data points (100 points in this study) in a 1 s EEG segment (1,000 points) were randomly discarded. Subsequently, the remaining EEG data chunks or data points were combined to construct the incomplete motor imagery EEG segments. Feature Extraction Based on Lomb-Scargle Periodogram A crucial step in a BCI system is feature extraction, which is used to find mental task-related information and the most discriminative representations of brain activity for subsequent classification. The quality of the extracted features strongly affects the performance of the following recognition process. For motor imagery EEG signals, we concentrated on spectral analysis within certain frequency bands. The non-parametric fast Fourier transform (FFT) and Welch periodogram methods have been confirmed to effectively estimate spectral power features, such as the power spectral density (PSD), for intact motor imagery EEG (Herman et al., 2008; Djemal et al., 2016). However, because incomplete motor imagery EEG signals are a kind of non-uniformly sampled sequence, these methods may not extract stable spectral features from them. In our research, the Lomb-Scargle periodogram was adopted to estimate the spectral power features of incomplete motor imagery EEG segments. An incomplete EEG segment is denoted by X ∈ R^(C×N), where C is the number of channels and N is the number of signal points. For each channel, the signal series is denoted by eeg(t_i), where i = 1, 2, ..., N. Lomb-Scargle Periodogram For the signal series eeg(t_i), the spectral power at frequency ω_f is estimated by solving the following least-squares fitting problem:

min_{α,φ} Σ_{i=1}^{N} [eeg(t_i) − α cos(ω_f t_i + φ)]²

For simplicity, the dependence of α and φ on ω_f is removed by substituting a = α cos(φ) and b = −α sin(φ). The fitting problem can then be reformulated in terms of a and b:

min_{a,b} Σ_{i=1}^{N} [eeg(t_i) − a cos(ω_f t_i) − b sin(ω_f t_i)]²

The optimal parameters â and b̂ minimizing this expression are obtained by setting the partial derivatives with respect to a and b to zero and solving the resulting linear system. The power at the specific frequency ω_f corresponding to the optimal parameters â and b̂ is then given by the squared amplitude of the fitted sinusoid:

P(ω_f) = α̂² = â² + b̂²

Accordingly, the powers of each channel signal at all frequencies ω of interest are obtained by repeating the fit over the frequency grid. This estimation step was repeated for all channels of the incomplete motor imagery EEG segments to extract the corresponding spectral features. Previous research demonstrated that significant power oscillations in response to various motor imagery tasks are mostly located in the 8-30 Hz band (Pfurtscheller et al., 2006; Shahid and Prasad, 2011). In this article, the band of interest was divided into four sub-bands with a bandwidth of 5 Hz: alpha (8-13 Hz), sigma (13-18 Hz), low beta (18-23 Hz), and high beta (23-28 Hz) rhythms. For each channel, the PSD feature of each sub-band was computed by averaging the powers within the corresponding frequency range.
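A compact illustration of this feature extraction step, using SciPy's Lomb-Scargle implementation (which expects angular frequencies); the frequency-grid resolution and the names used here are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.signal import lombscargle

BANDS = [(8, 13), (13, 18), (18, 23), (23, 28)]   # alpha, sigma, low/high beta

def lsp_band_features(t, x, n_freqs=200):
    """t: times (s) of the remaining samples of one channel; x: their values."""
    x = x - x.mean()                               # remove the DC offset
    freqs_hz = np.linspace(8.0, 28.0, n_freqs)
    pgram = lombscargle(t, x, 2 * np.pi * freqs_hz, normalize=True)
    # Average the power within each 5 Hz sub-band to get 4 PSD features.
    return np.array([pgram[(freqs_hz >= lo) & (freqs_hz < hi)].mean()
                     for lo, hi in BANDS])

# Example: a 1 s segment sampled at 1 kHz with 30% of the points removed.
rng = np.random.default_rng(0)
t_full = np.arange(1000) / 1000.0
x_full = np.sin(2 * np.pi * 10 * t_full)           # a 10 Hz mu-band oscillation
keep = rng.random(t_full.size) > 0.3
features = lsp_band_features(t_full[keep], x_full[keep])   # shape: (4,)
```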
Hence, all PSD features for an EEG segment were concatenated by channel arrangement into a feature vector:

V = [p_11, p_12, p_13, p_14, p_21, p_22, p_23, p_24, ..., p_C1, p_C2, p_C3, p_C4]

where C is the number of channels and p_cb denotes the PSD feature of sub-band b in channel c. Feature Normalization Generally, the original features can be fed directly into a neural network or an SVM classifier to recognize which motor imagery class the current EEG signal belongs to. However, spectral feature variations caused by different channels or different motor imagery trials may affect the performance of the classifiers. To eliminate the variation in feature scale and accelerate the convergence of the learning algorithm, a min-max normalization step was applied to the feature vector set V: the raw features were shifted by the minimum and divided by the difference between the maximum and the minimum to scale all values between 0 and 1,

V' = (V − min(V)) / (max(V) − min(V)).

Deep Belief Network Based on Restricted Boltzmann Machines Considering its advantages of high-speed, parallel computation, a neural network classifier is more suitable and efficient for online BCI applications, and the trained parameters can be directly used to classify new EEG signals. Currently, a variety of deep learning architectures based on neural networks have been constructed and applied to motor imagery EEG classification (Kumar et al., 2016; Tabar and Halici, 2016). In this study, we adopted a deep belief network (DBN) structure to obtain a more robust and ultimately more notable representation of the incomplete motor imagery EEG. A DBN can be formed by multiple layers of stacked restricted Boltzmann machines (RBMs) or auto-encoders. Restricted Boltzmann Machine (RBM) Each RBM is composed of a visible layer, a hidden layer, and connection weights between the two layers, and it is greedily trained in an unsupervised mode (Hinton et al., 2006; Tang et al., 2015). The basic structure of an RBM is presented in Figure 2. The neurons used in the RBM are stochastic binary units. Traditionally, the visible layer receives the input data and has undirected connections with the neurons of the hidden layer, while neurons within the same layer are disconnected. The hidden layer is responsible for reconstructing the input data as closely as possible by repeatedly tuning the connection weights and biases. For motor imagery EEG, each visible neuron represents a spectral feature with a hypothetically Gaussian distribution. The energy function of a joint configuration of the two layers is defined as

E(v, h) = − Σ_i b_i v_i − Σ_j a_j h_j − Σ_{i,j} v_i w_ij h_j

where v_i and h_j are the binary states of visible neuron i and hidden neuron j respectively, b_i and a_j are the corresponding neuron biases, and w_ij is the connection weight between them. Based on the Boltzmann distribution and the energy function, the joint probability of a pair of visible and hidden layers is determined by

p(v, h) = e^(−E(v,h)) / Z

where Z = Σ_{v,h} e^(−E(v,h)) denotes the partition function or normalization term. Since the hidden neurons are conditionally independent given the visible vector v (there are no connections between them), the conditional probability of neuron h_j being 1 is obtained as

p(h_j = 1 | v) = σ(a_j + Σ_i v_i w_ij).

Similarly, given the hidden vector h, the conditional probability of visible neuron v_i being 1 is determined by

p(v_i = 1 | h) = σ(b_i + Σ_j h_j w_ij),

where σ(•) denotes the logistic sigmoid function.
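As a small sketch of these conditionals, assuming binary units and NumPy conventions (v has shape (batch, n_visible) and W has shape (n_visible, n_hidden)); this is illustrative, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_h_given_v(v, W, a):
    """p(h_j = 1 | v) = sigmoid(a_j + sum_i v_i w_ij), vectorized."""
    return sigmoid(v @ W + a)

def p_v_given_h(h, W, b):
    """p(v_i = 1 | h) = sigmoid(b_i + sum_j h_j w_ij), vectorized."""
    return sigmoid(h @ W.T + b)

def sample_bernoulli(p, rng):
    """Draw stochastic binary states from the given probabilities."""
    return (rng.random(p.shape) < p).astype(float)
```

These conditionals are all that is needed for the contrastive divergence training described next: one alternating (Gibbs) step between them yields the "reconstruction" statistics in the update rules below.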
Given the training dataset S = {s_1, s_2, ..., s_(n_s)}, where n_s is the number of training samples, the parameters of the RBM, including the connection weights w and the biases a and b, are trained to fit the training samples by maximizing a log-likelihood function. Based on gradient ascent and the contrastive divergence method (Hinton et al., 2006), the derivative of the log-likelihood with respect to the weights w can be formulated as

∂ log p(v) / ∂w_ij = E_data[v_i h_j] − E_model[v_i h_j]

where E_data[•] and E_model[•] are the expectations under the distribution of the training data and of the model, respectively. The data-dependent expectation E_data[v_i h_j] is computed directly from the training samples, while the contrastive divergence method, using a short Gibbs sampling chain, is adopted to approximate the model expectation E_model[v_i h_j]. Hence, the learning rule for the connection weights is

Δw_ij = η (⟨v_i h_j⟩_data − ⟨v_i h_j⟩_recon).

Similarly, the updating rules for the biases are, respectively,

Δb_i = ε (⟨v_i⟩_data − ⟨v_i⟩_recon) and Δa_j = ε (⟨h_j⟩_data − ⟨h_j⟩_recon),

where η and ε denote the learning rates. According to these updating rules, each RBM is trained to reconstruct the input data in an unsupervised way. Deep Belief Network Three layers of RBMs were stacked to construct a deep belief network topped with a layer of softmax regression in this study, as shown in Figure 1. The raw input data were fed to the bottom RBM, and the output of the hidden layer of each lower RBM was delivered to the visible layer of the next higher RBM. Compared to logistic regression, softmax regression solves multi-class recognition problems by statistically estimating the class with the maximum probability for a given sample (Salakhutdinov and Hinton, 2012). The DBN training primarily consisted of a pre-training stage and a fine-tuning stage. The pre-training stage was conducted in each RBM layer to obtain the initial parameters of the DBN. Softmax regression was then added so that the prediction error could be used to optimize the parameters with the backpropagation algorithm in the fine-tuning stage. Additionally, some constraint terms were incorporated into the cost function of the softmax regression to avoid overfitting, including weight decay and a sparsity constraint (Cho, 2013; Plis et al., 2014; Jiang et al., 2016). In our research, the weight decay was set to 0.05 and the sparsity constraint to 0.1. The learning rates for the connection weights and biases were set to 0.5 and 0.25 respectively. All these parameters were determined and optimized by a grid search procedure with 5-fold cross-validation. MOTOR IMAGERY EXPERIMENTAL PARADIGM AND DATASETS In our study, nine right-handed volunteers (all male, mean age 26.5 years, ranging from 25 to 28 years, numbered S01-S09) with thin hair participated in the motor imagery experiments. All subjects were healthy, without any history of neurological, psychiatric, or cognitive disorders. None of them had any prior experience with BCI experiments related to motor imagery. The details of the motor imagery experimental procedures were explained to all participants, and written informed consent was obtained from all subjects before the experiment. The experimental protocol was reviewed and approved by the local ethics committee of the University of Chinese Academy of Sciences. In an electromagnetically shielded environment, the participants were seated in a comfortable chair with armrests and watched an LCD screen from a distance of about 1 m while wearing an EEG recording cap. Three kinds of motor imagery tasks were performed: imagining left hand, right hand, and foot movements.
Before the experiment, the instructor explained the meaning of kinesthetic imagery of limb movements to the participants. Additionally, all participants performed motor imagery practice to become familiar with the kinesthetic sensation. Each participant carried out an experimental block consisting of 10 sessions, which lasted ∼1.5 h. All sessions were executed under the same conditions, and a rest period of several minutes was given between two consecutive sessions. The experimental paradigm of each session is depicted in Figure 3. For all sessions, the first 2 s was an idle state with a black screen. Subsequently, a green fixation cross appeared at the center of the screen for 1 s to indicate the beginning of one trial. Immediately afterwards, a red arrow pointing to the left, right, or down appeared for 5 s in addition to the fixation cross. During this period, the subjects were instructed to perform the relevant motor imagery task according to the direction of the arrow, such as imagining repeated finger flexion and extension with the left or right hand at a frequency of approximately 1 Hz. Meanwhile, the subject had to concentrate on imagining the kinesthetic experience of the limb movements as much as possible. In addition, to minimize artifacts, the participants were asked to limit their head movements and to try not to blink or swallow during the motor imagery period. During the inter-trial interval, the arrow cue and fixation cross disappeared, leaving a black screen for 2 s, and the subject was instructed to remain in the idle state instead of performing motor imagery. To avoid adaptation of brain activity to a given motor imagery task, each of the 3 cues was presented 10 times in a random sequence in each session. Hence, there were 30 trials per session, giving a total of 300 motor imagery trials per experiment for each subject. During the motor imagery tasks, EEG signals were collected through a grid cap with 64 Ag/AgCl passive electrodes provided by Plexon Inc., USA. The electrodes, separated by roughly 3 cm, were closely arranged on the cap according to the international 10-20 positioning system. Conductive glue or gel was injected into each electrode for higher conductivity and better attachment. The left mastoid electrode was used as the reference channel and the right mastoid electrode served as the ground. The original EEG data were recorded at a sampling rate of 1 kHz by the OmniPlex Neural Data Acquisition System (Plexon Inc., USA), including analog pre-amplification, analog-to-digital conversion, and a low-pass filter with a cutoff frequency of ∼200-300 Hz. An additional 50 Hz notch filter was applied to eliminate the power line artifacts. Finally, the recorded motor imagery EEG signals for each subject were saved in the form of times × channels × trials, i.e., 5,000 × 64 × 300. To obtain the dominant motor imagery EEG, a 4 s segment, from 0.5 s to 4.5 s after the cue, was cut out of each trial. As mentioned in section EEG Processing Pipeline, the data were further band-pass filtered and segmented by a sliding window. Hence, the motor imagery dataset of each subject was represented by a three-dimensional array of size 1,000 × 64 × 4,800, where 1,000 was the length of the time window (1 s), 64 was the number of channels, and 4,800 was the number of motor imagery segments spanning the three classes. For each channel signal, 4 spectral power features were estimated by the Lomb-Scargle periodogram method.
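Putting the pieces together, the per-subject data reduce to a feature matrix plus labels; the following is a minimal sketch of the assembly and the split described next (array contents are stand-ins, and the stratification is an assumption rather than a stated detail).

```python
import numpy as np
from sklearn.model_selection import train_test_split

n_segments, n_channels, n_bands = 4800, 64, 4
psd = np.random.rand(n_segments, n_channels, n_bands)  # stand-in LSP features
X = psd.reshape(n_segments, n_channels * n_bands)      # shape: (4800, 256)
y = np.repeat([0, 1, 2], n_segments // 3)              # three motor imagery classes
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)  # 75% / 25% split
```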
Then, the whole feature dataset was 4,800 × 256 for each subject, where 256 was the number of features (4 sub-bands × 64 channels). The datasets were randomly divided into 75% training data (3,600 × 256) and 25% testing data (1,200 × 256). EXPERIMENTAL RESULTS AND DISCUSSIONS Simulation Comparison With Different Spectral Estimation Methods To evaluate the effectiveness of the Lomb-Scargle method for incomplete signals, a simulated signal was devised by mixing two sinusoidal signals with dominant frequencies of 4 Hz and 8 Hz, respectively. The amplitude ratio between the 4 Hz and 8 Hz sinusoids was set to 0.75. Data points were randomly removed from the simulated signal at a given proportion to construct incomplete or irregular signals. For comparison with the Lomb-Scargle periodogram, the traditional Welch and FFT periodogram methods were also applied to estimate the spectral power of the different incomplete signals. The estimated spectral powers for the intact signal and for the incomplete signals with various degrees of missing data are given in Figure 4. For the simulated signal, the data points were eliminated at proportions from 10 to 80% in steps of 10%. The powers were normalized to the same scale by dividing by a factor equal to the proportion of remaining data. From Figure 4, we can see that for all three estimation methods the spectral components at the dominant frequencies of 4 and 8 Hz become less and less significant as the proportion of removed data increases. In particular, the spectral powers were obviously degraded once more than 30% of the data was removed. However, the spectral powers estimated by the Lomb-Scargle periodogram were more pronounced than those estimated by the Welch or FFT methods for the various incomplete signals (the p-value from a paired t-test was < 0.05). Indeed, the components at 4 Hz and 8 Hz were well recovered even for the incomplete signal with 80% of the data removed. This demonstrates that, compared to traditional spectral analysis methods such as FFT and Welch, the LSP method can estimate more stable and accurate spectral features from various incomplete or irregular signals, confirming that the LSP is particularly suited to estimating rhythm components in non-uniformly sampled signals (Stoica et al., 2009). Incomplete Motor Imagery EEG: Point Removal Form and Chunk Removal Form To systematically validate the discriminative ability of the PSD features extracted by the LSP method for incomplete EEG, the two removal forms were adopted to randomly remove portions of the intact motor imagery segments and construct incomplete signals. For the condition of data loss, the data point removal form was applied to eliminate EEG outliers caused by high contact impedance between the electrodes and the scalp. Figure 5 presents the recognition performance for intact EEG and for incomplete EEG with different proportions of data point removal for the nine subjects, obtained by the DBN classifier with three feature extraction methods (FFT, Welch, and Lomb-Scargle).
FIGURE 4 | The comparison results of spectral power estimations for the complete signal and the incomplete signals with different proportions of removal (from 10 to 80% in steps of 10%). Three estimation methods were used: Lomb-Scargle, Welch, and FFT periodogram.
FIGURE 5 | The classification results of the intact EEG and incomplete EEG with various ratios of data point removal (from 10 to 80% in steps of 10%) for the nine subjects (S01 to S09). Three spectral feature extraction methods were compared: the black, red, and blue lines represent the accuracy of the DBN with FFT, Welch, and Lomb-Scargle feature extraction, respectively.
For simplicity, the three methods are denoted FFT+DBN, Welch+DBN, and Lomb-Scargle+DBN, respectively. From an overall perspective, Figure 5 shows that for all three methods the recognition accuracy gradually descends as the proportion of data point removal increases. For the intact motor imagery EEG, the average accuracies (± standard deviation) across the nine subjects were 72.27% (±1.33%) for FFT+DBN, 73.26% (±1.44%) for Welch+DBN, and 74.77% (±0.43%) for Lomb-Scargle+DBN. There was no significant difference (p > 0.078, paired t-test) between the average accuracy of Lomb-Scargle+DBN and those of the other methods for the intact EEG across all subjects. This suggests that, compared to the FFT and Welch methods, the LSP method does not provide markedly higher-quality PSD features for intact motor imagery EEG. In particular, for the intact EEG of subject 1 (S01), the accuracy of Welch+DBN was higher than that of Lomb-Scargle+DBN. Considering the computational complexity and efficiency, Lomb-Scargle+DBN is therefore not preferable for intact motor imagery EEG classification. However, for incomplete EEG with different point removal ratios, the accuracy variation of Lomb-Scargle+DBN was clearly smaller than those of FFT+DBN and Welch+DBN. More specifically, for incomplete EEG with point removal in the range from 10 to 80%, the mean accuracy difference across the nine subjects was 13.38% (±2.67%) for FFT+DBN, 13.08% (±3.07%) for Welch+DBN, and 7.45% (±1.18%) for Lomb-Scargle+DBN. The classification performance of Lomb-Scargle+DBN was significantly better than those of FFT+DBN (p = 0.012 < 0.05, paired Student's t-test) and Welch+DBN (p = 0.008 < 0.01, paired Student's t-test) for the incomplete motor imagery EEG. In other words, the spectral power features extracted by the Lomb-Scargle periodogram can significantly improve the classification accuracy of the DBN for various degrees of incomplete EEG. An acceptable classification accuracy (above 65%) was achieved by the Lomb-Scargle+DBN method even when 80% of the points were eliminated, while the accuracies of FFT+DBN and Welch+DBN were ∼60% or even lower. Interestingly, Figure 5 also shows that the accuracies for incomplete EEG declined sharply and substantially beyond 30% data point removal. Especially in the case of subject 1 (the S01 EEG dataset), the accuracy obtained by FFT+DBN or Welch+DBN dropped roughly from 70 to 53% as the data point removal increased from 30 to 80%. This finding implies that the performance of the spectral power features deteriorated distinctly for the FFT and Welch periodogram methods, in accordance with the previous simulation comparison.
FIGURE 6 | The classification results of intact EEG and incomplete EEG with various ratios of data chunk removal (from 10 to 80% in steps of 10%) for the nine subjects (S01 to S09). Three spectral feature extraction methods were compared: the black, red, and blue lines represent the accuracy of the DBN with FFT, Welch, and Lomb-Scargle feature extraction, respectively.
Similarly, to eliminate the effects of extreme artifacts, the data chunk removal form was adopted to remove the EEG portions contaminated by strong electrophysiological artifacts or complex background noise. The corresponding classification results for intact EEG and incomplete EEG with various ratios of data chunk removal are presented in Figure 6. Compared to data point removal, the accuracies for incomplete EEG decreased dramatically and significantly across the different degrees of data chunk removal (p = 0.022 < 0.05, paired Student's t-test). In particular, the average accuracies for incomplete EEG with 80% data chunk removal were 51.03% (±2.23%), 51.47% (±1.60%), and 64.17% (±0.63%) for FFT+DBN, Welch+DBN, and Lomb-Scargle+DBN respectively, significantly lower than those for incomplete EEG with 80% data point removal, which were 58.13% (±2.52%), 59.15% (±2.87%), and 66.44% (±1.13%). More generally, the mean accuracy difference for incomplete EEG with chunk removal in the range from 10 to 80% across the nine subjects was 20.51% (±2.39%), 19.68% (±2.21%), and 9.30% (±1.17%) for FFT+DBN, Welch+DBN, and Lomb-Scargle+DBN respectively. The statistical analysis indicated that the proposed Lomb-Scargle+DBN method for incomplete EEG was consistently and significantly superior to the other two methods (p = 0.007 < 0.01 for FFT+DBN vs. Lomb-Scargle+DBN, and p = 0.007 < 0.01 for Welch+DBN vs. Lomb-Scargle+DBN, paired Student's t-test). Moreover, the accuracies for incomplete EEG under data chunk removal varied remarkably more than those under data point removal (p < 0.05, paired t-test). This can be attributed to the fact that, besides the extreme artifacts, informative signals corresponding to the motor imagery tasks were also eliminated by the chunk form within the same contaminated segments. Thereby, for incomplete EEG with data chunk removal, the extracted spectral powers of the mu/beta rhythms related to the motor imagery tasks were relatively inferior to those for incomplete EEG with data point removal. In addition, the overall recognition performance for incomplete EEG across various degrees of point and chunk removal is provided in Table 1, in which the maximum mean of each comparative experiment is highlighted in bold. The results (mean ± standard deviation) were obtained by averaging the accuracies for incomplete EEG with ratios of point and chunk removal ranging from 10 to 80%. It can be observed that the classification results of Lomb-Scargle+DBN were significantly higher than those of FFT+DBN and Welch+DBN for incomplete EEG with both point and chunk removal. The performance increments between Lomb-Scargle+DBN and FFT+DBN were 5.48% and 6.60% for incomplete EEG with point and chunk removal, respectively; the p-values computed by the paired Student's t-test for this comparison were all < 0.001. Likewise, the performance increments between Lomb-Scargle+DBN and Welch+DBN were 4.67% and 6.44% for incomplete EEG with point and chunk removal, respectively, with p-values also < 0.001. Furthermore, in terms of standard deviation, the Lomb-Scargle+DBN method (2.68% for the point form, 3.58% for the chunk form) showed markedly lower variability than FFT+DBN (5.08% for the point form, 7.70% for the chunk form) and Welch+DBN (4.93% for the point form, 7.49% for the chunk form).
Therefore, it is evident that the Lomb-Scargle+DBN method can significantly and consistently improve the recognition performance for the different types of incomplete motor imagery EEG. Comparison of DBNs With Various Structures It should be noted that the DBN structures adopted in the incomplete EEG experiments were determined and selected by an optimization procedure. As previously mentioned, the DBN was constructed from three hidden layers of pre-trained RBMs and an output layer of softmax regression. In this study, 256-dimensional feature vectors were fed to the input layer of the DBN, so the dimension of the input layer was 256. Furthermore, three units were used in the softmax output layer, corresponding to the three motor imagery tasks. To obtain the relevant optimal parameters, various numbers of units were tried for the three hidden layers. More explicitly, the number of units in one hidden layer was varied over a range while the numbers of units in the remaining two hidden layers were kept fixed; this coordinate-wise search keeps the otherwise combinatorial parameter selection of the DBN tractable and yields comparable solutions rapidly. To evaluate the sensitivity of the hidden layers to changes in the unit numbers, 5-fold cross-validation was applied for the classification of the motor imagery EEG. For each subject, the intact EEG and the incomplete EEG with various ratios of data removal were divided into 5 sections, of which 4 were used for training and the remaining one for testing. The average performance was obtained by repeating this procedure 5 times. Additionally, all the evaluations were conducted on features extracted by the Lomb-Scargle periodogram. For the first hidden layer, the number of units was varied over the range [15 30 45 60 75 90] while the other two hidden layers were kept constant at 50 and 35 units, respectively. The corresponding comparison of classification performance for the DBN with different numbers of units in the first hidden layer is presented in Table 2. The results show that the maximum mean accuracy of 71% was obtained with 60 units in the first hidden layer. The decoding accuracies were markedly better with 60 units than with the other unit numbers for the first hidden layer (p < 0.05, paired Student's t-test). Similarly, Table 3 gives the performance of the second hidden layer with unit numbers varying over the range [10 20 ...]. Comparison Between DBN and SVM In this series of experiments, the performance of DBN and SVM was compared with respect to the recognition accuracy for incomplete EEG in the cases of point removal and chunk removal respectively. As previously described, the Lomb-Scargle periodogram can extract effective and robust spectral features from various incomplete EEG to promote the classification performance. Hence, the DBN and SVM classifiers were executed on the same feature datasets extracted by the Lomb-Scargle method. For the three motor imagery tasks, three binary SVMs with a radial basis function (RBF) kernel were built, and the final label was obtained by a majority voting strategy. The relevant parameters of the binary SVMs, such as the regularization parameter C and the kernel width σ of the RBF, were optimized using a grid-search procedure (Quitadamo et al., 2017) over the range [−5, 5]. In addition, the 5-fold cross-validation method was also applied to avoid overfitting for both classifiers.
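A hedged sketch of this SVM baseline using scikit-learn, whose multi-class SVC trains one-vs-one binary SVMs with voting by default; the exact grid (powers of two over [−5, 5]) is one plausible reading of the stated search range, not the authors' reported grid.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# C and the RBF width searched over 2^-5 ... 2^5 with 5-fold cross-validation.
param_grid = {"C": 2.0 ** np.arange(-5, 6),
              "gamma": 2.0 ** np.arange(-5, 6)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, n_jobs=-1)
# X_train: (n_segments, 256) Lomb-Scargle features; y_train: labels in {0, 1, 2}
# search.fit(X_train, y_train)
# best_svm = search.best_estimator_
```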
Figures 7 and 8 present the comparison results between DBN and SVM for intact EEG and incomplete EEG in the cases of point removal and chunk removal (ratios from 10 to 80% in steps of 10%), respectively. For the intact motor imagery EEG, there was no significant difference in performance between DBN and SVM across the nine subjects (p = 0.062 > 0.05, paired Student's t-test), with mean accuracies of 74.77% (±0.44%) and 73.74% (±0.78%) respectively. From Figure 7, the overall performance of the DBN on incomplete EEG with different ratios of point removal was better than that of the SVM. In particular, for subjects 5, 8, and 9 (the S05, S08, and S09 EEG datasets), the accuracies of the DBN for incomplete EEG beyond 30% data point removal were clearly improved, with an average increment of 2.64%. However, for incomplete EEG with different ratios of data chunk removal, the accuracy improvement of the DBN over the SVM was not significant. For some subjects, such as subjects 2, 3, 4, and 9, the SVM could outperform the DBN to some degree for incomplete EEG with chunk removal (see Figure 8). For further clarification, the average accuracies (± standard deviation) of the DBN and SVM over incomplete EEG with various ratios of data removal (from 10 to 80% in steps of 10%) are presented in Table 5 for both the point removal and chunk removal cases.
FIGURE 7 | The comparative performance of the DBN and SVM classifiers for intact EEG and incomplete EEG with various ratios of data point removal (from 10 to 80% in steps of 10%) for the nine subjects (S01 to S09).
As shown, for incomplete EEG with point removal, the average classification performance of the DBN (70.72 ± 2.65%) was higher than that of the SVM (69.89 ± 3.08%) across the nine subjects; the p-value computed from the Student's t-test between DBN and SVM was 0.021 < 0.05. Moreover, the DBN exhibited relatively lower variability than the SVM, with mean standard deviations of 2.65% and 3.08% respectively. These results indicate that the DBN was superior to the SVM for the classification of incomplete EEG under point removal. In the case of chunk removal, however, the accuracy gap between DBN (68.86 ± 3.58%) and SVM (68.74 ± 3.53%) was smaller than in the case of point removal, and there was no statistical difference between DBN and SVM (p = 0.79 > 0.05, paired Student's t-test) for incomplete EEG with chunk removal. This may be because, compared to incomplete EEG with point removal, the features extracted from incomplete EEG with chunk removal were relatively poor and weakened the performance of both the DBN and the SVM. Nevertheless, it is likely that the DBN can outperform the SVM on motor imagery classification of incomplete EEG when the parameters are carefully tuned and extra layers are added. CONCLUSIONS AND FUTURE WORKS In this study, a decoding scheme based on the combination of LSP and DBN was proposed to recognize incomplete motor imagery EEG segments. To construct the incomplete EEG segments, the point and chunk removal forms were used to randomly and proportionally eliminate unusable EEG points or portions: the point removal form was mainly used to eliminate outliers within the EEG segments caused by data loss, and the chunk removal form was used to eliminate portions of the EEG segments affected by extreme artifacts.
The LSP method was used to extract robust spectral power features of the mu/beta rhythms related to the motor imagery tasks from the incomplete EEG. The DBN, consisting of three layers of stacked restricted Boltzmann machines (RBMs) and a softmax regression layer, was devised to perform the motor imagery classification. Since this was a preliminary study, the chunk and point removal was performed in a random manner; for real applications, a more specific search process would be needed to determine which chunks or points should be removed.
FIGURE 8 | The comparative performance of the DBN and SVM classifiers for intact EEG and incomplete EEG with various ratios of data chunk removal (from 10 to 80% in steps of 10%) for the nine subjects (S01 to S09).
To validate the effectiveness of the proposed decoding scheme for incomplete EEG, various comparative experiments were conducted and evaluated on simulated signals and real motor imagery EEG, including comparisons of different spectral power estimation methods (FFT, Welch, and Lomb-Scargle) and different classifiers (DBN and SVM). For the simulation comparison of the three spectral estimation methods, the results show that the Lomb-Scargle method can extract more stable and pronounced spectral power from incomplete or irregular signals. Furthermore, the PSD features extracted by the three estimation methods were classified using a DBN, and the classification accuracy of Lomb-Scargle+DBN did not decline dramatically, in contrast to FFT+DBN and Welch+DBN, for incomplete motor imagery EEG with increasing proportions of point or chunk removal (from 10% to 80% in steps of 10%). These results suggest that Lomb-Scargle+DBN can significantly and consistently improve the recognition performance for incomplete motor imagery EEG; the p-values of the statistical comparisons between Lomb-Scargle+DBN and FFT+DBN or Welch+DBN were below 0.05 for incomplete EEG in both the point removal and chunk removal cases. After three groups of experimental tests and comparisons, the structure of the DBN was set to 256 × 60 × 50 × 35 × 3 to improve its learning performance. The extended comparison between DBN and SVM indicated that the DBN was superior to the SVM for incomplete EEG under point removal. Moreover, for the classification of intact motor imagery EEG, there was no significant difference in average accuracy (p > 0.078, paired t-test) between Lomb-Scargle+DBN and the other methods (FFT+DBN and Welch+DBN); considering the computational complexity and efficiency, Lomb-Scargle+DBN is therefore not preferable for intact motor imagery EEG classification. Overall, the proposed decoding scheme is well suited to improving the classification performance for incomplete motor imagery EEG. This means that, instead of rejecting an entire segment, a motor imagery EEG segment with data loss or extreme artifacts can still be used to generate comparable classification results once the affected portions are eliminated. By decoding incomplete EEG, the proposed scheme helps improve the stability and smoothness of a BCI system and maintain continuous outputs; in particular, for online BCI systems, the intentions of subjects can be continuously decoded from the EEG signals without interruption.
In future work, online tests based on motor imagery EEG will be carried out to evaluate the validity of the proposed decoding scheme for incomplete signals. Additionally, because the Lomb-Scargle periodogram is particularly suited to estimating rhythm components in non-uniformly sampled signals (Stoica et al., 2009), it may be applicable to other EEG modalities involving spectral analysis; for example, the proposed method could be applied to decode incomplete SSVEP EEG. For the structure of the DBN, more dedicated procedures can be implemented to further boost the decoding performance, such as adding RBM layers and utilizing search algorithms to optimize the hyper-parameters of the DBN. Additionally, the optimal frequency bands associated with the relevant motor imagery tasks can be further investigated to promote the overall performance of the proposed method. Regarding the segmentation with an 80%-overlap sliding window, there was a correlation between the 16 samples from the same EEG trial, and this factor may influence the performance of the proposed method for incomplete EEG classification. In future work, similar to the study of Asensio-Cubero et al., a comparative analysis should be conducted by applying the proposed method to three different segmentation strategies: (1) no segmentation, applying the proposed method directly to the whole EEG trial; (2) uniform segmentation without overlapping; and (3) segmentation with different degrees of overlap (the sliding window method) (Asensio-Cubero et al., 2011). In this study, the BCI system based on motor imagery EEG works in a synchronous way; an asynchronous BCI system needs to be further investigated in future work. In conclusion, the introduced decoding scheme provides an effective solution for incomplete motor imagery EEG in BCI systems. AUTHOR CONTRIBUTIONS YC, XZ, YijZ, WX, and JH conceived the study and designed the decoding scheme for this research. YC and YZ carried out the comparative experiments, including the acquisition and analysis of the data. YC, XZ, and YijZ interpreted the experimental results. YC drafted the manuscript. XZ, WX, JH, and YiwZ revised the manuscript.
Estimation of Serpentinite Rock Mass Strength of Placetas-Cuba Underground Gold Mine Deposit This study estimated the strength of the serpentinite rock mass of the underground gold mine “Oro Descanso,” Placetas, Cuba. The rock mass was classified into its lithological groups of massive serpentinite, sheared serpentinite, and gabbros. The geotechnical information was taken from the well log data obtained during the drilling process (geological logs). The structural analysis was carried out through field observation and quantified by the Geological Strength Index (GSI), with average values of 60 for massive serpentinite, 38 for sheared serpentinite, and 78 for gabbros. The generalized Hoek-Brown criterion, implemented in the software program RocLab 1.0 (2004 version), was employed for the analysis and the determination of the rock mass local compressive strength (massive serpentinite = 1.733 MPa; sheared serpentinite = 0.464 MPa; gabbros = 10.354 MPa) and the global strength (massive serpentinite = 6.561 MPa, sheared serpentinite = 5.657 MPa, and gabbros = 22.547 MPa). These estimated values characterize a brittle failure mode, and supports are therefore recommended. INTRODUCTION The Cuban gold mineralisation is widespread, occurring as alluvial and endogenous deposits. Available reports show that they are classified as Au-Ag deposits with Sb veins and quartz-Au sulphides with chalcopyrite. There are several distinct mining districts where gold has been mined, with over 400 mining companies operating in Cuba (Figure 1). Many gold mines are located near Santa Clara in central Cuba. The Placetas underground mine is not as famous as some other mines dating back to the prehistoric era in Cuba or to the early Spanish conquest (Gold in Cuba - Mining and Prospecting Areas, Rare Gold Nuggets, June 28, 2015). Much geotechnical information on an ore deposit is required for design prior to mining and extraction. Reports (Qui et al., 2017) show that the mechanical properties of transversely isotropic rocks have attracted much research interest in the past years (Cho et al., 2012; Dan et al., 2013), both because of the anisotropic behavior exhibited by this type of rock (Vervoort et al., 2014) and because of the huge number of projects built on these rocks. It is thus necessary to understand the anisotropic behavior of these rocks in contexts such as the exploitation of shale gas (Harris et al., 2011), the roof support design of transversely isotropic rock (Lee et al., 2008), and the excavation of anisotropic rocks in underground tunnels (Zhang and Sun, 2011). The estimation of the strength and strength parameters of the rock mass of any underground mine demands a high level of reliability. The strength of any rock mass depends on many factors, such as the strength of the intact rock, the condition of the discontinuities, water inflow, anisotropy, and homogeneity. Including all these parameters for a reliable estimate of the rock mass strength is a complex task that often requires complex and costly state-of-the-art measuring instruments (Zhao et al., 2010), which are not affordable for most developing countries. However, many researchers have proposed empirical and theoretical methods that have received worldwide acceptance due to their practical approval. Barton (1973) and Barton and Choubey (1977) proposed an empirical method for the estimation of the shear strength of rock masses, and Maksimovic (1996) proposed a hyperbolic relation which, in contrast to Barton's, does not need any empirical assumption in order to determine the shear strength of a rock mass.
Hoek and Brown (1980) and Hoek (2007) proposed a semi-empirical method for the determination of rock mass strength, and this is the method applied in this study to estimate the strength of the rock mass of the Oro Descanso Mine. Geological Characteristics of the Deposit The study area is located in the municipality of Placetas, Villa Clara, Cuba, with coordinate points according to the Lambert system: A (274300, 628000), B (274300, 628450), C (273850, 628450), and D (273850, 628000). Geologically, it is located in a principal fold substratum in the central region of Cuba, where a complex rock mass of continental nature and of oceanic (ophiolitic) nature is found, together with different mixtures of earth types. The deposit lies within complex ophiolitic rocks emplaced over the sedimentary sequence of the continental bank and, at the same time, overrun by the Cretaceous volcanic arc (Orestes et al., 2010). The principal rock mass type is massive serpentinite with veins of gabbros. The mineral occurrence is associated with a tectonic zone preserved within a massive serpentinite wedge. Serpentinite is formed from olivine via several reactions between the magnesium end-member forsterite and the iron end-member fayalite. In these reactions there is an exchange of silica between forsterite and fayalite, forming serpentine group minerals and magnetite, as represented in equations (1-3):

3Fe2SiO4 + 2H2O → 2Fe3O4 + 3SiO2 + 2H2 (fayalite + water → magnetite + aqueous silica + hydrogen)   (1)

3Mg2SiO4 + SiO2 + 4H2O → 2Mg3Si2O5(OH)4 (forsterite + aqueous silica → serpentine)   (2)

2Mg2SiO4 + 3H2O → Mg3Si2O5(OH)4 + Mg(OH)2 (forsterite + water → serpentine + brucite)   (3)

The zone is affected by systems of faults with orientations between 250º and 285º and dips within 65º-90º; there also exist transverse fractures with little development along their length. All of these provoke displacements, generally of not more than 0.2 m. Generalised Method of Hoek and Brown This criterion was obtained through a best-fit curve of experimental rock failure data plotted in the principal stress plane σ1-σ3. As one of the few techniques available for the estimation of the strength of a rock mass from geological data, it is based on the assumption that the rock mass consists of a sufficient number of joint sets (at least three), such that the rock mass behaves as an isotropic material. Such rock masses are interlocked, but the interlocking level is relatively low, as joints are persistent and therefore sliding on block boundaries dominates failure, with some rotation of the intact rock pieces (blocks). This criterion is often used for analysis in rock mechanics (Cartaya and Blanco, 2000; Bahrania and Kaiser, 2013). Hoek and Brown (2002) proposed the following expression (4) for the determination of rock mass strength:

σ'1 = σ'3 + σci (mb σ'3/σci + s)^a   (4)

where σ'1 and σ'3 are the major and minor principal effective stresses, σci is the compressive strength of the intact rock, s and a are constants which depend on the rock mass characteristics, mb is a reduced value of the material constant, and mi is the constant of the intact rock, which is determined by statistical analysis of triaxial values of principal stresses or through a chart (Hoek, 2007).
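As a direct transcription of expression (4), the envelope can be evaluated numerically; units are MPa, and the function name is illustrative.

```python
def hb_sigma1(sigma3, sigma_ci, mb, s, a):
    """Generalized Hoek-Brown major principal stress at failure, Eq. (4)."""
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a
```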
The values of mi and σci are obtained by a statistical fit to peak strength data within a confinement range of 0 to 0.5σci. The empirical constants mb and s are related, in a general sense, to the angle of internal friction of the rock mass and to the rock mass cohesive strength respectively, while a controls the curvature of the failure envelope. The parameter a is typically near 0.5 for high GSI values (>55) and reaches 0.6 for extremely poor ground (Han et al., 2012; Bahrania and Kaiser, 2013). Initially, this criterion was developed for the analysis of fractured but unaltered rock masses with resistant, hard intact rock, supposing that the blocks of rock are interlocked and that the strength of the rock mass depends on the strength of the discontinuities (Hoek and Brown, 1980). This failure criterion is therefore valid for isotropic rock masses, and it considers the factors that determine the large-scale failure of a rocky medium: the non-linearity above a certain stress level, the influence of the rock type, the relation between the compressive and tensile strengths, the reduction of the friction angle with increasing confining stress, etc. The strength of a jointed rock mass depends on the properties of the intact rock pieces and also upon the freedom of these pieces to slide and rotate under different stress conditions. This freedom is controlled by the geometrical shape of the intact rock pieces as well as by the condition of the surfaces separating the pieces. Angular rock pieces with clean, rough discontinuity surfaces will result in a much stronger rock mass than one which contains rounded particles surrounded by weathered and altered material (Bahrania and Kaiser, 2013). The quantitative value of the Geological Strength Index (GSI) was introduced by Hoek (1994, 2007), through which the quality of the rock mass is estimated as a function of the level and characteristics of fracturing of the rock mass, its geological structure, block sizes, and the condition of the discontinuities. Sonmez and Ulusay (1999, 2002) amended this by introducing a chart which includes a structure rating, SR, based on volumetric discontinuity frequency, to describe the rock mass structure, and a surface condition rating, SCR, estimated from roughness, weathering, and infilling conditions, to describe the discontinuity surface conditions (Zhang, 2005; Shen et al., 2013). The uniaxial compressive strength, σc, and the tensile strength, σt, of the rock mass are estimated by equations (8) and (9) respectively:

σc = σci s^a   (8)

σt = −s σci / mb   (9)

METHODOLOGY The geotechnical information came from the well logging data obtained during the drilling process, based on visual inspection of the samples brought to the surface (geological logs) and validated by the physical measurements made by instruments lowered into the hole (geophysical logs). The well logging record of the rock mass, with a description of the lithology, is presented in Table 1. Plate 1 shows a pictorial view of the Oro Descanso-Placetas, Cuba underground mine deposit. The mine was lithologically zoned into three major rock types, namely massive serpentinite, sheared serpentinite, and gabbros. The density, humidity, and compressive strength tests of the intact rocks were carried out using 20 to 30 core samples of 54 mm diameter. The tests were performed at three laboratories (the Geominera Mining Company, the Hidráulicos Company, and the Recursos Company) in Santa Clara, Cuba. The average values of these properties were estimated by the Student's t statistical method at a probability of 0.95. The quantitative value of GSI was estimated using Sonmez and Ulusay's chart (Sonmez and Ulusay, 1999, 2002). Likewise, the values of mb, s, and a were determined using equations (5)-(7):

mb = mi exp[(GSI − 100)/(28 − 14D)]   (5)

s = exp[(GSI − 100)/(9 − 3D)]   (6)

a = 1/2 + (1/6)[e^(−GSI/15) − e^(−20/3)]   (7)

where D is a factor which depends on the level of disturbance by blasting and stress relaxation. The constants 28 and 9 in Eqs. (5) and (6) are called the degradation constants, as they control the reduction rate of mb and s as a function of GSI. The values of D, based on the disturbance level (during blasting at the Oro Descanso Mine), and the values of mi for massive serpentinite, sheared serpentinite, and gabbros were estimated using Hoek's chart (Hoek, 2007). The data were processed with the aid of the RocLab 1.0 (2004) computer program, using the data values of the unconfined compressive strength of the intact rock, mi, GSI, and D; from this analysis the estimated values of the rock mass strength and its strength parameters were obtained, including the graphical plots of the principal stresses (σ1-σ3), the Hoek-Brown constants mb, s, and a, and the cohesion and friction angle from the Mohr-Coulomb criterion (Figures 2 to 4).
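The parameter calculations of equations (5)-(9) are straightforward to reproduce; the sketch below mirrors what RocLab computes, with GSI and σci taken from the massive serpentinite case, while mi and D are placeholders to be read from Hoek's charts and the blast-damage assessment, not values reported in this study.

```python
import math

def hoek_brown_params(gsi, mi, d, sigma_ci):
    """Rock mass constants and strengths from Eqs. (5)-(9); sigma_ci in MPa."""
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))             # Eq. (5)
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))                     # Eq. (6)
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0   # Eq. (7)
    sigma_c = sigma_ci * s ** a                                        # Eq. (8)
    sigma_t = -s * sigma_ci / mb                                       # Eq. (9)
    return mb, s, a, sigma_c, sigma_t

# Massive serpentinite: GSI = 60, sigma_ci = 31.97 MPa; mi = 25 and D = 0.8
# are placeholder chart values.
mb, s, a, sigma_c, sigma_t = hoek_brown_params(60, 25, 0.8, 31.97)

# The brittle/ductile check discussed later compares the Eq. (4) envelope
# with Mogi's line sigma1 = 3.4 * sigma3: where the envelope lies above the
# line, a brittle failure mode is expected.
sigma3 = 0.1 * 31.97
envelope = sigma3 + 31.97 * (mb * sigma3 / 31.97 + s) ** a
brittle = envelope > 3.4 * sigma3
```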
RESULTS Table 2 shows the values of the compressive strength of the saturated intact rocks of the rock mass, together with the values of density and humidity. The values of mi, D, surface structure, roughness, weathering, and filling materials were determined based on the data obtained from the field, using the charts of Sonmez and Ulusay (1999, 2002), Zhang (2005), and Hoek (2007). The GSI values were determined for each rock type (Table 3). DISCUSSIONS The strength of a rock mass depends on various factors, such as the mineral components of the intact rock, the strength of the internal bonding forces of the rock grains, the cohesive force, the degree of discontinuity and its binding force, and the water content. This is why the values of the compressive strength (σci) of massive serpentinite, sheared serpentinite, and gabbros (Table 2) were reduced from 31.97 MPa, 46.63 MPa, and 54.96 MPa to 1.733 MPa, 0.464 MPa, and 10.354 MPa respectively in the rock mass (Figures 2-4). In addition, the deposit zone has been affected by faulting, past tectonic activity, and the constant minor seismic disturbances which occur in the central provinces of Cuba. The humidity of the mine is between 0.33 and 0.44, partly because the water inflow is low. Knowledge of the GSI (Table 3), the local strength of the rock around the excavations, and the global strength and modulus of deformation of the rock mass (Figures 2-4) will aid in the design of the excavation and in the selection of a proper support method and mining system, so as to avoid the danger of small pieces of rock falling during mining activity. Generally, the values of mb range from 0.98 to 8.88, s from 1.8×10⁻⁴ to 4.34×10⁻², and a from 0.501 to 0.514; these variations of the material constants of the rock mass are due to the different lithologies in the rock mass, and these values are of great importance in the numerical analysis of the rock mass, which is beyond the scope of this paper. The Mogi line (green) in Figures 2-4, defined by the principal stress ratio σ1/σ3 = 3.4, generally lies below the principal stress failure envelope (red); this means that the failure mode that will occur in the Oro Descanso rock mass will be of the brittle type. CONCLUSIONS Applying the generalized empirical Hoek-Brown criterion, the local strength, global strength, and modulus of deformation of the Oro Descanso underground rock mass were determined, which can serve as effective data for the design of the mine supports, the excavation design, and the selection of the mining system. The material constants of the mine rock mass were also determined, and the equations that relate the principal stresses were established. The expected failure mode is brittle; hence, the needed supports are designed and recommended.
Fig 1: Gold-Projects-Cuba-HSBC-Nov-18-2016 (Source: Sierra Geological Consultants Inc.)
Table 1: Summary of Well Log Record of Rock Mass.
Table 2: Physical Mechanical Properties of Massive Serpentinite.
Tell Me Who Your Friends Are: Using Content Sharing Behavior for News Source Veracity Detection Stopping the malicious spread and production of false and misleading news has become a top priority for researchers. Due to this prevalence, many automated methods for detecting low-quality information have been introduced. The majority of these methods have used article-level features, such as writing style, to detect veracity. While writing style models have been shown to work well in lab settings, there are concerns about generalizability and robustness. In this paper, we begin to address these concerns by proposing a novel and robust news veracity detection model that uses the content sharing behavior of news sources, formulated as a network. We represent these content sharing networks (CSNs) using a deep-walk-based method for embedding graphs that accounts for similarity in both the network space and the article text space. We show that state-of-the-art writing style and CSN features make diverse mistakes when predicting, meaning that they both play different roles in the classification task. Moreover, we show that the addition of CSN features increases the accuracy of writing style models, boosting accuracy by as much as 14% when using Random Forests. Similarly, we show that the combination of hand-crafted article-level features and CSN features is robust to concept drift, performing consistently well over a 10-month time frame. INTRODUCTION The spread of false and misleading news is damaging to society [26, 27]. Its harms can be felt across many parts of society, including politics [2], education [3], and health [32, 38, 40]. Due to this cost, limiting false and misleading news has become a concern for both researchers and practitioners. Given the scale of this problem, many researchers have built classifiers to automatically assess the veracity of news [25]. The vast majority of these newly developed classifiers are based on features of the text in news articles or claims [5, 20, 33, 34]. These text-based methods have been shown to work well in lab settings because unreliable news is often written in a different style than reliable news, employing many distinctive linguistic and grammatical markers. These differences are often attributed to various factors, such as the use of moral-emotional language to gain engagement [10]. Despite this success, there are still concerns about the robustness of these methods. Specifically, text-based methods are prone to performance degradation over time (often called concept drift) due to the dynamic attributes of the news cycle [22]. Furthermore, text-based models may be dependent on language or over-fit to specific domains or topics, making them less generalizable. In this paper, we present an alternative and complementary method for detecting unreliable information based on the behavior of news producers. Specifically, past work has documented that many news producers copy news stories from each other. In essence, copying is a type of amplification, making a story available to the readers of a specific source. In mainstream media, this has been attributed to meeting the demand of all-day news consumption [8]. However, this behavior is very common in alternative media as well, with different motivations, including generating engagement at a low cost, increasing the perceived credibility of stories, and increasing their algorithmic visibility on social media platforms.
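Before formalizing this behavior as a network, it helps to see how copying can be detected and recorded. The following is a hedged sketch of one plausible construction, with sources as nodes and a directed edge from the copying source to the apparent originator; the similarity threshold, the near-duplicate test, and all helper names are assumptions, not details from this paper.

```python
import itertools
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_csn(articles, threshold=0.9):
    """articles: list of dicts with 'source', 'text', and 'timestamp' keys.
    Adds an edge copier -> origin for each cross-source near-duplicate pair."""
    tfidf = TfidfVectorizer(max_features=50000).fit_transform(
        [a["text"] for a in articles])
    sims = cosine_similarity(tfidf)               # O(n^2); fine for a sketch
    g = nx.DiGraph()
    for i, j in itertools.combinations(range(len(articles)), 2):
        ai, aj = articles[i], articles[j]
        if ai["source"] != aj["source"] and sims[i, j] >= threshold:
            first, second = sorted((ai, aj), key=lambda a: a["timestamp"])
            g.add_edge(second["source"], first["source"])
    return g

# Simple node-level network features (one of the three CSN feature sets
# introduced below uses well-known node properties like these):
# pr = nx.pagerank(g)
# feats = {n: (g.in_degree(n), g.out_degree(n), pr[n]) for n in g}
```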
It has been shown that when this behavior is formulated as a network, the community structures found in the network correspond to different types of news sources in the media ecosystem, including mainstream media, hyper-partisan media, and more [21,41]. Building on the pervasive nature of content sharing among news producers, we propose a new set of source veracity features using content sharing networks, or 'CSN' for short. Our hypothesis is that the network location of sources in the CSN can provide a strong signal of source reliability. We introduce three feature sets using CSNs, one set based on well-known network properties of nodes and two sets using network embedding methods.

To show the effectiveness of CSN features, we conduct a thorough study comparing them to previously studied text-based features. Furthermore, we test the stability of different models over time. Through our comprehensive study, we show that CSN information alone outperforms the previously used text-based methods. Despite the high accuracy of CSN-only models, the combination of CSN information and text information works best, increasing accuracy by as much as 14.7% over text-only models. We also show that the combination of CSN models and text-based models provides stable performance over time. Additionally, we find that text and CSN models are highly complementary: they make different types of errors in our data set. The text models make fewer errors when predicting reliable sources, and the CSN models make fewer errors when predicting unreliable sources.

In short, using the content sharing behavior of news sources in veracity detection leads to highly accurate models. By adding complementary information to existing text models, we improve the overall performance and enhance model robustness.

RELATED WORK

There is a large body of work on news veracity detection, particularly focused on political news articles since 2016 [25]. These works have used a variety of machine learning techniques, including binary supervised models [4,12,14,19,22], multi-class supervised models [5], semi-supervised models [1,18], unsupervised models [23], and various Neural Network models [15,28,29,39]. Some works have also framed the problem as a ranking problem, rather than a classification problem [6,46].

The primary features of these detection methods are based on the article text, many of which are hand-crafted feature sets. These text features range from very specific, such as the bias and emotion in an article, to very generic, such as the term frequency within an article. In general, these types of features have been shown to work well and can be used to explain algorithm decisions, but they are prone to sub-optimal performance over time and across domains. Theoretically, they are also prone to text manipulation by malicious sources [22], although this behavior has not yet been shown in real life.

One method of strengthening these text-based models is to augment them with features unrelated to the content of the article. To some degree, this has been done. Baly et al. add the presence of a Wikipedia page and Twitter account for each source [4] to article-related feature models. Similarly, Li and Goldwasser use both text features and Twitter social features to detect veracity [28]. Ye and Skiena add the number of advertisements on a page and the popularity of the source to text-based ranking models [46].
Castelo et al. add various web markup features such as the presence of an article author, the number of advertisements, and the number of images [12]. However, with the exception of the number of advertisements, these additional features can be easily manipulated at little cost to the malicious news producer.

Mixing text features with source-level features has also been done in false claim and rumor detection (rather than news article or news source veracity detection). Many studies of false claims on Twitter utilize features of the users who spread the claim, such as the number of followers, number of friends, age of profile, or temporal patterns of the user posts [13,37,45]. Other claim veracity works have used popularity as a feature [33]. Again, these additional non-text features are shallow and easy to manipulate.

In this paper, we address this gap by introducing a new source-level, behavioral feature for the news source veracity prediction task, namely content sharing behavior. This behavior is costly to manipulate and highly consistent over time, which lends itself to building robust prediction models for the task. This cost stems from the additional effort malicious news producers would need to exert to produce independent false content by not copying content from their peers. Further discussion of network construction and the intuition behind using content sharing networks as signals of veracity can be found in Section 4.

DATA

In this work, given a news article from an unknown source, our goal is to predict if the source of the article is reliable or unreliable. To this end, we extract news article data from the NELA-GT-2018 data set [31]. The NELA-GT-2018 data set is a political news data set that contains 713K articles from 194 sources, containing all articles by these sources from February 1st, 2018 to November 30th, 2018. These sources come from a wide range of mainstream and alternative media, including many conspiracy-spreading news sources and hyper-partisan blogs.

Included in the NELA-GT-2018 data set are source-level labels of credibility from several assessment platforms. Two of the assessment platforms are used for labeling sources in this paper: Open Sources and NewsGuard. Open Sources ratings have been used in many other studies. It uses a panel of experts to mark sources as one or more of these 13 categories: reliable, blog, clickbait, rumor, fake, unreliable, biased, conspiracy, hate speech, junk science, political, satire, and state news. The criteria for deciding source labels on Open Sources are available on their website. NewsGuard is an independent journalistic organization that similarly uses a group of experts to score news sources based on credibility and transparency using a stringently developed rating process. Specifically, NewsGuard rates sources on the following criteria, with each criterion having an assigned weight:

(1) Does not repeatedly publish false content (22 points)
(2) Gathers and presents information responsibly (18 points)
(3) Regularly corrects or clarifies errors (12.5 points)
(4) Handles the difference between news and opinion responsibly (12.5 points)
(5) Avoids deceptive headlines (10 points)
(6) Website discloses ownership and financing (7.5 points)
(7) Clearly labels advertising (7.5 points)
(8) Reveals who's in charge, including any possible conflicts of interest (5 points)
(9) Provides information about content creators (5 points)

Using these two sets of source-level labels, we create two classes of news, reliable and unreliable, as follows. We extract all articles from sources that have a credibility score above 90 according to NewsGuard to create our reliable class, and articles from sources that have a credibility score below 40 or that are marked as unreliable/conspiracy/fake by Open Sources to create our unreliable class. Often, sources with a score below 40 by NewsGuard are also marked as unreliable/conspiracy/fake in Open Sources. To obtain a score above 90 by NewsGuard, a source would only be allowed to miss one of the last four criteria (criteria 6, 7, 8, or 9). Based on this labeling method, we extract 184,736 articles from 52 sources, where 25 sources are marked as reliable and 27 are marked as unreliable. These articles cover 10 months in 2018 (February through November). The sources in each class can be found in Table 1. A minimal sketch of this labeling rule is shown below.
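In this sketch, the field names `newsguard_score` and `opensources_tags` are assumptions for illustration, not taken from any released code.

```python
# Hypothetical labeling rule: NewsGuard score above 90 -> reliable;
# NewsGuard score below 40, or an Open Sources tag of unreliable/
# conspiracy/fake -> unreliable; anything else is left unlabeled.
UNRELIABLE_TAGS = {"unreliable", "conspiracy", "fake"}

def label_source(newsguard_score, opensources_tags):
    if newsguard_score is not None and newsguard_score > 90:
        return "reliable"
    if (newsguard_score is not None and newsguard_score < 40) \
            or UNRELIABLE_TAGS & set(opensources_tags):
        return "unreliable"
    return None  # source is excluded from the labeled data
```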
USING CONTENT SHARING NETWORKS AS A SIGNAL OF RELIABILITY

Several recent studies have shown that both mainstream and alternative news sources often share (or copy) articles from each other, either verbatim or in part [21,42]. The motivation behind this content copying can differ greatly depending on the source. Mainstream sources often copy articles from news-wire services to meet demand or "break" news in a timely manner. Conspiracy sources may employ this tactic with malicious intent to spread false content, to create uncertainty surrounding an event by amplifying alternative narratives, or simply to make money from clicks [11,21,42]. This behavior may also indicate coordination between disinformation producers.

This article sharing behavior can be formulated as a network where each node is a news source and each directed edge u → v has weight proportional to the number of articles in v that are copied from u. This network captures various important aspects of the news ecosystem: communities of similar media sources, hubs of conspiracy news production, and bridges between the mainstream and alternative media. It is likely that these network structures, particularly community membership, provide a strong signal of veracity. It is easy to imagine that an unknown news producer, which copies articles from a well-known conspiracy news producer, is also a source of conspiracy news. This signal can be extended to more indirect cases where unknown news sources fall on a path between two known news sources, or sources that copy from both reliable and unreliable sources can be labeled as mixed veracity. It is this rich structure of information that we wish to take advantage of in detecting articles from reliable and unreliable sources.

Network Construction

Using the whole NELA-GT-2018 data set (rather than our extracted labeled data set described in Section 3), we follow the process described in [21] to create a near-verbatim content sharing network (CSN) of news sources. Specifically, we compute a TF-IDF matrix of all articles in the data set and compute the cosine similarity between each article vector pair (given that each article comes from a different news source). To reduce the complexity of this process, we use a sliding 5-day window of articles. For each pair of article vectors that have a cosine similarity greater than or equal to 0.85, we extract them and order them by their timestamps. This is the same cosine similarity threshold used in both [21] and [42]. This process creates a directed graph G = (V, E), where V is the set of news sources and E is the set of directed weighted edges representing articles shared. Edges are directed towards publishers that copied articles (inferred from the timestamps). A sketch of this construction is given below.
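The sketch builds a CSN with scikit-learn and NetworkX under stated assumptions: each article is a dict with `text`, `source`, and `date` fields, and pairs are compared naively rather than with the paper's exact windowed pipeline.

```python
# A minimal sketch of near-verbatim CSN construction, not the authors' code.
from datetime import timedelta

import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_csn(articles, threshold=0.85, window_days=5):
    tfidf = TfidfVectorizer().fit_transform([a["text"] for a in articles])
    graph = nx.DiGraph()
    for i in range(len(articles)):
        for j in range(i + 1, len(articles)):
            a, b = articles[i], articles[j]
            if a["source"] == b["source"]:
                continue  # only compare articles from different sources
            if abs(a["date"] - b["date"]) > timedelta(days=window_days):
                continue  # only compare articles within the sliding window
            if cosine_similarity(tfidf[i], tfidf[j])[0, 0] >= threshold:
                first, second = sorted((a, b), key=lambda x: x["date"])
                # Edge points toward the publisher that copied the article.
                u, v = first["source"], second["source"]
                w = graph.get_edge_data(u, v, {"weight": 0})["weight"]
                graph.add_edge(u, v, weight=w + 1)
    return graph
```

Edge weights here are raw copy counts; the per-source normalization described next can be applied as a post-processing pass over the edges.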
We normalize the weight of each edge in the network by the total number of articles published by the copying source. For example, if USA Today publishes 1000 articles and copies 100 of those articles from Reuters, the edge between Reuters and USA Today would have weight 0.1 and be directed towards USA Today.

We show a visualization of this constructed network in Figure 1. We built this visualization using Gephi [7] and used the Newman Spectral Method for directed modularity to label community membership [30]. Specifically, we use the default parameters from Zhiya Zuo's modularity maximization Python package. This network includes the 52 sources with known labels used in this study as well as 88 sources with no labels. The presence of both labeled and unlabeled sources provides us with a rich network structure. In addition, the community structures in the network (as shown by the colors in Figure 1) would likely be lost if only labeled data was used. We find that the community structure looks very similar to the structure displayed in [21], as we use the same dataset and the same community detection method. To provide some intuition of where labeled sources are placed in the network, we show the number of sources from each class in each network community in Figure 2. We also show the degree distributions of labeled sources in Figure 3.

We choose to focus on near-verbatim content sharing networks in this paper due to the well-studied properties of these networks. Partial content sharing networks can also provide useful additional information; however, these networks are not yet studied in the literature. Hence, we leave the study of partial sharing behavior to future work.

Figure 1: Visualization of the CSN using Gephi [7]. Colors represent communities using directed modularity. Edges are directed, where the outdegree of a node is how many news sources copy articles from that node. The size of each node is based on outdegree. Just as shown in [21], each community contains sources from distant parts of the media landscape, often grouping sources of similar veracity. In particular, we can see many of our unreliable sources in the magenta and green communities, while our reliable sources fall mostly within the blue community.

The high separation between reliable and unreliable labeled sources in Figure 1 supports the intuition that the CSN can be used to approximate veracity.

Network Representation for Classification

Hand-crafted Network Features (HCNF). One way to represent sources in the CSN is to craft a set of network features for each source. To do this, we choose several standard network measures, as well as more community-focused features. In total we compute 11 features, which include:

(1) Community - What community the source is in, as determined by directed modularity [30].
(5) Betweenness Centrality - The number of shortest paths that pass through the source.
(6) Eigenvector Centrality - The centrality of a source based on the centrality of its neighboring sources.
(7) Community Core - Whether the source is a member of the k-core of its community, where the k-core is a maximal subgraph that contains nodes of degree k or more. We compute the core with the largest degree.

Out-Degree represents how much a source is copied from. As discussed in the literature [21], unreliable sources generally copy more articles verbatim than reliable sources. A sketch of computing a few of these measures appears below.
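The sketch below computes several of these measures with NetworkX over the graph built earlier. As a stand-in for the Newman Spectral Method, it uses NetworkX's greedy modularity communities on an undirected copy of the graph, and it simplifies Community Core to membership in the graph-wide maximum k-core; both substitutions are assumptions for illustration.

```python
# A sketch of hand-crafted network features; simplified, not the paper's code.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def hcnf_features(graph):
    communities = greedy_modularity_communities(graph.to_undirected())
    community_of = {n: i for i, c in enumerate(communities) for n in c}
    betweenness = nx.betweenness_centrality(graph)
    eigenvector = nx.eigenvector_centrality(graph, max_iter=1000)
    core_number = nx.core_number(nx.Graph(graph))  # k-core on undirected copy
    max_core = max(core_number.values())
    return {
        node: {
            "community": community_of[node],
            "out_degree": graph.out_degree(node, weight="weight"),
            "betweenness": betweenness[node],
            "eigenvector": eigenvector[node],
            "community_core": int(core_number[node] == max_core),
        }
        for node in graph
    }
```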
Node2Vec (N2V). Another, likely more complete, method to represent the CSN is network embedding. Specifically, we use the Node2Vec [17] network embedding method. As with word embedding, Node2Vec uses the skipgram model and transforms the sparse adjacency matrices of networks into a dense vector representation of nodes. This representation aims to preserve network structure and node neighborhood, clustering together those nodes with similar functionality and structure in the network, such as hubs and peripheral nodes. Additionally, the dense vector representation captures latent similarity relations within the network. Node2Vec uses the return parameter p and the in-out parameter q to control the breadth and depth of the random walks on the network used to generate the embedding. In this work, we use p = 0.5, q = 0.5 and set the vector dimension to 40. The outputs are vectors representing the network nodes (news sources); we refer to these vectors as N2V features. Note that we embed all sources in our dataset, including those with no labels, to fully represent the CSN. Note, we remove all articles used in the CSN construction from our training and test data in later experiments in order to avoid data leakage. A sketch of this embedding step is shown below.
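As one concrete way to produce these vectors, the sketch below uses the open-source node2vec Python package (https://pypi.org/project/node2vec/) on the graph built earlier; the walk length, number of walks, and skipgram window are illustrative choices, since the text only specifies p, q, and the vector dimension for this step.

```python
# A sketch of the N2V embedding step; hyperparameters other than
# p, q, and dimensions are placeholders.
from node2vec import Node2Vec

node2vec = Node2Vec(graph, dimensions=40, walk_length=80, num_walks=10,
                    p=0.5, q=0.5, weight_key="weight")
model = node2vec.fit(window=10, min_count=1)
n2v_features = {source: model.wv[str(source)] for source in graph.nodes}
```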
NetworkText2Vec (NT2V). Naturally, the CSN embedding can only represent sources that share content. The sharing behavior may be rare and not present in specific settings. Furthermore, not all sources may share content verbatim, especially if they are new or cover different topics. For example, a source may focus on breaking news, which lends itself to content sharing, while another source may focus on investigative pieces, which may not lend itself to content sharing. Although these sources can be represented as completely disconnected nodes in the network, embedding disconnected nodes with N2V would give us no relevant information with respect to node similarity. To fill this gap, we can use the similarity of sources with respect to the text that they publish, using text as side information in the embedding. The problem of attributed network embedding was addressed by Yang et al. [44], who proposed TADW, a method that performs matrix factorization using both network and text features as input. One limitation of this method is that it requires network and text representations of every source being embedded. In our case, this is a major drawback, as CSNs can have missing edges due to a lack of content sharing information.

To mitigate this issue, we propose a method based on the multi-scale attributed network embedding of [36], which we refer to as NT2V. This method takes as input both the CSN and the text attributes of news articles. The text attributes are a representation of a source given by the average of its word embedding vectors. Using this information, NT2V combines two random walks based on the similarity of nodes (news sources): one over the network, as in Node2Vec, and the second one over the text attributes. More formally, let s_i be the i-th source visited in a random walk over the text corpus; the transition probabilities from s_i to s_{i+1} are obtained from the cosine similarity between s_i and its k-nearest neighbors, normalized by the sum of the weights of the edges leaving s_i. Sources with higher cosine similarities have higher chances of being picked in the random walk, thus appearing more often in contexts. We set a lower-bound cutoff similarity of 0.5 to prevent selecting sources that are significantly dissimilar. Intuitively, the process of context generation is carried out by interchanging random walks over the network space and the text space. At each random walk, we decide with probability t that the walk will happen over the text space, or with probability 1 − t that it will happen over the network. If the network is chosen, we perform a random walk entirely over the network, as with Node2Vec; otherwise, the random walk is entirely over the text corpus space. We generate contexts for each source. Once contexts are generated, they are used as the input to a skipgram model.

In addition to the input parameters p and q of Node2Vec, NT2V requires the parameter t, which controls the likelihood of performing a walk over the text, and the parameter k, which controls the number of nearest neighbors to consider during the text corpus walk. The output is a set of vector representations of sources based on the generated contexts. We set the output vector size to 40, the number of walks to 1000, and the walk length to 80, and we tune the parameters p, q, and t by performing a grid search over the interval [0.2, 0.8] with a step size of 0.1. We select the model that yields the best classification accuracy on a validation set. The final parameters are p = 0.8, q = 0.5, t = 0.4. Code for NT2V and additional documentation are publicly available.

We uniformly sample 20% of the articles for each source to use in NT2V; the sampled articles are used exclusively to compute the source representation and are not used in any other scenario. This is done in order to avoid data leakage from the source representation into the article-level experiments. A sketch of the interleaved context generation is given below.
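The sketch below illustrates the interleaved context generation under stated assumptions: `text_vectors` maps each source to its averaged word-embedding vector, and `network_walk` stands in for a Node2Vec-style walk over the CSN. Both names are placeholders, and this is not the authors' released implementation.

```python
# A sketch of NT2V context generation: with probability t, walk over the
# text space (k-nearest sources by cosine similarity), else over the CSN.
import random

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def text_step(source, text_vectors, k=5, cutoff=0.5):
    # Rank the other sources by cosine similarity, keep the top k that
    # clear the cutoff, then sample proportionally to similarity.
    sims = sorted(((cosine(text_vectors[source], vec), other)
                   for other, vec in text_vectors.items() if other != source),
                  reverse=True)[:k]
    sims = [(s, o) for s, o in sims if s >= cutoff]
    if not sims:
        return source
    return random.choices([o for _, o in sims],
                          weights=[s for s, _ in sims])[0]

def generate_contexts(sources, text_vectors, network_walk,
                      t=0.4, num_walks=1000, walk_length=80):
    contexts = []
    for source in sources:
        for _ in range(num_walks):
            if random.random() < t:  # walk entirely over the text space
                walk = [source]
                while len(walk) < walk_length:
                    walk.append(text_step(walk[-1], text_vectors))
            else:  # walk entirely over the network, as with Node2Vec
                walk = network_walk(source, walk_length)
            contexts.append(walk)
    return contexts
```

The generated contexts can then be fed to a standard skipgram implementation (for example, gensim's Word2Vec) to obtain the 40-dimensional source vectors.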
BASELINE TEXT MODELS

To compare our CSN feature models to state-of-the-art text-based methods, we compute several text feature sets and discuss the details of each below.

NELA. NELA is a hand-crafted text feature set used in whole or in part in several news veracity studies [4-6, 14, 19, 22], with code available online. This feature set can be divided into five different groups:

(1) Style - This group represents the general writing style of an article, including the parts-of-speech used, punctuation used, and capitalization used.
(4) Affect - This group captures the sentiment and emotion of text using two well-known works in text processing: LIWC [43] and VADER [24]. LIWC is a gold-standard, lexicon-based method for discovering various social and psychological traits in text. These include various types of emotion, such as anger, anxiety, affect, and swear words. VADER is a state-of-the-art sentiment detection tool that provides measures of positive, negative, and neutral emotion in text.
(5) Moral - This feature group is a lexicon-based method that measures morality in text on the basis of Moral Foundation Theory [16]. Examples of these features include fairness, authority, and care.

In total, NELA contains 194 features, computed independently on the body text and title text of an article.

FastText (FT). Another method we can use to capture textual differences between news articles is word embedding. Word embedding features have been used in only a few news veracity detection studies so far [39] and are still under-explored. The potential advantage of word embedding features over hand-crafted feature sets, like NELA, is that features can be automatically captured regardless of language and domain. The disadvantage is that we cannot control the specific concepts captured in the text, which may lead to worse performance and robustness. In this work, we use the wiki-news-300d-1M pre-trained FastText model [9] (https://fasttext.cc/docs/en/english-vectors.html) to obtain the representations for 184,736 news articles. The model was pre-trained on Wikipedia and news data, contains 1 million words, and has a vector dimension of 300. To obtain the representation of an entire news article, we average the vectors of all the words in the article's title and content, arriving at a final representation given by a 300-dimension article vector, which we refer to as FT features. Note, we also experiment with an LSTM sequence classifier and BERT embedding vectors as baseline text models, but due to the similarity of results across the text models and space restrictions, we do not display those results. A sketch of the FT article representation is shown below.
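The sketch below shows one way to compute such an article vector with gensim, assuming the pre-trained vectors have been downloaded to `wiki-news-300d-1M.vec`; the simple whitespace tokenization is an illustrative choice.

```python
# A sketch of the FT article representation: average the FastText vectors
# of all words appearing in the article's title and body.
import numpy as np
from gensim.models import KeyedVectors

ft = KeyedVectors.load_word2vec_format("wiki-news-300d-1M.vec")

def article_vector(title, body, dim=300):
    words = (title + " " + body).lower().split()
    vectors = [ft[w] for w in words if w in ft]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)
```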
CSN features improve the accuracy of text-based models

Again, the goal of our classification model is to predict if the source of a news article is reliable or unreliable, given a news article and its source name as input. To this end, we train Random Forest classifiers on 80% of the sources and test on 20% of the sources. For each source, we uniformly sample 1000 articles before splitting into train and test sets to ensure each test set is balanced. Note, we are simulating a setting in which the classifier is given an individual news article from an unknown source as input and uses both article-related features and source-related features to predict. If a source is selected for testing, all 1000 of its sampled articles are removed from training. We repeat this experiment 50 times and average the performance metrics. We also repeat these experiments using a fully-connected Neural Network classifier, but find little to no improvement over the Random Forest classifier; hence, we only display the results using Random Forest.

To assess how much CSN features and text features contribute to distinguishing articles from reliable and unreliable sources, we test each individual feature group as well as combinations of article-level text features with their respective source-level CSN features. We combine text and CSN features in two ways: (1) we concatenate text and CSN vectors (represented with a plus sign, e.g. NELA+N2V) and predict using a single binary classifier, or (2) we use a feature ensemble of two binary classifiers, one trained on text features and the other trained on CSN features, using the sigmoid function to predict a probability that the given input belongs to class 0 (reliable). Those probabilities are then combined using soft voting.

Table 2 shows the classification results for all feature group combinations and classification algorithms. As shown in Table 2, both the hand-crafted text model (NELA) and the word embedding model (FT) are improved by the CSN features (N2V and NT2V). These improvements are significant, increasing accuracy by as much as 20%. Based on overall accuracy, the best model is FT+N2V, while the feature ensemble using NELA shows the best F1 and Recall scores. While the best performing models all use combinations of the CSN features and the text features, we see that the CSN models alone also perform well. In fact, N2V has the best precision score among all models and shows only a 3% decrease in accuracy from the best combination model, demonstrating the strong signal provided by the CSN.

Text models and CSN models often make different mistakes

It is clear that CSN features capture some signal of veracity and improve upon the text-based models. However, do CSN models make the same mistakes as the traditionally used text models? To test this, we use two methods. First, we compute the conditional probabilities that a feature set correctly classifies the articles given that another feature set has failed to classify them, shown in Table 3. More precisely, given feature sets A and B, we compute P(B = 1 | A = 0) as the conditional accuracy, where B = 1 is the event where feature set B correctly classifies an article, and A = 0 is the event where feature set A does not correctly classify the same article. The probabilities were computed using a classification model trained on a leave-one-source-out subset of articles. Specifically, for each source s, let S_s be the articles from s in the data D. We train a Random Forest classifier on D − S_s, and test the classification on S_s. The conditional accuracy indicates how many of the mistakes of A are corrected by B, and it is given by P(B = 1 | A = 0) = P(B = 1, A = 0) / P(A = 0). Second, we examine the distribution of errors per class for each feature group, shown in Table 4. Simply put, using the leave-one-source-out method, we calculate what proportion of the wrong classifications fall in each class. This analysis shows us which feature groups are better or worse at classifying one class or the other. A sketch of the conditional accuracy computation is given below.
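Given boolean arrays marking which articles each feature set classified correctly (collected from the leave-one-source-out runs), the conditional accuracy reduces to a few lines; this is a sketch of the definition above, not the authors' evaluation code.

```python
# A sketch of the conditional accuracy P(B = 1 | A = 0): the share of
# A's mistakes that B classifies correctly.
import numpy as np

def conditional_accuracy(correct_a, correct_b):
    a_wrong = ~np.asarray(correct_a, dtype=bool)
    b_right = np.asarray(correct_b, dtype=bool)
    return (a_wrong & b_right).sum() / a_wrong.sum()
```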
As shown in Table 3, the CSN models (HCNF, N2V, NT2V) make very different mistakes than the text models (NELA, FT), with up to an 83% chance of a CSN model correctly classifying an article that a text model missed. When reversing the probability, we similarly see different mistakes made, with up to a 66% chance of a text model correctly classifying an article that a CSN model missed. When looking at the specific types of mistakes made, we see several consistent cases. Generally, we see the same trend in Table 4. Specifically, we see that both NELA and FT (text models) are better at classifying the reliable class than the unreliable class, while N2V and NT2V (network models) are much better at classifying the unreliable class than the reliable class. Higher conditional probabilities imply a greater distinction between the errors made by one feature group and the other.

Overall, there are very few mistakes by the CSN features, but when they do make mistakes, it is on sources in sparsely labeled areas of the network. For example, in our data set, Reuters and The Guardian are often misclassified by N2V (i.e., purely CSN information). This mistake occurs because articles from both Reuters and The Guardian are often copied by U.K. mainstream sources, which are unlabeled in our data set; it could be mitigated by having more labels in the non-U.S. communities of the network. This sparse label problem is also why we see the CSN models classifying the unreliable class better, as the unreliable sources are more densely clustered together than the reliable sources in the network. We leave explicit tests on the impact of removing and adding nodes/labels in the CSN to future work.

Table 3: Conditional probabilities of mistakes made by each feature set. P(A = 0) is the probability of feature set A making a mistake; P(B = 1 | A = 0) is the conditional accuracy, defined as the probability that feature set B correctly classifies samples given that feature set A failed to do so. The higher the probability, the more dissimilar the mistakes made by each feature set. Each model uses Random Forest. We use bold font to indicate the highest dissimilarity between CSN models and text models and vice versa.

Another interesting case is when both the CSN and text features incorrectly label an article, but the combination of them flips the label. For example, some articles from Business Insider, a reliable news source, are classified in this way. In the CSN space, Business Insider falls in the U.S. mainstream community, but is a peripheral node, which may lead to very few other reliable nodes being sampled in the network embedding process. In the text space, the articles are similar to other mainstream sources in the body, but the titles can sometimes be considered 'clickbait', which is often a trait of unreliable news articles. Hence, each feature model individually may not have enough information to say the article is similar to a reliable source, but together they can correctly label it.

We also note that not all text models are alike. We found that the hand-crafted text features (NELA) and the word representation features (FT) also make dissimilar mistakes. While these mistakes are not as dissimilar as those between the text-based models and the CSN models, they are notably different, with a 33% chance that a mistake made by NELA is correctly classified by FT, and 43% vice versa. However, these differences in mistakes do not seem to be enough to help prediction performance. When qualitatively looking at these differences in mistakes, it is hard to say what specifically drives them.

Figure 4: Classification accuracy over time. The first month of data is used for training the classifiers, which are tested on each subsequent two-week time slice. The combination of text and CSN features provides higher accuracy and stability over time, particularly the combination of FT and NT2V features.

CSN features improve the stability of text-based models over time

In this section, we examine the stability of the performance improvements from network models over time. To test this, we train each classifier on the first month of data and test the classifier on each two-week slice of data moving forward in time. We only test the models on sources that are unused in training and perform this train-test split over 50 runs of 20% of the sources. Again, we ensure that each source is balanced. Note, we also reconstruct the CSN to only include information from the first month of data. This simulates a classifier that is built in February 2018 and left static for the rest of the year. These results are shown in Figure 4. In addition to showing performance stability over time for each model, in Table 5 we show the classification accuracy for each feature group in two scenarios, using a Random Forest classifier: in time and forecast. The in-time test is a prediction test on data from the same time period as the training (i.e., February 2018), while the forecast test is a prediction test on the remaining time period without re-training.

As shown in Figure 4, the addition of NT2V features improves both the overall performance of the model and its consistency over time. For example, for FT, there is at most an accuracy drop of 11% over 10 months (0.61 to 0.50). However, when combined with NT2V, not only is the initial accuracy higher, but the drop is more subtle (0.75 to 0.70). However, not all NT2V combinations remain this stable. Specifically, when NT2V is combined with NELA, we similarly see a boost in overall accuracy, but also a significant initial drop in accuracy from February to March (-8%). However, the model remains very stable after that initial drop. The results in Table 5 show that combining text and network features improves the forecast performance, but this performance increase is not always significant. A sketch of this forecast evaluation is given below.
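The sketch below illustrates the forecast setup: train once on the first month, then score successive two-week slices without retraining. It assumes a pandas DataFrame with feature columns, a `label` column, and a `date` column, and it omits the paper's source-level hold-out and 50-run averaging for brevity.

```python
# A sketch of the forecast evaluation: one static classifier, tested on
# rolling two-week slices of later data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def forecast_accuracy(articles, feature_cols, start="2018-02-01"):
    start = pd.Timestamp(start)
    train_end = start + pd.Timedelta(days=28)
    train = articles[articles["date"] < train_end]
    clf = RandomForestClassifier()
    clf.fit(train[feature_cols], train["label"])
    scores = {}
    slice_start = train_end
    while slice_start < articles["date"].max():
        slice_end = slice_start + pd.Timedelta(days=14)
        mask = (articles["date"] >= slice_start) & (articles["date"] < slice_end)
        test = articles[mask]
        if len(test):
            scores[slice_start] = clf.score(test[feature_cols], test["label"])
        slice_start = slice_end
    return scores
```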
DISCUSSION AND CONCLUSION

In this study, we presented a novel feature set for the detection of articles from unreliable sources, utilizing the rich structure of news content sharing networks. To do this, we used a network embedding method that takes a deep walk approach to sample from both the CSN space and the text space. The addition of the text space to the CSN space in the sampling process makes it possible to find representations of incomplete networks by positioning sources with unknown CSN information close to those with high similarity in the text space. We show that the information provided by embedding CSNs provides a strong signal of reliability and boosts the accuracy of text-based models. We show that text information and CSN information make dissimilar mistakes, illustrating complementary signals between the two types of models. Saliently, these CSN features also stabilize the performance of text-based models over time, performing consistently over a 10-month time frame without retraining. This stabilization is likely due to the fact that the CSN structure remains largely unchanged over time, while text features are vulnerable to recurrent topic changes.

There may be additional advantages to using the CSN embedding model that can be explored in future work. First, both the CSN construction and word embedding are language-agnostic, unlike the hand-crafted text features (NELA). Assuming reliable and unreliable media operate distinctly in other languages and cultures, the NT2V embedding can be used out-of-the-box to detect these differences. In fact, this method could be extended to many other types of information spaces beyond political news, as it is common for sources to amplify their message by creating copies (e.g., bot-generated retweets on Twitter). Second, CSN features may also work in distinguishing different granularities of labels, due to the tightly-formed communities in the network. For example, if we have labels of political leaning or other characteristics of sources, it is possible that we can separate them in the network space.

In conclusion, using the behavior of information producers provides a valuable signal in news veracity classification. This result points to a bigger-picture need to explore and understand tactics used by disinformation producers, not only for social interventions, but for automated support tools. If we can continue to structure information producer behaviors and tactics clearly, they can be used to aid our automated methods, which in turn can further our understanding of the news ecosystem.
Primary spaces of social interaction and insecurity in Matamoros, Tamaulipas

This article reviews the importance of gathering places in strengthening the primary social groups of individuals over the age of 15 years within six families in Matamoros, Tamaulipas. The relationship between primary social groups and spaces of social interaction is contextualized in an environment of insecurity fostered by the existence and violence of criminal groups who have managed to involve themselves in a range of significant activities in the city. Together with structural factors, insecurity has helped lead to a reconfiguration of gathering places between young people and adults; private and semi-public spaces predominate, while the intensive use of certain public spaces in the city has diminished.

Introduction

The purpose of this article is to reflect on the influence of insecurity in the city of Matamoros on the spaces of social interaction of a group of residents over the age of 15 years. This objective is achieved first by defining the primary groups of the interviewees and some external characteristics of these groups (type of group, number of groups, intensity of shared life). Additionally, there is an attempt to identify the public and private spaces that do or do not foster this type of social interaction, whether as generators or strengtheners of primary networks in the population. Finally, these aspects are contextualized in an environment of violence and insecurity fostered by criminal activity over recent years in the city of Matamoros, with the aim of understanding the effect of this context on the intensity of spaces of social interaction, primarily regarding the young people interviewed.

The text is organized as follows: First, there is an introduction that contextualizes and offers a general approach toward the issue; then, in the first section, the analytical framework of the paper is presented. Next, a methodological section considers the criteria for choosing the neighborhoods and interviewees for the study, as well as the type of interview and how the interview results were analyzed. Subsequently, the historical and urban background of the city of Matamoros is described, emphasizing the spatial distribution of the population and the availability of public and private spaces as existing and possible spaces of social interaction. The remaining sections discuss the results of the interviews and of the focus group.

Background and approach

Currently, the long workdays of heads of households and seasons of unemployment and underemployment limit the time for participating in family activities as well as for forming peer groups and mutual interest groups that strengthen the values of social interaction within and without the family.

There are other aspects of the social lives of working families living in large urban centers that have an impact on the strengthening of their primary networks. For example, the construction of public residential developments in outlying areas without enough urban services, such as education, work, recreation, and commercial centers, forces residents to travel long distances from their homes to different locations to satisfy their different needs. Thus, the daily time for recreation and social interaction among family members is significantly reduced.
Together with these social factors, in recent years, the presence of the violence caused by criminal groups has been significant, and it has increased the feeling of insecurity in the daily lives of people in different social strata in Mexico. It has also caused forced displacement of the population in some regions of the country (Durin).

Primary social groups, gathering places and insecurity

Charles Horton Cooley (1909) is the first to mention the primary group. Previously, the discussion about aspects of the primary group centered on the traits of traditional society compared to the modern one characterized by the industrial revolution. Tonnies considers traditional society as that where family plays a predominant role and relationships are intimate, private, and with face-to-face bonds, as well as lifelong commitments and a common understanding based in harmony (Tonnies, cited in Dunphy, 1972).

Cooley defines primary groups as follows: "By primary groups I mean those characterized by intimate face-to-face association and cooperation. They are primary in several senses, but chiefly in that they are fundamental in forming the social nature and ideals of the individual" (1909, p. 23).

These are some basic characteristics of primary groups: 1) face-to-face association, 2) nonspecialized association, 3) relative permanence of the group, 4) a small number of people involved, 5) relative intimacy among the group participants (Cooley, cited in Dunphy, 1972), 6) a set of implicit norms that regulates the behavior of the members, 7) a high level of solidarity among the members (Shils, cited in Dunphy, 1972), 8) the group is based on and sustains the spontaneous participation of members, 9) the group involves emotional manifestations and expressions, and 10) the group allows relationships that are satisfactory in themselves, unrelated to other ends and not calculated and explicit (Schäfers, 1984).

Among primary groups, Dunphy includes 1) the family, 2) free associations of groups in childhood, adolescence, and adulthood (gangs and groups of political elites), 3) informal groups that exist in organizations (such as in classrooms, factories, military organizations, churches, and sports teams), and 4) resocialization groups (therapy, rehabilitation and/or self-analysis groups).

We will now consider some of the varying characteristics used to define primary groups, such as spatial proximity or the face-to-face relationship; these characteristics are observed especially in family members who live in different distant cities and who maintain contact through means such as the telephone or, currently, electronic media, such as the internet. The other characteristic is the primary transmission of norms and values, because primary groups also exist among adults, once they have consolidated their personalities and are clearly established in their identities (Berger and Luckmann, 1998; Heller, 1977).

Particularly notable is the counterbalancing role played by primary groups when faced with social aspects such as anonymity, isolation, alienation and role specialization (Schäfers, 1984). This quality of the primary group can provide society with citizens with a higher probability of accepting the commitments that come along with the activities specific to the secondary world, where social roles are important (connected to occupation, study, and politics, among others).
The role of these citizens is why strengthening primary groups and their shared spaces of social interaction is important in public policy, while at the same time seeking to improve quality of life and the educational, work, and social levels of the population.

The number of primary groups to which an individual belongs becomes his or her personal network, which at the same time is part of the social network that serves as a support to satisfy certain needs. In terms of social networks, the individual participates in different aspects of the group (personal, family, neighbor, community, citizen). In our case, we will only emphasize the first two dimensions, which are most connected to the emotional state and emotional supports of the members of these networks (Oddone, 2012).

One limitation of this study is that it does not investigate the internal nature of group function; that is, we do not examine whether it functions through solidarity, cooperation, trust and respect or whether it is an antisocial group that seeks distinction and separation and its own interests without respecting those of others. In this sense, we do not reflect on the degree of the social integration of the individual, although this integration is one of the functions of the primary group. We are only interested in contextualizing the urban environment, from the home to urban and public spaces.

In this text, we talk about different types of gathering places: public, private or pseudo-public spaces. Public space derives from the differentiation between private and public property. Public spaces are allocated for collective activities such as recreation, transportation, and cultural activities, among others, and free access to all should be guaranteed legally (Segovia and Jordán, 2005).

The focus on the meeting space helps us consider the different possible spaces for social interaction and be open to different possibilities and interpretations. Together with this aspect of the meeting space, we also utilize other interpretations of public space. First, public space is the territory where our differences and inequalities are reflected and the site where social power manifests and expresses itself (Salcedo, 2002; Valera, 2008). Lofland (1998) notes that in cases where private or intimate bonds prevail in public spaces, the spaces become privatized places. When work or neighborhood connections prevail, then spaces are said to be local spaces. When there are more strangers or outsiders, then we speak of public space. The challenge is promoting personal and group interests, in other words diversity, without ending up with privatization, exclusion, and division (Segovia and Jordán, 2005).

Therefore, if we think about public space as part of a social relationship in which individuals can give the space their own meaning according to certain social characteristics, class or their social role (e.g., man or woman, young or old, worker or student), then we can speak of a very heterogeneous space that depends less on its territorial configuration than on the perception of its potential for building and strengthening relationships.
In contrast, the home is the private space par excellence, the refuge when public space is privatized. The tendency in current society, according to Borja (2005), is the holistic search for social functions in the home. In the present day, these functions are designed to substitute public activities with modern artifacts: Television substitutes for cinema, the yard with a garden substitutes for the public park, and the internet and Facebook substitute for communication and face-to-face interaction between relatives and friends. This trend applies to the middle- and upper-class sectors but not to the majority of the population living in marginalized sectors, who live overcrowded in small homes. Therefore, we note that there is a movement from the public to the private and consequently a vacuum and deterioration in social space (Remedi, 2000). Safe activities in the context of violence and insecurity are considered to belong to private space, while dangerous activities are carried out in public and in less clearly defined spaces, such as alleys, alleyways, access elevators, and benches.

Nevertheless, there are those who maintain that public space helps socialize children and foments the creation of new friendships, among other benefits (Segovia and Jordán, 2005). Some places appear to be public (cinemas, buses, religious temples, private teaching centers, shopping malls) and are places where people congregate or gather but, in reality, are not public. Together with those, there are commercial spaces that have received a greater boost in current society, spaces where limits are imposed by the owners (Salcedo, 2002; 2003). These spaces are private property and therefore are not public, despite the fact that large groups of people come together in them. These are mainly malls (shopping centers) and, in our case, also private gathering places, such as restaurants.

Methodology

We consider it important to study the changes brought about by the insecurity and violence in Tamaulipas in a city such as Matamoros because it is one of the state's main cities that, together with cities such as Reynosa, Ciudad Victoria and the metropolitan area of Tampico, have suffered from the presence of organized crime for some time. Additionally, the population's daily life has been affected by this context.
Because we wanted to conduct an exploratory study, we decided to employ a perspective that would encompass cases of different social strata; therefore, we chose three neighborhoods that would reflect different socioeconomic situations. We based our study on a stratification using educational, income and occupational variables, and we chose three neighborhoods to work in: one from an upper-middle-class stratum, one from a median-middle-class stratum, and one from a lower stratum. Due to the situation of insecurity, we considered it important to conduct the interviews in private, at the home of the interviewee. Additionally, the families that we visited to interview individuals older than 15 years were chosen using the snowball technique, with the aim of situating the interview in an environment of trust and security for the interviewees and the interviewers. Originally, the plan was to interview 24 people; however, this was not possible because two were not in the city and we were unable to align our schedules with a third. The interviews were conducted between October 11, 2012 and December 13, 2012. In total, there were 21 interviewees: 11 young people and 10 adults, seven males and 14 females. Of the 21, 12 were workers, seven were students, and two were homemakers. The ages of the adults interviewed ranged from 27 to 58 years, while the range for the young people was 15 to 24 years. Additionally, the recommended families had a mainly female composition.

The interview was semi-structured and was composed of four sections. The first included identification questions; the second section concerned primary groups, with the interviewees asked about their friends, family relationships, and consideration and type of social interaction with their neighbors and those whom they considered their most intimate connections. In the third section, the interviewees were asked to talk about gathering places; a description of the interviewees' daily routine was included in this part of the interview, which helped us complement the information about spaces of social interaction, and we obtained details about different activities (e.g., meeting needs and going to the supermarket, school, and church). Finally, in the fourth section, the interviewees were asked about life before and after the period of insecurity. The duration of the interviews ranged from 20 minutes to one hour, with a mean duration of 37 minutes.

Families with at least three members older than 15 years were included. Thus, we have the perception of the problem from the perspective of members of a traditional family at a certain moment in their life-cycle.

The interviews were analyzed based on four major categories: primary groups, spaces of social interaction, perception of insecurity, and strategies for social interaction; each of these categories in turn has different corresponding subcategories. The interviews were transcribed, and analysis was conducted to identify the significant relationships in these categories.

A focus group was also conducted on January 4, 2013, with university students. The group was composed of six young people (four male and two female). This group gave us their perspective on the difficulties for young people in carrying out their recreational and personal activities in a context of high insecurity. The duration of the focus group was 1 hour and 26 minutes.
Spatial distribution of the population of Matamoros

Matamoros is a city with 493,000 inhabitants; it is the third most populous city in Tamaulipas. An important economic feature of the city was the installation of textile factories beginning in the 60s and 70s. The textile factories are one of the main sources of employment. These factories have promoted the immigration of workers from different states of the republic. For this reason, housing developments under the aegis of the Instituto del Fondo Nacional de la Vivienda para los Trabajadores (Infonavit) for maquiladora workers have proliferated. Currently, 40 percent of the existing residential developments in Matamoros are connected to this type of financing (Jurado, 2011).

The distribution of the residential zones in Matamoros begins from a nucleus around the Bravo River, close to the international bridges, and then spreads in a radial and irregular form toward the west, east and south of the city. As in the other border cities of the northwest, Matamoros has its city center (plaza, cathedral, government palace) relatively close to the border with the United States. Unlike other Mexican border cities, however, the sectors with the highest incomes are established around the center and closer to the Bravo River and the international bridges (Alarcón, 2000, p. 117; Castro, 2011); they are also distributed along significant roadways and close to central spaces, while the lower-income sectors are found on the periphery of the city (Alarcón, 2000, p. 123).

There is currently a more heterogeneous distribution of the different social strata in the urban space (Castro, 2011). The historical center of Matamoros is not directly next to the international bridge. The area between the center and the bridge was a commercial area with nightlife and entertainment and currently has a number of abandoned and closed facilities due to the lack of tourists and nightlife clientele.

In relation to the lower-income sectors, the population that resides in these residential developments lives in overcrowded conditions, with the average area per home varying between 43 and 60 m² with a 6 m front. Green areas are minimal, and the average green space per inhabitant does not surpass two square meters, while the mean recommended by the United Nations (UN) is 9 m² (Jurado, 2011).

Some considerations regarding public and private spaces in Matamoros

According to urban land use in Matamoros, there are 196 ha of green spaces, representing 3.24% of land use, while abandoned lots add up to 22.84% and residential use is 18.9%. Industry, commerce and equipment encompass 29.58% of the total urban surface area (Ayuntamiento Municipal de Matamoros, 1995). This means there is a great deficit in public areas for social interaction. The Municipal Plan of Matamoros (Plan Municipal de Matamoros) (1995) calculates that the city should have at least 340 ha of green spaces.

The deficit seems greater because the vast majority of green spaces are unused; they have not been set up for the population to be able to enjoy them but rather are hills of brush and abandoned lots. Some have been transformed and used to construct schools or churches.

Additionally, looking at their distribution (Figure 1), we note that green spaces are present in different neighborhoods of the city. This means that there is huge potential for the establishment of social interaction zones that could strengthen the primary connections of the inhabitants of Matamoros.
In a diagnostic of the spaces allocated for sports, health, culture and recreation in the city of Matamoros, Quintero (2011) notes that despite the sustained growth and the distribution of the population toward the periphery, the urban infrastructure allocated for these activities is concentrated in the central part of the municipality.

However, social interaction has also developed in closed spaces where the main objectives of the use of these spaces are commercial or service-oriented. In the case of Matamoros, Implan has located 12 commercial centers that are regularly mentioned as shopping places and occasional sites of family outings; the majority are located on significant avenues and distributed at different points throughout the city. Additionally, there are at least eight sports centers, some private and others related to schools or restricted to sports teams that compete in amateur sports leagues. There are five community centers, called Tamules, which were created recently and can be a great help in promoting gathering, mainly for those who live around those centers.

According to Table 1, we calculated the square meters of green space per inhabitant, both on the urban level and for the neighborhoods in the sample: there are 422,891.24 m² of green space in the locality, which, divided by the population of the locality (449,815 inhabitants), gives us 0.94 m² per inhabitant. In this case, the green space per inhabitant of an upper-middle-class sector, such as the Las Arboledas neighborhood, stands out in comparison to the other two neighborhoods; Lomas de San Juan and Cima 3 are residential developments built using Infonavit housing credits. Unfortunately, sufficient green spaces were not built into these residential developments, whereas in Las Arboledas, the green spaces were considered part of the promotional attraction of the residential development.

Sociodemographic characteristics of the inhabitants of Matamoros and the neighborhoods studied

The data presented refer exclusively to the inhabitants of the municipal seat, which in general contains 90% of the population of Matamoros. Some characteristics should be highlighted about the sample neighborhoods. First, according to the neighborhoods' locations and ages, Cima 3 is the youngest of the three, first populated in 2006. It is a neighborhood that, despite not being in a flood zone according to the official information, has flooded several times according to the press (Martínez, 2012).

The Lomas de San Juan neighborhood is the largest of the three neighborhoods considered and is, to a certain extent, more heterogeneous because it has small 64 m² houses together with houses that are larger than 100 m². Like the Cima 3 neighborhood, together with other neighborhoods, it has suffered the abandonment of houses by inhabitants who could not continue to pay their housing credit due to the unemployment crises that have occurred in border cities. One newspaper has reported at least 80 abandoned houses in the Lomas de San Juan neighborhood since the beginning of 2011 (Valle, 2011).

The Las Arboledas neighborhood is the oldest of the three. One of the interviewees informs us that he has lived in Las Arboledas for more than 24 years and bought his house using a bank loan. The neighborhood has sections in which only the lot was sold and the home was constructed according to the taste or means of the inhabitant. This is in contrast to the other two residential developments, which are more uniform in the design of the houses.
An indicator of the age of the Las Arboledas neighborhood can be found in the proportions of the adult population and those younger than 15 years. These proportions allow us to consider families whose life-cycle is approaching the "empty nest" stage, when the children leave to form their new family. For this reason, in Las Arboledas, the percentage of people older than 60 years is higher than in the other two neighborhoods.

One of the indicators that reflects the social stratum of the inhabitants of these three neighborhoods is the education level of their residents. Compared with Cima 3 and Lomas de San Juan, more professionals and people with higher incomes live in Las Arboledas, as observed in Table 2.

Another datum that tells us about the age of the neighborhood is the impact of migration. Being the newest neighborhood of the three, Cima 3 has a higher rate of migrants than the other two, while Las Arboledas presents the lowest rate. This datum can also reflect levels of social interaction. One could suppose that the more time a person has spent living in a neighborhood, the higher his or her probability of increasing ties with the neighbors; nevertheless, as deduced from the interviews, this tendency is not always manifested (Table 3).

Spaces of social interaction and primary groups: Neighbors, friend groups, common interest groups, extended family

With the aim of determining whether the structure of the interviewees' primary network and its connection to spaces of social interaction is maintained or whether it changes as a result of the insecurity that the interviewees perceive, we will analyze the results of the interviews conducted.

The interviewees of the Cima 3 neighborhood

Cima 3 is one of the city's peripheral neighborhoods, and this peripheral location makes distance a deciding factor when a resident is carrying out an activity outside of the neighborhood. This is especially true if we consider that the inhabitants of Cima 3 constitute a low- and middle-income population. A mother comments that when she purchases items for the house, she does so in a supermarket:

It's closer, and from there, sometimes we go to… yes, Lauro Villar (Avenue), and well sometimes,… we go in to walk around (name of a supermarket), the one that's here, is where we mostly, only… to begin with, it's the closest one that you have here in the neighborhood, since you see how far out we are, and this one, it's very good (Jiménez, 2012a).

Despite having a car, she says:

If I leave here for school with the children every day, it wastes gas no matter what, and so sometimes we think about it, and sometimes I tell him (her son), let's go, but let's go in the "pesera"… because man, it's rough (Jiménez, 2012a).
Distance is not an insurmountable obstacle. A young 18-year-old man who attends a school an hour and a half away (according to the time it takes using public transportation from the neighborhood) takes advantage of the route home to visit friends in the city center. What is reduced is the intensity of social interaction with people in primary networks who live far from the neighborhood. A father who likes to play soccer goes once a month to play with some friends who live at the extreme opposite end of the city. A mother mentions how distance has diminished the intensity of her social interaction with a friend:

Yes… if we get together, she comes to visit me, or sometimes, I get away and go see her, but it's not the same anymore. We do see each other but not like before, before she lived nearby and I went to have breakfast at her house, but now it's farther out (Jiménez, 2012b).

If the neighborhood is a great distance away from some recreational and commercial services, then one would expect the social interaction within the neighborhood to be intense. In this regard, we would state that there is a relationship with the neighbors among the majority of the interviewees but that almost none count their neighbors as part of their most important personal network. There are two exceptions - one in which brothers are neighbors and another in which one of the interviewees considers a neighbor to be his best friend. However, in general, there is communication between neighbors, sometimes on special occasions. For example, on Children's Day, the woman who runs the store provides candy for children and throws a small party together with some neighbors. Additionally, in connection with a particular female leader who belongs to a political party, the poorest neighbors receive food support from the municipal or state government, and some neighbors organize themselves for these tasks. A father states:

Sometimes we get in touch when someone needs a favor, we might lend each other a shovel or something, or if someone needs a tool; for example, I told the neighbor here the other day that I had a problem, and I asked her if she would call the police if I needed her to, and she said yes. So like, we do have communication with the woman living behind us, the neighbors on the two sides and in front (Jiménez, 2012c).

The friends of the interviewees in reality come more from the main activity in which they engage. For example, one interviewee, when working many years ago in domestic service, became friends with a woman who had hired her. It has been more than ten years since she left that job; however, the friendship continued, and she continues to see her friend approximately once a month because she lives in a neighborhood far from Cima 3. Similarly, the majority of the friendships of the young people who study come from school.

The family members of the interviewees are distributed in a more dispersed way than other groups. Some are immigrants, and their relatives live outside the city, some live in cities in Tamaulipas, others live in the United States, and some live in various states in the country; even so, there are intense relationships with relatives who live in the city, and for the young people, cousins are important in games and conversations.
The other important group is that related to play and recreation. The young people mainly play soccer. One of the interviewees belongs to a bicycling club, and another plays basketball; however, in the neighborhood, they have had difficulty taking part in these activities because there are no adequate places for games and sports. In relation to the places where groups are strengthened, we would state that the interviewees who play football consider the street an option because the park near the neighborhood is almost always occupied and it is difficult to find space there, especially in the evening hours when the young people are no longer in school. Another relatively close soccer field is private property, and it is necessary to ask permission to use it because it belongs to a textile factory. One of the fathers coaches a team but not one from the neighborhood itself, which does not have a team; as a result, he commutes to relatively far places. Another father, who is an immigrant, likes to play basketball but finds neither a court nor people who like the game in the neighborhood, and he still has not found a nearby option where he can practice. Other young people play basketball at school during recess.

The home appears to be the main place for family and friends to meet. The interviewees rarely go to the movies, and restaurants are barely mentioned in the interviews as places to get together, whereas the shopping mall is included as a family outing place and, in some cases, as a meeting space for young people; the beach is mentioned twice and is only visited on vacations, by the family.

In the opinion of one mother:

No, well it's that here there isn't even one public space; look, the green space we have is very neglected. So like, here there are no Tamules, there isn't any of that, where people can go, nothing, so people who have weekends off, we want to go out with the kids to distract them, be it to a store. Or, a lot of people take the opportunity to take them to the (a hamburger place) to have breakfast or whatever, because well, what are we going to do here all day locked up inside the house? There isn't even room to run around (Jiménez, 2012a).

Regarding the impact of insecurity, the majority of the interviewees consider the neighborhood to be a quiet place and have not been affected by any events, except one interviewee who mentions that he once had to protect a person who was being followed.

The insecurity has mainly limited their movements outside the neighborhood, especially at certain times of day, and they have stopped visiting some friends who live in some areas that they consider dangerous. The insecurity also affects visits with relatives who live in the rural part of the state of Tamaulipas; however, the structure of the primary network remains relatively the same, and the types of meeting places are maintained. What has decreased is the frequency of gathering with friends and relatives who live outside the neighborhood.
Interviewees in the Lomas de San Juan neighborhood

The Lomas de San Juan neighborhood is better located than Cima 3. For this reason, there is no allusion in the interviews to distance as a factor that limits the mobility of the population. The difference between this neighborhood and Cima 3 is in insecurity. The majority of the interviewees here agree that there are places within the neighborhood that can be considered dangerous. For example, one of the young women interviewed, a 16-year-old, mentions that one dangerous place is "here in front of the Tamul because there are times that soldiers can arrive and anything can happen" (Rodríguez, 2012a).

We also interviewed the woman in charge of the Tamul, and she states that the period of kidnapping at that location is over:

There used to be bad people around here, people into destroying things and into hurting themselves. There were a lot of young people taking drugs, there were a lot; honestly, when I arrived here (in the year 2011), there were violent people. There were people who didn't even want us to arrive because they were the bosses (Jiménez, 2012d).

What the woman in charge did was negotiate, and it seems that they established themselves in front of the Tamul and in other places:

They felt that they owned the place. And when speaking and talking with them and making them understand that everyone should find his or her place, I told them we are going to put ourselves each one in our place and each one has to do their own thing. My thing is to work with families and support them, and that is my work. If you want to destroy families, you know what, you have to do it somewhere else, because I am here. And I like to respect people, but people have to respect me, too. And I have been talking with the majority of the young people, and they have been a bit… I mean, they are not violent, they are people who understand. So, thank God so far until now they have been very good, off over there in their area and us here working with our families (Jiménez, 2012d).

Despite the efforts of the woman in charge of the Tamul, the interviewees express feeling unsafe within the neighborhood, and they view the public spaces as neglected. A woman who is an immigrant and has been living in this neighborhood for three years says it bothers her that the neighborhood residents are careless, dirty the area and do not maintain the neighborhood Tamul.

In reality, within the structure of the interviewees' primary network, there is social interaction with neighbors; however, neighborhood organization is not mentioned as one of the purposes of social interaction. The majority of the interviewees have friends who come from their neighborhood life, and there is more or less intense interaction between some neighbors.

Yes, in December, we get together at one house, and we bring food or something there, and we eat; last September 16 we also threw a Mexican party. All of us dressed up in traditional Mexican clothes and brought plates of typical food… There are two women who love nothing more than to throw parties to bring us all together (Jiménez, 2012e).

The network of friends also comes from workplaces or school, and there are no stories about people who the interviewees may have met in public spaces. Even the interest groups related to games, sports or hobbies are connected to the school and the neighborhood.
In interviews with three young people from the Lomas de San Juan neighborhood, we note the importance of the school in promoting the formation of primary groups, such as cheerleading squads and bands:

The first is the band from here from the CBTIS; we rehearse every Saturday from 8 to 1, more or less, now we are preparing for a state competition, it's going to be in Tampico… right now we are focused on that, but sometimes people get together; there are get-togethers in the teacher's house, the conductor of the band… I really like talking with them, playing or bringing out all the songs, helping them with the arrangements. You could say it's my hobby, it's what I like to do (Rodríguez, 2012b).

In relation to the family network, the interviewees maintain close ties with their relatives who live in the city. The places where they meet and share are usually their homes; some of them get together daily and others on weekends to talk and share their lives. They barbecue and obtain updates on the lives of their relatives who live in the United States or in other states in the republic.

Regarding social interaction spaces, one of the most often used by young people in Matamoros is a mall. The mall is important because it has movie theaters, clothing and shoe shops, and a food court that is perfect for young people to meet and make plans, check the internet or walk; additionally, it has a good location because it can function not only as a place for meeting, shopping and social interaction but also as a node for traveling to other parts of the city.

What makes this place useful is the possibility of multiple activities: One can shop, eat, take a walk and go to the movies if he or she has the economic means to do so. Additionally, in the context of insecurity, this particular mall, due to its location, has become a good place for the cheerleaders to practice when preparing for regional competitions.

The other social interaction spaces mentioned are houses and, for some young working women, hangout spots such as karaoke bars. Some green spaces are mentioned; however, they are visited occasionally, not regularly, and it is noted that they are near the city center.

For the interviewees, the insecurity has mainly affected them in terms of their commute, depending on the time of day they do so; however, in contrast to Cima 3, in Lomas de San Juan, insecurity has affected the internal life of the neighborhood.

One mother, whose best friend is an aunt of her husband and who has limited her gathering places, comments:

Yes, we go out sometimes, but yeah, before the situation was so difficult, the insecurity, we would go to the parks, we would go have some juice or to socialize, just walking to the parks, but now we prefer to meet at her house or at my house (Rodríguez, 2012c).
With her family, she has also been limited in taking walks; for example, she does not go to the Tamul with her children because she considers it dangerous. She laments having to go far to take walks with her children (the mall or Rotonda park) and wants an abandoned lot in front of her house to be developed:

I would like for my daughters to be able to go there so they could have a place to walk, distract themselves, something closer so they're not looking for a park that's so far away and everything and so they can have a place to distract themselves and walk, and my son, too, so he could have a place to play so he won't be putting himself in danger in the street, somewhere he could ride his bicycle safely. I mean there is space, there's this whole area, but it's empty and everything, but it's neglected, there is space to do that, and it would really help out (Rodríguez, 2012c).

It is mentioned that some have left this neighborhood to live elsewhere: "well there are people who have left the neighborhood for that reason, yeah, yes there have been people who have moved to the United States" (Jiménez, 2012f).

Well I would like a solution for so much insecurity, to be able to live with my family in peace without worrying. I wish that one could go out like before, go out to the stores, without being afraid of getting kidnapped or of one's kids getting kidnapped. You just don't know anymore, I mean, how it's going to be, mmm... how can I put this, well you don't know what you're going to find in the street, what you're going to run into or not, what kind of person you're going to bump into in the street. You're afraid when you go out (Jiménez, 2012f).

Interviews in the Las Arboledas neighborhood

Las Arboledas is a neighborhood with better indicators of well-being that, similar to many other neighborhoods, also floods. Its advantage is due to its access to the avenues, industry and the city center, located near the Bravo River, which serves as a barrier making the neighborhood semi-closed. One resident who moved there three years ago says of Las Arboledas:

Supposedly it is "private," quote unquote because there is no access to other neighborhoods other than Las Arboledas, that is, you have to enter and then go back again, supposedly it's a so-called residential area, well we get flooded, but at least there isn't mud like where we used to live (Rodríguez, 2012d).

Those interviewed in this neighborhood have university degrees, with higher income levels than those in Cima 3 and Lomas de San Juan. Some have relatives in Brownsville, Texas, and this broadens the social interaction spaces and the options regarding insecurity.

Therefore, both young people and adults extend their recreation and meeting places toward Brownsville. As one comments to us, they go "with my cousin, over there in Brownsville in the (name of the place) to dance. It's a bar but where we can dance, country music, and the mall, that's all" (Rodríguez, 2012e).

Additionally, the types of spaces where the young people from the upper-middle class are most connected are places of consumption or private places. In addition to meeting in restaurants and cafes, they also have the possibility of exercising in private paid spaces: "The gym is in a plaza by the beltway, it's a gym for only women, and what's great about it is that you can go with confidence because it's only girls and it's not a mixed gym" (Rodríguez, 2012f).
As with interviewees from the other two neighborhoods, friend groups come from the centers of work and study, except for one interviewee, who mentions that she met her boyfriend at a party; friendships, according to the interviewees' comments, do not emerge from meetings in public places.

In this neighborhood, a lower level of social interaction with neighbors is also noted; however, it has not always been this way, as explained by a father in the Las Arboledas neighborhood:

Well, at first, we were invited to look into issues in the residential development, they invited all the inhabitants of the then-small neighborhood to someone's house, but it's been many years that we don't participate in that because it's like it seems to be becoming very political. For example, they invite us to somebody's house when they want us to vote for a certain candidate and that sort of thing, so no. And maybe then it's with the aim of getting a vote, not really to improve the condition of the residential development, so no (Rodríguez, 2012d).

Regarding insecurity, the interviewees comment that both within the neighborhood and in the entire city, violent acts appear in the forms of roadblocks and gun fights in the street. An interviewed mother notes:

Well, here it happened to us once, witnessing a problem in the street in front… they closed the road with armored cars and all that, and we saw that they shot someone, but in the case of our house, it hasn't suffered any problems of that kind, but yes in the neighborhood, you hear a lot that there are gun fights near the river, more in the east of the neighborhood that there are a lot of gun-fights and house raids. We haven't seen it, but they do say it happens (Rodríguez, 2012g).

A father notes that the insecurity does not affect his activities but that it does affect their duration:

In reality, we haven't stopped doing our normal activities, but the impact has been that if we are at a get-together, we try to get back earlier than normal. Because, for example, for family gatherings, we would sometimes get back at 3 in the morning, 2 in the morning and now no, we find a way to get home at 11 at night or around then (Rodríguez, 2012d).

Gathering places before and after the insecurity or the intensification of the violence

An activity unique to the young people was what they call rol, performed in cars; this activity began occurring regularly on two streets, the Paseo de la Reforma and Álvaro Obregón Street. As one young person notes:

I remember before, when I was in high school, we would go out usually, I remember one meeting point was the Paseo de la Reforma, we were always there, the police would come, we would get in the cars, and ten minutes later, we'd be there again, they wouldn't say anything at all (Jurado, 2013).

Another public place that was often visited by young people was Bagdad Beach, and their comments express how these places were part of a set of fun weekend activities that established the necessary places for social interaction and getting together. These activities began in the street where youth made plans and chatted and then continued in the clubs and ended up on the beach. As commented in the group: "The beach, too, you remember, we would go at all hours, sometimes we'd come out of the club and go, or at 12 at night, we would go for a stroll" (Jurado, 2013).
Another popular event on the beach was organized by the students of the Tecnológico Regional de Matamoros. During Easter week, they would get together, beer companies would set up shop, and the beach would become a party place. Now, according to the perception of the focus group participants, to hold that event, it is necessary to ask permission from the criminal groups who have taken over such recreational spaces. Some young people comment that even the beer companies have been affected:

That's how it is, to do something you practically have to ask permission, they say yes, in the case of bars, but they say, let me ask permission from those people (in this case permission for the party on the beach), because you can have music and then suddenly these people show up and tell you to lower the music or everything is over, get out, shut it down, just because they say so (Jurado, 2013).

For this reason, one young man's conclusion is clear: "really they have taken over our things, there is no respect for the government… they have hands in everything" (Jurado, 2013).

Obregón Street was the place for tourism and youth nightlife. During the day, tourists went to restaurants on that street or saw dentists or doctors located in the commercial and services corridor, while at night, young people did their rol and visited the nightclubs known as antros. Currently, the social practice of the rol has disappeared, despite the efforts of the young people who tried to revive it but did not have support from the authorities: "Even, not long ago, using social networks, we agreed to open the roles again, all of us got together, but then when we got there and people were getting together, these people showed up and again, let's get out of here" (Jurado, 2013).

This situation is in contrast to those of other cities such as Tijuana, where, after the violence and the lack of tourism on Revolución Street, young people and national tourism took over this space. In Matamoros, Obregón Street remains empty, as the young people say:

No, Álvaro Obregón has lots of empty spaces, it's dead. If you go on a Friday night to Álvaro Obregón, where before it would be full, that street is where people did the rol, it's empty and that was the place where the clubs were, the antros and all that, and now everything is desolate (Jurado, 2013).

Now, the social interaction spaces extend to Brownsville. Various young people tell us about places that they visit in that US border city. These places are usually public spaces for consumption, such as restaurants, bars and clubs that were in some cases founded in Matamoros and are now located in Brownsville. However, they also include concerts: "Even the concerts that radio stations would hold aren't done anymore; if they do them, they do them in Brownsville" (Jurado, 2013).

The only action that the government has implemented to revitalize social interaction in open spaces is to manage events with the best security possible. It seems that unless there is an evident and large security presence, people do not go to public places that offer open shows. As the young people tell us:

And there was a lot of security at the Tri concert, it was impressive. What really drew my attention was a concert that was held in the plaza… I think that's the strategy right, tons of security so people can go (Jurado, 2013).
Conclusions

In Matamoros, we see a municipal strategy of centralizing public spaces. After listening to the interviews and conducting a focus group, we conclude that this strategy can be a mistake because the population has difficulties commuting daily toward the center of the city. Their movement is limited due to their daily work routines, their low incomes or the peripheral location of the residential developments where they live. Social interaction in these concentrated public spaces, at a certain point, becomes not daily but rather extraordinary, limited to when there is an event, or weekly, when it is possible to go out with one's family.

Criminal groups use the main avenues of the city as their workspaces or as their escape routes from the threat of the military. For this reason, people are afraid to move from their neighborhoods toward the center of the city; the less distance that people travel, the better it is for them because limiting their travel increases their sense of security. For this reason, they are implicitly asking the municipal government to invest in the public lots close to their homes.

Daily social interaction requires nearby spaces; the majority of the interviewees note the lack of green space in their neighborhood. The advantages of green space are enormous because this space would spare them the need to travel, giving them more time for social contact with a greater number of closer people, increased social interaction with their neighbors, which is currently very rare, and a strengthening of activities for children.

The issue of spaces close to home is important in the populous sectors, which have small houses that do not make intimate social interaction or personal activities very easy.

We should also take into account the different dimensions of personal social interaction - games, conversations, health, strengthening of identity and learning - to determine the profiles of public space. To date, this and other cases show us that public space is losing ground in the struggle for family and personal social interaction due to the "preference" for private spaces (the home) or semi-public spaces (commercial centers, schools and restaurants).

This privatizing tendency and this deficit already existed before the context of violence and insecurity, and they are reaffirmed with the limitations imposed by the actions of criminal groups. The structure of social interaction spaces does not change; however, their accessibility and use do, such that primary groups continue to meet in private or semi-public spaces but not with the same frequency or intensity. In the fight for public space, the criminal groups manage to impose unwritten norms and rules on some, such as on the beach or in the streets where young people would do their rol. Even taking into account that public space is not mainly defined by its territorial nature, we can say that in the case of the rol, the power of criminal groups is causing the disappearance of one of the most important public spaces for young university students.

This struggle for public space, above all tied to the access routes to different social interaction spaces, is being won by the actions of criminals, such that these groups are becoming highly powerful actors that clearly limit the population's mobility and ability to strengthen and expand their personal and social networks.

Figure 1: Distribution of the green spaces in Matamoros, Tamaulipas and in the neighborhoods considered for the study.
Table 3: Sociodemographic indicators in Matamoros and in the neighborhoods of the study, 2010. Source: Authors' table created from the database of the Área Geoestadística Básica (AGEB), which comes from the 2010 population and housing census (Inegi, 2011). Note: The percentage of female heads of household is calculated based on the particular homes that are led by female heads of household and is not based on the number of members nor the number of female heads of household.
Revisiting the Global Knowledge Economy: The Worldwide Expansion of Research and Development Personnel, 1980–2015

Global science expansion and the 'skills premium' in labor markets have been extensively discussed in the literature on the global knowledge economy, yet the focus on, broadly speaking, knowledge-related personnel as a key factor is surprisingly absent. This article draws on UIS and OECD data on research and development (R&D) personnel for the period 1980 to 2015 for up to N = 82 countries to gauge cross-national trends and to test a wide range of educational, economic, political and institutional determinants of general expansion as well as expansion by specific sectors (i.e. higher education vs corporate R&D) and country groups (OECD vs non-OECD). Findings show that, worldwide, the number of personnel involved in the creation of novel and original knowledge has risen dramatically in the past three decades, across sectors, with only a few countries reporting decrease. Educational (public governance, tertiary enrolment and professionalization) and economic predictors (R&D expenditures and gross national income) show strong effects. Expansion is also strongest in those countries embedded in global institutional networks, yet regardless of a democratic polity. I discuss the emergence of 'knowledge work' as a mass-scale and worldwide phenomenon and map out consequences for the analysis of such a profound transformation, which involves both an educated workforce and the strong role of the state.

Supplementary Information: The online version contains supplementary material available at 10.1007/s11024-021-09455-4.

Introduction

Starting in the late 1960s, although much more evident since the late 1990s, the terms "knowledge economy" or "knowledge society" have been used to describe a fundamental shift in how modern societies view economic resources, value production and the bases of political, social and cultural life. Such a "post-industrial economy" would be characterized by a stronger premium on skills, innovation, research, development and university knowledge (Drucker 1969; Bell 1973; Frank and Meyer 2020; Stehr 1994; Välimaa and Hoffman 2008). Indeed, at least formally, the knowledge base of societies around the world has increased considerably in the post-World War II era, indicated by rising numbers of universities, students, science associations and publications as well as rising levels of funding and efforts to scientize policymaking. These changes are accompanied by a political discourse that stresses scientific knowledge, innovation, and excellence as drivers of social and economic development, making the notion of the "knowledge economy" both a sociological diagnosis and a political agenda (Drori et al. 2003; Moisio 2018).

It seems that modern societies are rapidly moving towards the "schooled society" - a concept that describes societies' transformation toward social systems that are characterized by the expansion of education and science and whose signature is a profoundly reshaped economic system and labor market where nonmaterial goods and creative skills are becoming critical resources (Baker 2014; Goldin and Katz 2009; Wyatt and Hecker 2006; Zhou 2005). While many aspects of such a schooled and knowledge society have been analyzed, a crucial phenomenon has remained surprisingly implicit - the expansion of, broadly speaking, knowledge-intensive work and related personnel.
I view the large-scale knowledge-based transformation of economies - as a consequence of the growing importance of education and science in modern societies that emerged at the end of the 20th and the beginning of the 21st century - as having strong implications on job content across economic sectors and countries. More precisely, this study focuses on the expansion of innovation-related personnel composed of "professionals engaged in the conception or creation of new knowledge, products, processes, methods and systems" (UNESCO 2015: 741). This R&D-oriented job description includes but moves beyond academic research and instead comprises a wide range of occupational tasks from (non)governmental sites to universities and for-profit corporations. All tasks reported here involve originality, creativity, and uncertainty as well as higher-order thinking, commonly performed in a systematic and reproducible fashion. Such tasks cut across traditional categories of economic sectors, industries and even educational degrees, yet share a designation as research-related, non-routine, novelty-oriented and open-ended job content that explicitly includes aspects of basic, applied and experimental research (OECD 2015). Obscured by the omnipresence of the knowledge economy discourse, we know surprisingly little about the empirical evolution and driving factors of this knowledge-creating segment of contemporary job markets in a longitudinal and cross-national perspective.

This article empirically revisits the knowledge economy by examining global trends in the evolution of R&D capacity and their underlying causes. Analyses draw on cross-national data on R&D-related personnel for the period 1980 to 2015, both aggregated and by sector (academic vs. corporate) for up to N = 82 countries (OECD 2020). I apply regression models with country panels of various sizes in order to test a number of hypotheses about potential determinants of research expansion. Drawing on several theoretical perspectives, these determinants reflect educational, economic, political and institutional arguments. Results show a striking increase in the number of knowledge personnel worldwide, with only a few countries reporting decline. On average, OECD countries more than tripled their R&D personnel since the 1980s, while non-OECD countries see their number of professional knowledge workers doubling since the mid-1990s. Panel analyses for the post-1990 period suggest that educational predictors (especially public higher education governance and tertiary enrolment) show strong effects, yet economic factors, i.e., funding and GDP, also matter. Expansion is also strongest in countries with strong linkages to global institutional networks, particularly in non-OECD countries. Additional sector-specific analyses show that both corporate and academic expansion are mainly associated with educational and economic factors as well as global embeddedness. I argue that these findings indicate the global growth of a knowledge-intensive occupational field, which cuts across sectors and countries and which is becoming a central pillar of globally-embedded (although not necessarily liberal) schooled societies if aided by strong state support.

The Expansion of Education and the Transformation of Work

Education and science systems worldwide have seen dramatic expansion in recent decades, while economies and labor markets are being transformed toward a stronger premium on non-material goods, knowledge-intensive skills and non-routine tasks.
In this section, I review the development of these twin trends and elaborate on how these are intertwined.

The Expansion of Education and Science

Education (particularly higher education) and science systems around the world have been undergoing a massive expansion, both historically and more recently. Novel academic (sub)disciplines and related faculty flourish and fuse, differentiate and establish themselves at a regular pace (Ben-David and Collins 1966; Brint et al. 2009; Frank and Gabler 2006; Wotipka et al. 2018). Higher education enrolment is experiencing unprecedented momentum worldwide, particularly in low- and middle-income countries, now accounting for a third of the global cohort with an increasing number of students continuing to the graduate level (Schofer and Meyer 2005; OECD 2015a; Trow 1999; UIS 2019). National scientific planning and policy has become a universal feature (Finnemore 1993), as have non-governmental national and international science associations that promote and protect the scientific cause (Schofer 1997). In the same vein, despite persistent disparities, government expenditures on higher education and R&D grow steadily in virtually all countries, as do non-governmental and industry investments in research (EC 2018; OECD 2020). Such favorable conditions have translated into unprecedented levels of scientific output, sometimes described as "global mega science" (Powell et al. 2017), with the modern research university representing the pinnacle of the knowledge society (see also Bornmann and Mutz 2015; Välimaa and Hoffman 2008).

Driven by a variety of political, cultural, social and technological factors, science increasingly resembles a world scientific system (Drori et al. 2003; Schofer, Ramirez and Meyer 2020). Concomitant with such expansive integration, the sites of research production multiply beyond the modern research university, now including extra-university institutes, hospitals, governments, international organizations and, importantly, a growing industry capacity for R&D (Zapp 2017; Etzkowitz and Leydesdorff 2000; Kaiserfeld 2013). Such knowledge expansion and organizational diversification has implications for our understanding of how work is performed in modern economies, an argument to which I now turn.

The Transformation of Work

The educational and scientific revolution of the 20th century has had measurable consequences beyond the direct observation of expansive enrolment, organizational infrastructure and policy relevance. A distinct set of important arguments refers to the more complex relationship between (higher) education, science and their effects on labor markets. Since the late 1990s, policy and scholarly debates have stressed the advent of the knowledge-based economy, sometimes equated to the service or quaternary sector (e.g. OECD 1996; WB 2003). This shift would entail the educational and neurocognitive transformation of work and the emerging educated workplace, with lifelong learning becoming a key in continuously upgrading skills (Zapp and Dahmen 2017; Baker 2009), while predicting the end of the post-industrial era and the onset of rapid obsolescence (Powell and Snellman 2004). To a large degree, these diagnoses are confronted with a daunting empirical challenge as most existing data do not adequately capture job content, let alone daily tasks and activities. Instead, conventional reporting assigns labor market segments to produced outputs to facilitate cross-national comparability (see Schofer, Ramirez and Meyer 2020; WB 2019 for a discussion).
What we know is that there seems to be an increasing premium on skills and, particularly, higher-order thinking skills. Zhou (2005), for example, finds that occupation status (or prestige) is increasingly based on social recognition of how much the work content is related to the use of abstract reasoning and authoritative theoretical knowledge in addressing the fundamental nature of things. Greater amounts of these qualities are associated with higher status and might also help explain the co-evolution of educational attainment and wage structure in the United States through the 20th century (Goldin and Katz 2009). Similarly, the so-called Oxford Study on the Future of Employment (Frey and Osborne 2013) sees "knowledge work" as a growing segment of the U.S. labor market and - provided it depends on creative and social intelligence - as largely immune to computerization. Recent research also corroborates the assumption that the expansion of higher education and the growth of the economy, and particularly the service sector (despite its heterogeneity), are tightly linked (Schofer, Ramirez and Meyer 2020).

However, while these and other studies point to important changes in value creation, occupational and credential structure as well as skill formation, they usually rely on country-case studies or focus on particular sectors and industries instead of depicting the transformation as a global trend that, in addition, cuts across various economic segments. These studies also present cross-sectional or short-range perspectives while largely ignoring the long-term trends. Most importantly, these contributions add little to our understanding of the changing job content, that is, the actual daily activities and tasks. The following section explores the complexities of occupational transformation and presents hypotheses that outline the expansion of the knowledge economy as a multidimensional phenomenon.

Explaining the Expansion of Professional Research

With the quasi-omnipresence of scientific knowledge, research and development in modern economies, it is reasonable to expect the expansion of knowledge-intensive activities and associated personnel to occur everywhere in the world - even with such an expansion occurring at a varying pace. There is, however, little theorization about the causes of this process. I argue that such an expansion is mainly driven by a set of factors which are commonly associated with a (neo)liberal model of society with higher education expansion at its core. In addition, this broad institutional set includes economic and political determinants as well as factors of global embeddedness. These key institutions are, in theory, all conducive to facilitating the large-scale change described as the "knowledge economy". The goal of this study is to gauge the extent to which these determinants matter and whether they play out distinctively across sectors and country groups. This section reviews theoretical assumptions about the knowledge economy and builds hypotheses that inform my empirical analysis.

Educational Factors: Higher Education Expansion, Governance and Professionalization

Here, I draw attention to a first set of causes that reflect educational mechanisms and, more specifically, mechanisms related to the size, governance and professionalization infrastructure of the higher education system.
Higher Education Expansion

First, I rely on recent advances in the sociology of education where traditional arguments of simple human capital and demand-supply logics have been questioned. In these arguments, research expansion is not just determined by demographic pressure and related social mobility and stratification trends (the "pipeline logic") (Baker 2009; Collins 1979). It is true that with more people being certified to enter higher levels of academic training, they are likely to do so, and the recent surge in mass-absorbing private higher education institutions worldwide has been explained by an unprecedented student demand otherwise not met by extant public institutions (Buckner and Zapp 2021). However, the pipeline and human capital argument can be extended in that academically-trained employees do not simply enter the labor market filling vacant positions which are variably described as R&D-related. Rather, these schooled individuals themselves transform job content instead of merely responding to market demand (Baker 2009). The science-driven rationalization of society and the economy reconceptualizes formerly noneconomic activities such as child care, health services, education and legal services as well as philanthropy as economic and monetarized activities (Schofer, Ramirez and Meyer 2020). Equipped with the analytical toolkit of academic training, including creative, analytical, methodical, abstract as well as critical thinking skills, a growing number of graduates turn extant task profiles into opportunities for the creation of novel processes. Instead of reproducing routine, the reproducibility of knowledge creation becomes the routine and is increasingly associated with much prestige (Frey and Osborne 2013; Zhou 2005).

H1a: Expansion of R&D personnel is stronger in countries with high postsecondary enrolment.

Governance

Second, I emphasize that the expansion of knowledge-intensive sectors requires strong political support, not only financial but also in a regulatory sense. The argument is somewhat complicated by the fact that the extent to which governments maintain control over the higher education system and innovation industries can, in theory, have either positive or detrimental effects on its expansion. While most state-dependent systems feature cost-free studying, many countries with strong public oversight also restrict access to specific academic fields, either through a quota or numerus clausus (e.g. for medicine in Ireland or Germany) or competitive national exams (as in the case of admission to the grandes écoles in France). By contrast, private systems are believed to produce more autonomous universities, making them more successful at - and indeed in need of - creating linkages to industry research and in acquiring external funding, which might make them more resilient in times of stagnating public funding (e.g. Labaree 2017; Schofer and Meyer 2005). At the same time, most higher education and science systems in the world are heavily state-backed, and a change in science policy can deeply impact the trajectory of R&D. Importantly, the argument extends beyond higher education. For example, the launch of large-scale theme-related research programs can sustainably alter the fabric of research (Zapp, Helgetun and Powell 2018). Both historically and in the more recent period, strong state intervention (e.g. space exploration in the U.S. and Soviet Union) and a favorable regulatory framework (e.g.
genetics in China) creates the critical mass and necessary freedom to boost innovations which would otherwise hardly see the light of day. Similar efforts in research on cancer, climate change, educational assessment or the recent Covid-19 pandemic are also examples of "commissioned" agendas, with countries as diverse as Singapore, Taiwan and South Korea proving that tight governance, strong public oversight of higher education systems and industry regulation do not necessarily mean a trade-off vis-à-vis dynamic innovation (Cantwell and Mathies 2012; Mok 2010; Wang 2018). Following this latter argument, I posit that the growth of the knowledge economy shows higher rates in countries with higher degrees of steering capacity as reflected in strong public higher education systems.

H1b: Expansion of R&D personnel is stronger in countries with strong public higher education systems.

Professionalization

More than half a century ago, Wilensky (1964) noted wide-spread professionalization based on the creation, certification and protection of specific stocks of knowledge. Obviously, knowledge-based specialization is a key attribute of R&D. Disciplinary and professional associations and related journals not only fuel much internal boundary-drawing and differentiation in academia, they also establish professional identity, certify membership and signal legitimacy across industries and labor market segments (Meyer 1977; Brint 1994; Abbott 1988). While such professionalization is usually analyzed in terms of specialization benefits, there are wider consequences to the proliferation of university-trained professionals in that they systematically rationalize - applying analytical, systematic and replicable research-like methods - in every domain in which they emerge (Drori et al. 2003; Frank and Meyer 2020). Although the data used in this study captures a wide range of knowledge-creating activities beyond traditional intra-mural or PhD-based tasks, academic training infrastructure capacity can be seen as the prerequisite for the expansion of knowledge-intensive personnel, as it equips future employees with the analytical and creative skills thought to be at the core of the knowledge economy. As Ben-David (1971) and Brint (1994) noted, the institutionalization of the American graduate school laid the foundation stone for the subsequent professionalization of generations of researchers, of which only very few (can) remain in academia (see also Fernandez et al. 2021 for a recent analysis).

H1c: Expansion of R&D personnel, particularly academic research, is stronger in countries with higher numbers of doctorate-granting institutions relative to the size of the higher education system.

Economic Arguments: Development and Investments

Prevailing arguments hold that R&D is an important engine of economic growth and/or a response to growing societal complexity. In its classical formula, it resembles the tenets of human capital and differentiation theories (Schultz 1971; Becker 1994). In such a utilitarian "science for development" policy model, science is understood as a national, systemically-planned and economically viable tool to foster progress (Drori et al. 2003). More recently, such instrumental ideas of research creating spillover effects in the guise of academia-industry collaborations, patents, university spinoffs and the like have become part and parcel of national and international science policy discourses (see, for example, Etzkowitz and Leydesdorff 2000; Slaughter and Rhoades 2004).
In this perspective, confronted with increasing complexity and differentiation in society and markets, academia adopts industry logics, whereas industry incorporates those from academia. It is important to note that this argument is complicated by the possibility of reverse or two-way causality if it is assumed that R&D is mostly occurring in higher education. For instance, Schofer, Ramirez and Meyer (2020) find that higher education expansion is associated with expanded economic activity overall, particularly in the service sector (see also Valero and Van Reenen 2019). In these accounts, the specific link between economic growth, wealth, education and innovation is specified via the strength of the service sector. However, the service sector is highly heterogeneous and crudely measured, including both very high- and low-skill jobs, often assigned to sectors of primary activity instead of based on task profiles. Additionally, by definition, the OECD based on the Frascati methodology collects data across sectors (see below). As this study is interested in R&D as a general feature of modern economies across sectors and industries, such unidirectional assumptions are of limited use. As a consequence, I formulate the hypothesis based on broader causal relationships.

H2a: Expansion of R&D personnel is stronger in economically-developed countries.

Related arguments hold that R&D expansion is a function of investments into knowledge infrastructure. For academic R&D expansion, this line of explanation echoes so-called "externalist" arguments from the sociology of science whereby the growth of science is primarily determined by conditions outside of academia, notably funding (Cantwell and Mathies 2012; Elzinga 2012). Following this logic, richer countries dedicate more resources to their training and research systems, and can accommodate larger numbers of researchers who, in turn, enter industry and transform jobs and labor markets. Large government-led research programs as well as "excellence initiatives" also channel vast funds to both the academic and, indirectly, the corporate sector, not only altering the course of scholarly thinking but vastly increasing the personnel base in the particular field (Zapp, Marques and Powell 2018). In addition, a growing number of research councils (including the European Research Council) have designated funding streams targeting university-industry collaboration in research, development and training (e.g. the EU's Marie Skłodowska-Curie programme). In general, much of the OECD discourse on science expansion, for example, pushes for higher investments in national innovation systems through R&D expenditures (OECD 2018). If these R&D funds are channeled toward the higher education system, this functionalist discourse reflects the prevailing idea of universities as having a "third mission" - in technology transfer and wealth creation.

H2b: Expansion of R&D personnel is stronger in countries with higher levels of research and development expenditures.

Democracy

Historically, authoritarian regimes have had little interest in an educated citizenry (e.g. the Khmer Rouge in Cambodia or 'Bantu education' under apartheid South Africa), and science grew more rapidly in democratic regimes (Ben-David 1971; Merton 1968). For example, highly regulated admission and limited access had contained the growth of higher education in communist countries for decades (Ramirez 2002; Baker et al. 2004).
Until the present day, universities continue to be the cradle of critical thought and resistance across the globe. They become subject to surveillance and oppression in illiberal regimes and, even in liberal polities, may suffer reputational damage and legitimacy loss due to state repression (Schofer, Lerch and Meyer 2018). At the same time, industrial R&D, especially of the applied kind, flourishes everywhere, even in states such as China that are otherwise oppressive. However, with private property (including intellectual property) and return on investment being more strongly protected in liberal polities, I form the hypothesis in its traditional variant.

H3: Expansion of R&D personnel is stronger in democratic polities.

Global Embeddedness

Science and industrial R&D have always been highly internationalized professional fields and have become more so in recent decades with the growing mobility of researchers and the highly-skilled as well as the proliferation of digital media and international scientific and professional associations (Schofer 1997; Heilbron 2014). Here, science and R&D are understood both as a commodity and a global policy model of national and, more recently, global human development (Drori et al. 2003; Buckner 2017). I focus on international science and professional associations as promoters of research and innovation as a means to foster progress. Membership to international organizations has often been treated as a "receptor site" for global institutional change (Frank et al. 2000; Lerch 2019). Previous research on the expansion of secondary and post-secondary education enrolment, for example, has found strong support for the role of international discourses in such expansion (Schofer and Meyer 2005). The worldwide proliferation of science bureaucracies is also strongly associated with the work of international organizations like UNESCO (Finnemore 1993), as is the expansion of women faculty (Wotipka et al. 2018). These globally-operating associations advocate science and research and advise states and corporations on how to transform their internal policies. Linkage to these should lead to an accelerated uptake of R&D activities.

H4: Expansion of R&D personnel is stronger in countries with strong linkages to international science and professional associations.

Data

Dependent Variables

Data on personnel are provided by OECD and UIS/UNESCO and rely on surveys in accordance with the Frascati Manual (OECD 2015). For decades, the Frascati Manual has set forth the methodology and standards for collecting R&D data, and these standards are widely used by countries worldwide as well as the UN and the EU (Godin 2005). The Frascati Manual defines research and experimental development as "creative and systematic work undertaken in order to increase the stock of knowledge - including knowledge of humankind, culture and society - and to devise new applications of available knowledge" (OECD 2015b: 28). The R&D workforce, here, is defined as "professionals engaged in the conception or creation of new knowledge (who conduct research and improve or develop concepts, theories, models, techniques, instrumentation, software or operational methods)" (OECD 2015b). It is important to emphasize that being counted as R&D personnel does not depend on a person's educational degree. Frascati statistics reflect the actual activities more than the formal qualification.
Personnel defined as R&D-related can, in theory, hold any degree on UNESCO's International Standard Classification of Education (i.e. levels 1-8). The Frascati Manual distinguishes between three forms of research: basic research, applied research and experimental development. Analyses presented in this study do not distinguish between these three forms, since such fine-grained data are mostly incomplete cross-nationally; yet all three forms of research share the five key criteria of R&D: "novelty, creativity, uncertainty, systematic, reproducibility and transferability" (OECD 2015b: 46f). The heterogeneity of R&D activities, but also the internal complexity of many private enterprises, makes it difficult to classify R&D personnel according to traditional economic sectors such as agriculture, manufacturing or services. As these sectors are themselves highly heterogeneous (especially the service sector), the Frascati Manual prefers to group activities according to a combination of the main economic activity, the industry orientation, the product field or the knowledge domain. For example, research on improving the use of pesticides might still be classified as an agricultural product field. In general, private R&D activities can play out in any type of conventional sector or corporate setting (OECD 2015b). At the same time, R&D data in higher education are collected based on traditional disciplinary boundaries like the natural sciences, engineering, medicine, agriculture, the social sciences, and the humanities.

To provide more examples of how Frascati definitions play out in data collection, consider the following cases. An oral history project conducted by a religious charity would be classified as basic research in the field of humanities, performed by a non-governmental, non-profit organization. Also included would be a doctoral student in a public university hospital who, besides receiving training and providing health care, is explicitly involved in scientific R&D efforts. Other examples include the development of pilot plants, prototypes and research-based industrial design and engineering (OECD 2015b). Consider also examples of what is excluded from R&D activities. These include, for instance, the use of traditional knowledge in managing crops, the routine development of products based on traditional knowledge, and the storage and communication of traditional knowledge as well as religious or cultural practices (OECD 2015b). For the industry sector, it also excludes, among others, after-sales service, patent and license work, routine tests and data collection.

The advantage of the Frascati method is that it captures precise job content in daily tasks and concrete activities. This is unlike other studies where only official job descriptions, educational credentials, earnings and sectoral aggregations are recorded (see, for example, Goldin and Katz 2009 or Frey and Osborne 2013). This micro-perspective provides a more accurate estimate of the "knowledge turn" in job markets. At the same time, the OECD acknowledges that data collection is challenging due to overlapping activities, functions and sectors. Fully capturing all R&D-related activities may well be impossible, leaving the true extent of R&D expansion underestimated. This is especially true of the service sector (e.g. banking and finance), where blurry job activities are prone to surveying problems.
Further, actual data collection is conducted at local institutional and national levels and then reported to the OECD. The OECD cannot ascertain the degree to which the data are accurate and complete. Despite these difficulties, R&D personnel (and expenditure) data based on the Frascati framework are the most reliable source available to date and have remained largely consistent in methodology since the 1980s.

Analyses use R&D personnel per 1,000 employed at full-time equivalent (FTE) as the main dependent variable (OECD 2019; UNESCO 2019), comprising all sectors: governmental and non-governmental, higher education and for-profit/corporate R&D. FTE measures are especially useful for international comparisons. One FTE represents one person-year. For example, a person who normally spends 30% of her time on R&D and the remaining time on other activities (such as teaching, university administration and student counseling) should be counted as 0.3 FTE (OECD 2015b).

In addition to overall R&D personnel, analyses use two specific sectors as additional dependent variables, i.e. corporate R&D and academic or higher education R&D. I chose these two particular sectors for three reasons. First, data availability is greatest for these two sectors as compared to all others. Second, empirically, these two sectors account for the largest share of R&D personnel; in many countries, they account for over 70%, in some even 75%, of all R&D personnel. Third, scholarship has mostly focused on these two sectors. While this does not mean that the other sectors are to be neglected, I argue that the most pressing questions revolve around industry and higher education. Theoretically, one may argue that academic and industrial R&D represent very distinct spheres, with each requiring a different set of predictors. At the same time, they largely overlap in their embedding in wider society. For example, no higher education system could possibly absorb all of its graduates through academic positions. Further, both the academic and the corporate sector benefit from public support, either through funding, favorable legislation or (free or subsidized) education.

All outcome variables are measured at 5-year intervals. Data coverage is limited for the pre-1990 period, especially for non-OECD countries, and is further complicated by major political changes and associated reporting practices (as in post-Soviet countries). For the dependent variable, prior to 1990, sample size is N = 35 countries. For the 1990-2015 period, sample size is N = 82 countries; however, some countries are missing data for important predictors. I therefore run models with varying sample sizes.

Predictors

Educational Predictors

I use time-varying tertiary enrolment rates, both male and female (UIS 2019), at t-5, assuming a lagged cohort effect. To measure state control, I computed a time-dependent share of public versus private universities (ISCED 5+) based on the International Association of Universities' (IAU) World Higher Education Database of 17,129 universities (WHED 2017). The public-private distinction is based on the standard definition of the legal status of higher education institutions. I assume that a higher share of public institutions reflects stronger state control of the higher education system. In order to measure professionalization via doctoral training capacity, I compute the share of doctorate-granting institutions by country based on IAU's WHED (2017).
This is a time-varying variable based on various editions of the WHED and the International Handbook of Universities, measured at t-5. Both datasets contain information on academic structure, including universities' graduate programs. Alternatively, I measured doctoral training capacity as a per capita measure and with varying lags (e.g. t-10), yet results show no changes (available upon request).

Economic Predictors

I control for country-level economic development and, by implication, social differentiation using a time-varying measure of gross national income per capita from the World Bank (2019; logged to reduce skewness). I use time-dependent data on R&D expenditures as a percentage of GNI (UIS 2019; OECD 2019). Gross domestic expenditure on R&D (GERD) includes expenditure on research and development by business enterprises, higher education institutions, as well as government and private non-profit organizations. As I assume some time lag between input (expenditures) and output (expansion of personnel), the models include data for both indicators measured five years prior (t-5) to the dependent variable.

Political Predictors

I use a measure of a country's level of democracy from the Polity IV Index (Marshall and Jaggers 2017), which ranges from highly autocratic (-10) to highly democratic (10). I account for changes during the observation period by using the Polity Score as a time-varying measure.

Global Embeddedness Predictors

The Union of International Associations provides longitudinal data on national membership ties to international science and professional associations, gleaned from the Yearbook of International Associations. The variable is logged. Descriptive statistics for all variables are presented in Table 1.

Methods

R&D capacity expansion is analyzed using panel regression models with time-fixed effects, as the dependent variable consists of one observation per year, with countries producing variation for each observation. The literature suggests a variety of models to analyze cross-national and longitudinal data, including OLS regression with panel-corrected standard errors, fixed and random effects, models addressing serial correlation (AR1), and models with cluster-robust standard errors (e.g. Beck and Katz 2011; Plümper, Troeger and Manow 2005). There has also been much innovation in estimating dynamic panel models (accounting for time-series effects; see Baltagi 2008), and I conduct a model accounting for growth in the prior period in which each panel includes a dependent variable measured at time t and the lagged dependent variable measured five years earlier (t-5). Outputs in Appendix B (electronic supplementary material) show that results are consistent across all these different modeling approaches. A sketch of the baseline specification is given below, following the robustness checks.

Additional Collinearity and Robustness Checks

I observe one problematic correlation between tertiary enrolment and gross national income (r² = 0.66; see Appendix Table B1, supplementary material) and run models without GNI and with enrolment and GNI as an interaction. As results show little variation, I decided to keep the predictors as separate variables in the models since they reflect distinct theoretical arguments. Finally, although there are no outliers in the statistical sense (based on a boxplot examination), I also run a model without the top three minimum and maximum observations in the sample. Results show only minor changes (see Appendix Table B2, electronic supplementary material).
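As a point of reference, the following is a minimal sketch, not the author's actual code, of the baseline specification described above: the FTE outcome regressed on predictors lagged by one five-year wave, with time fixed effects entered as year dummies and standard errors clustered by country (one of the variants mentioned in the Methods section). All variable names, the file name rd_panel.csv, and the use of Python's pandas and statsmodels are assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per country-year (5-year waves),
# with columns named after the predictors described in the text.
df = pd.read_csv("rd_panel.csv").sort_values(["country", "year"])

# Lag the time-varying predictors by one wave (t-5) within each country.
for col in ["tertiary_enrolment", "gni_log", "rd_expenditure"]:
    df[col + "_lag"] = df.groupby("country")[col].shift(1)

d = df.dropna()  # drop country-years with missing values

# Time fixed effects via year dummies (C(year)); errors clustered by country.
model = smf.ols(
    "rd_personnel_fte ~ tertiary_enrolment_lag + public_share + doctoral_share"
    " + gni_log_lag + rd_expenditure_lag + polity_score + ingo_ties_log"
    " + C(year)",
    data=d,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": d["country"]})
print(result.summary())
```

A dynamic variant of the kind mentioned above would simply add the dependent variable shifted by one wave (t-5) as an additional regressor.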
Results: The Global Expansion of Knowledge Work

The following section presents descriptive data on expansion trends in general, and by groups and sectors in particular, before results from the regression analyses are reported.

Expansion Trends

In the past four decades, the expansion of R&D capacity has seen considerable momentum worldwide, growing from 3 FTE to 5.4 FTE per 1,000 employed in the period 1990-2015 (Figure 1). This growth is particularly strong in OECD countries, where a first boost took place in the late 1980s and a second in the mid-1990s. In the period 1980-2015, the number of R&D-related jobs in OECD countries more than tripled (from slightly below 3 to just below 9 FTE). Over the same period, non-OECD countries doubled their R&D personnel, with their increase also beginning in the mid-1990s. While the expansive trend can be found in virtually all regions worldwide, some countries show stronger growth patterns than others, and a small group of countries even report shrinking R&D personnel. Figure 2 presents the geographical variation in the changes of research capacity for the period 1990-2015, divided by four percentiles. The strongest expansion can be found in many European countries as well as East and South East Asia and Oceania, particularly in Singapore, Denmark, and South Korea, which increased their R&D personnel by a factor of ten and more. Middle-range increases took place in China, North America, Argentina and some European countries, while many Latin American, African and some Asian countries report smaller growth rates. As a striking finding, a group of N = 14 countries report decreases in R&D capacity. For example, post-Communist Russia, Ukraine and Bulgaria show a 40% decrease during the observation period, probably explained by significant emigration after 1990. The Gambia, Mali and Sri Lanka are the only African and Southern Asian countries with drops in knowledge personnel. Panama and Guatemala are the only Latin American countries to also report slight decreases (see Appendix A, electronic supplementary material, for country data).

Table 2 specifies these findings with annual growth rates for R&D personnel by period, country group, and sector. Overall personnel capacity increased by 3% in both OECD and non-OECD countries, slightly more strongly in the 1990s than in the last 15 years. Within the OECD, expansion occurs at a rate of 2.6%, whereas non-OECD countries expanded by 1.9%. However, while growth slows down in the OECD group in the more recent period, countries outside of the OECD catch up, showing the same rate (2.6%) in the post-2000 period. As seen above, some countries reflect a particular phenomenon of contraction. Removing these outliers (N = 3) from the sample increases the growth rate considerably, from 3.1 to 4.5%. Comparing sectors, expansion is primarily carried by the corporate sector, where growth rates surpass higher education by more than a factor of three (at 3.5% and 1.1% respectively), although a slight convergence takes place in the more recent period and when outliers are removed from the sample.

What Drives Research and Development Capacity?

Turning to the regression analyses, models present first general expansion and then sector-specific estimates. For general expansion, I present a series of models that include predictors one by one. Model 1 includes only educational predictors, Model 2 economic predictors, and Models 3 and 4 include political and embeddedness factors respectively.
Model 5 includes all variables. Coefficients remain consistent throughout all models, although effect size varies, pointing to some degree of collinearity (see also Table B1 in the Appendix, electronic supplementary material). Higher enrolment rates and a higher share of public universities are associated with R&D increase. The same pattern can be found for higher levels of economic development, funding and global embeddedness. Interestingly, Models 1 and 2 show larger values for explained variance (R² = .61 and .72) as compared to Models 3 and 4, also supported by significant time effects in these models. Turning to the full Model 5, educational and global embeddedness factors matter most. For example, growth in higher education enrolment (H1a) by one standard deviation is associated with an increase in R&D personnel of .05 standard deviations. Importantly, a strongly state-backed higher education system emerges as the most important predictor (H1b; B = 2.36), while doctoral training (H1c) shows no effect. Linkages to the global science associational network are also strongly correlated with R&D expansion, yet a democratic polity has no effect. Economic development and R&D expenditures (H2a and H2b) are both positively and significantly correlated with R&D growth. This supports arguments that the knowledge economy is also a phenomenon of high-income, high-investment countries (Table 3).

Turning to specific sectors, Models 6 and 7 present separate analyses for higher education and industry R&D personnel (Table 4). Starting with Model 6, change in higher education R&D is driven by student enrolment. Further, public systems are again much more likely to expand, as are those with a larger doctoral training capacity. Innovation systems more strongly linked to the global associational network also expand more strongly. Interestingly, growth in academic R&D personnel is negatively associated with GNI and R&D expenditures (H2a and H2b). By contrast, knowledge-intensive jobs in the industry sector are more likely to flourish in economically more developed countries and in those countries where the levels of R&D expenditure are higher. Importantly, tertiary enrolment also contributes to private R&D, and the public nature of the higher education system is, again, the strongest predictor. At the same time, doctoral training capacity is negatively associated with private R&D expansion, suggesting that the private knowledge economy is largely decoupled from academic training. Membership in international professional and scientific associations also correlates with expansion, even more strongly in the corporate than in the academic sector. As in Model 5, democracy has no effect on this sector-specific R&D growth.

As shown above in Figure 1 and Table 2, different temporal patterns can be observed for OECD and non-OECD countries. The two final models investigate whether predictors operate differently across these two country groups (Table 5). Indeed, some interesting differences can be identified. Educational predictors, namely tertiary enrolment and public governance, show stronger associations with R&D expansion outside the OECD, while doctoral infrastructure exits both models (while being positive in non-OECD countries). Plausibly, the economic development measure is higher in OECD countries, yet investments play a stronger role in non-OECD countries.
Interestingly, while democracy has no effect for either OECD or non-OECD countries, links to world society turn out to be highly negative inside the OECD, yet positive outside the OECD.

Discussion: Bringing the State Back into the Schooled Society?

Much of the discourse on the knowledge society has stressed that the constant creation of new knowledge is becoming a fundamental feature of late modern societies, transforming the economy, labor markets and other domains of public life. While this debate has been held with both enthusiasm and skepticism, it has also been held in the absence of solid empirical data providing insights into the knowledge economy as a historical and worldwide phenomenon that is strongly reflected in the changing nature of work.

Analyses presented in this article illustrate a notable increase in the number of knowledge personnel worldwide, with only a few countries reporting a decline. Strikingly, the strongest contraction can be found in some post-communist countries, namely Russia, Bulgaria and Ukraine as well as Uzbekistan, Romania and Kazakhstan. As the observation period for these countries begins in 1990, their shrinking R&D capacity very likely reflects massive emigration waves. A well-educated, highly skilled workforce, particularly in fields such as science, technology and engineering, took opportunities outside their then-struggling economies once such mobility was granted (Baker et al. 2004).

More generally, OECD countries have more than tripled their R&D personnel since the 1980s. Strong increases occurred during the mid-1980s and after the mid-1990s, coinciding with an intensified public debate about the knowledge economy at that time (Stehr 1994; OECD 1996; WB 2003; Välimaa and Hoffman 2008). The growth of R&D personnel in OECD countries remains linear, adding around 4% of knowledge-intensive jobs per year to labor markets and accounting for almost 1% of all jobs. This trend began later for non-OECD countries; since the late 1990s these countries have doubled the number of professional knowledge workers, and growth rates for the last decade have converged between OECD and non-OECD countries at a 2.6% annual rate. Such a delay supports arguments that the knowledge economy was a phenomenon of wealthy nations before it reached other societies, notably the emerging economies in East Asia and Latin America (Zapp 2017).

The knowledge-based transformation of labor markets is also strongly driven by the private for-profit sector, where growth is much stronger than in higher education, as predicted in early accounts (Bell 1973; Drucker 1969). At the same time, the role of education and strong state support appears to be crucial in understanding the large-scale process of economic transformation at work. Having a robust public higher education system emerges as the strongest and most consistent predictor of large R&D personnel growth. State-led postsecondary systems are consistently more likely to propel R&D capacity, including in the private R&D sector. Within this group of strongly state-backed systems, we find such diverse cases as Singapore, South Korea and Denmark, which all prove that strong state oversight does not contradict the dynamic of R&D expansion (Cantwell and Mathies 2012; Etzkowitz and Leydesdorff 2000; Mok 2010). This finding is also in line with previous research which sees the state as an active agent in "interventionist" research governance (Zapp, Helgetun and Powell 2018; Elzinga 2012; Cozzens and Woodhouse 1995).
The extent to which state involvement matters might come as a surprise and will require further scrutiny in future research. Schofer and Meyer (2005), for example, use the degree of political centralization as an alternative measure of state control, finding negative effects on higher education expansion. However, their variable is not time-varying (it represents the situation in 1970) and, more importantly, in many countries a decentralized structure does not mean low state involvement in educational matters (e.g. Canada, Belgium, Germany). It is likely that the variable used in this study, i.e. a large share of public higher education, reflects multiple overlapping institutional characteristics. Instead of assuming that strong states use education to control society (as in Communist regimes), we could argue that they use education to steer the economy. Inasmuch as countries can constrain access, they can also flood higher education with the same political will to leapfrog their economies. Interestingly, it is rarely acknowledged in the extensive literature on the booming private sector that, in parallel to the (undoubtedly impressive) growth in private higher education, the number of public institutions has also been growing at a similarly high rate and continues to accommodate the majority of students worldwide (Buckner and Zapp 2021).

Further, higher tertiary enrolment, although weaker in its effect size, is associated with R&D growth across all models, supporting the large literature on the transformative role of an increasingly educated society (e.g. Baker 2009). The effect of tertiary enrolment is higher outside the OECD, where growth rates have been catching up with OECD countries and their earlier expansion (Schofer and Meyer 2005; UIS 2019). While previous research has established a relationship between higher skills (via higher education) and occupational status as well as earnings, this finding contributes to understanding the more fundamental process at work, as tertiary education might indeed drive the rationalization and "academization" of job content in general (Baker 2014; Goldin and Katz 2009; Wyatt and Hecker 2006; Zhou 2005). As university graduates carry their analytical skills to the job market, they increasingly import and apply cognitive templates of abstract and universalist thinking and transform what was once a routine task into an opportunity for knowledge creation (Schofer, Ramirez and Meyer 2020).

At the same time, specific academic training, such as doctoral schools, matters only in the context of higher education. This finding makes sense, especially given that Frascati data are collected regardless of educational degrees. The fact that academic professionalization is only modestly correlated with higher education R&D expansion, and even negatively correlated with R&D expansion in the corporate sector, might, in addition, point to a bottle-necked pipeline in which large numbers of graduates queue up for a limited number of PhD-adequate positions (see Wotipka et al. 2018 for a similar argument). In this perspective, it appears that the knowledge economy is largely decoupled from traditional academic training at the doctoral level, an argument that might help update earlier accounts of professionalization processes (Abbott 1988; Brint 1994).
Alternatively, the reason for the negative relationship between doctoral training and corporate R&D expansion might be that firms are hesitant (and sometimes even resistant) to utilize employees' full skills potential and, instead, in an effort to increase productivity, prefer to routinize job tasks, increasingly supported by artificial intelligence. As recent studies have found, such underutilization of skills actually leads to a decrease in productivity and a decline in skills (see, for example, Lane and Murray 2015; Quintini 2011). Without delving into the notoriously difficult discussion on how to measure skills mismatch and over-qualification, future research might benefit from this analysis to further investigate companies' (in)capacity to adequately utilize academic skills.

The state also enters as a funder of R&D. High investments are consistent predictors of R&D expansion except for the academic sector, where they become negative, which is probably explained by the separation between R&D and higher education funding in most countries. Large multi-year and multi-theme funding programs, which usually reach out to various sectors (including the corporate sector), are examples of how economic transformations can be propelled (Zapp, Marques and Powell 2018). The finding that these investments matter more outside the OECD might be explained by the higher growth rates in these countries (OECD 2019). Additionally, specific state-led research programs like excellence initiatives can now be found everywhere in the world (see Ramirez et al. 2016).

Moreover, it seems that the knowledge economy grows concomitantly with development, yet the causal relationship is difficult to ascertain. Economic growth is a multidimensional phenomenon, just like the expansion of R&D under study here. Whereas the prevailing argument holds that knowledge is the engine of development, it is unclear which sector is concerned with such development. The large service sector, as included in additional models (Appendix B, electronic supplementary material), shows no significant effect, and in more traditional arguments, knowledge is a response to growing social complexity and not its primary cause.

Perhaps surprisingly, democracy shows no significance. As Figure 2 indicates, R&D growth is a global phenomenon including all kinds of polities, and some countries have high growth rates yet low democracy scores (e.g. Singapore, China). This discrepancy illustrates that the occupational and economic transformation under study here, like others before, can take place in the absence of a decidedly liberal political order. While a democratic system seems to favor higher education expansion (Ramirez 2002; Schofer, Lerch and Meyer 2018), for the business sector the analysis even finds a negative association (albeit not significant). At the same time, INGO membership, as a barometer of openness to world society, matters in the more recent post-1990 period and across sectors, supporting the argument that R&D expansion is closely tethered to particular forms of globalization. Science, research and, in general, any form of modern rational knowledge flourishes most in exchange with international collaborators (Adams 2013; Finnemore 1993; Heilbron 2014). Science associations are an indication of how well a country is connected to the global knowledge discourse, and even though R&D does not seem to require a liberal polity, it seems to thrive most where expertise can flow freely (Schofer 1997).
In this, the transformation of national economies into knowledge economies might represent a blueprint of national development, promoted by all kinds of international organizations and eagerly absorbed into national development plans by political decision-makers around the world, often despite the absence of evidence for how viable such a model actually is, as has been shown for many other policies and sectors before (Drori et al. 2003; Zapp and Dahmen 2017). Interestingly, across OECD and non-OECD countries, the effects of INGOs reverse, with non-OECD countries showing a strong association. This might suggest that such blueprints of the knowledge economy are imported from frontrunners via these international organizations (e.g. the WB and OECD) as transmitters of global models before they are taken up and translated into national economic policy. It is important to note, however, that all these concomitant factors ultimately depend on the interplay of an educated citizenry and a state that is both open (i.e. embedded in world society) and strong (i.e. in terms of funding and higher education regulation), which together provide fertile conditions for knowledge-creating jobs to emerge.

Awkwardly, the results imply that future studies need to consider the role of the state in both its liberal and illiberal variants. This might come as a surprise since, historically, education and science have flourished most where individual liberty was strongest (Ben-David 1971; Merton 1968), and they remain targets of illiberal ideology worldwide up until today (Schofer, Lerch and Meyer 2018). Reminiscent of the varieties of capitalism (Hall and Soskice 2001) and the varieties of mass education expansion (Boli et al. 1985), this study suggests that there might well be varieties of the national knowledge economy, with a democratic polity being a non-essential condition for success.

Conclusion and Outlook

Analyses presented in this article suggest that the rise of the global knowledge economy is, from the mid-1980s up until today, indeed a tangible phenomenon reflected in the large-scale expansion of R&D-related jobs worldwide, yet particularly in high-income countries with strong public higher education sectors, high levels of enrolment and funding, and deep linkages to world society. While the knowledge economy is a heterogeneous transformation, it is ultimately characterized by the growing importance of original, creative, and non-routine tasks, which are increasingly valued across forms of government, sectors, industries and institutional settings.

Future studies might benefit from including the knowledge-intensive personnel data presented in this paper as both dependent and independent variables. For the latter, the growing body of research interested in scientific output and collaboration, and also economic growth in general (Powell et al. 2017; Schofer, Ramirez and Meyer 2020), could increase precision by controlling for R&D personnel. Additional predictors could help improve the explanatory models presented here, in particular concerning shrinking capacity. Many former communist countries have seen major drops in R&D personnel, probably due to the emigration of a highly skilled labor force once borders opened up after 1990. As specific models cannot prove the relevance of a communist transition, the underlying mechanism more likely reflects emigration as a more general phenomenon.
Although difficult to obtain, longitudinal data on the international mobility of high-skilled workers could help in assessing whether and how strongly outbound mobility impacts innovation systems, and would provide a sounder basis for debates on brain gain and brain drain, both across countries and sectors. Finally, while the knowledge economy is usually celebrated as a cleaner, greener and healthier economic era, with opportunities for all, its aggregate effects on labor markets and particular job segments remain subject to debate (e.g. Frey and Osborne 2013). The knowledge economy may not imply a simple zero-sum game in which knowledge creators win and mere users and reproducers lose, yet it will be an important task for future research to identify the consequences of the "cognitive edge" for labor market access and income equality, but also for geopolitical positioning and the liberal order on a global scale. The ambiguous role of the state in steering these processes and reaping their benefits, just as in creating them, would then require particular attention.
The Neoliberalization of Development: Trade Capacity Building and Security at the US Agency for International Development

This paper examines recent changes at the US Agency for International Development (USAID) regarding the connections between trade liberalization, development, and security. USAID has adopted "trade capacity building" as a framework for development and, in conjunction with new US national security discourses, now operates under the assumption that underdevelopment is a source of state weakness that produces insecurity. I argue that these changes in how USAID understands and undertakes development constitute the neoliberalization of development. In accordance with these shifts, USAID has redefined critical aspects of its development mission, undergone internal restructuring, and altered its relationship with other US state institutions and capital. The actual prospects for achieving security or development are slim, however, as the agency remains wedded to definitions of both that suggest the only acceptable role for the state lies in facilitating further neoliberalization and promoting the stability of capitalist class relations. An overview of USAID's historical development, and a closer examination of the place of food aid and food security in the agency's development work, demonstrate this.

INTRODUCTION

In a January 2004 White Paper, the US Agency for International Development (USAID) outlined a program of internal change designed to redefine the objectives of its foreign aid programs, reassert the connections between development and security, and reinvigorate the agency's relevance within the US state's foreign policy structure. Following from longstanding criticisms of the agency in the context of changing geopolitical realities, USAID (2004b:7) noted that "development progress has been prominently recognized as a vital cornerstone of national security" in the post-9/11 world, with "weak states" providing the most critical points for "widening the circle of development" and combating global terrorism. This work addresses how USAID understands and institutes this connection between security and development, focusing on the role of "trade capacity building" in agency rhetoric and practice. Trade capacity building emerged from the 2001 Doha Round of WTO negotiations, and although state institutions in the US and many developing countries have adopted it as a development strategy, it remains vague and difficult to assess in practice. Trade capacity building plays an important role in ongoing efforts to reorient state institutions within a more general process of neoliberalization by marshalling state practices, discourses, and institutions of development in support of trade liberalization, capital internationalization, and US geopolitical and security objectives.

USAID offers a rich case for examination in this respect. The US government's primary foreign development institution, USAID today holds a pivotal position in the material and ideological reorganization of development theory and practice along neoliberal lines. Though dwarfed by other US state institutions such as the Departments of Defense, State, and Agriculture in terms of budgets and personnel, USAID is strategically important both domestically and internationally.
The agency bears responsibility for setting out, funding, and implementing US development assistance and disaster relief policies; in 2004, it directly managed a foreign aid budget of over $4.5 billion, and was part of the management and administration of $7 billion more in official assistance in collaboration with other US government agencies (Lancaster and Van Dusen 2005:14). The agency also has a considerable field presence, despite recent cutbacks, with almost 100 missions currently listed on its website (USAID 2007b). As the US is the largest provider of official development assistance, USAID plays a central role in shaping the international political and economic contexts of development, while its global reach and need for institutional partnerships make it an important site for intra-state political battles. USAID's political fortunes thus affect those of many other state and non-state institutions, and shape the lives of millions of people in developing states.

Two complementary theoretical frameworks underpin my argument, each addressing different aspects of the changing relations between security, trade, and development instituted by and through USAID. The first, drawing on McMichael's (2000a) heuristic framework, identifies a transition between "hegemonic projects" of "development" and "globalization," marked by the changing role of the nation-state relative to capital in conjunction with the reproduction of the international state system. While this framework focuses on systemic shifts and should not be read as an all-encompassing metanarrative, it does group together under its broad headings multiple, perhaps incommensurate, projects and discourses, albeit in ways that emphasize the dynamic hegemony of particular historical blocs, institutions, and states within a global political economic totality. The trajectory of a major developmentalist institution such as USAID exemplifies the broad transition between hegemonic projects that McMichael (2000a) identifies as a defining feature of the international state system and of global capitalism. It likewise offers an illuminating example of how this transition has occurred within the US state as part of its own particular state project of neoliberalization. As Panitch (2000) and Panitch and Gindin (2003) point out, the failure to adequately engage with the specificity of the American state is a lamentable silence in many recent discussions of neoimperialism and neoliberalism. Still, it is important to build on McMichael's account by referring to the concrete, differential ways in which such projects have been instituted and struggled over, to prevent presenting neoliberalization (and its status as a hegemonic project) as a uniform or uncontested process. I temper McMichael's framework by recognizing what Castree (2006:4) calls the "contingently occurring processes and outcomes that may well have operated differently if the 'neo-liberal component' had not been present," and the "articulation between certain neoliberal policies and a raft of other social and natural phenomena."

The second underlying framework therefore relies on Jessop's (1990, 2001, 2002a) strategic-relational approach to the capitalist state.
This approach centers the instituted nature of capitalist economy and society, the strategic political behavior of state institutions and non-state actors (particularly capital), and the state's polyvalent position relative to other institutional forms and processes. It regards the state as a contingently coherent ensemble of institutions with differing degrees of strategic selectivity, and concentrates on the state's role as both a site for the articulation of strategies and as a strategy itself. Or, as Glassman and Samatar (1997:167) argue, the state is understood to have a "triple identity" as "1) a site for strategies; 2) the generator of strategies; and 3) a product of strategies." Combining these frameworks allows for close examination of how exactly the political projects of particular class-relevant groups, such as internationalizing capitalist fractions, are elevated to the status of state projects, and in turn how these shape hegemonic projects that order the international state system and class relations on a global scale. Such projects, and the scalar configurations they produce and through which they operate, are reflexively constituted and constantly contested.

It is important here to keep in mind Jessop's (2002a:42) definitions of state and hegemonic projects; the former refers to the specific ways in which the prevailing political project of dominant social forces "seeks to impose an always relative unity on the various activities of different branches, departments and scales of the state," while the latter constitutes the means by which dominant social forces "seek to reconcile the particular and the universal." Neoliberalization comprises just such a potentially hegemonic project, conditioning the environment of political decision making at all scales; the various context-specific strategies of neoliberalization taking root in (and rooting out) different parts of the state apparatus, such as USAID, are equivalent to Jessop's definition of state projects. Restructuring and strategy changes at USAID should be understood in relation to broader changes in hegemonic (but contested) understandings and practices of development and security, which depend in part on the balance of political forces institutionalized in and through the state.

USAID's adoption of trade capacity building constitutes a central part of a much broader reconfiguration of development and security that narrows both to functions of a global market logic enforced by and internalized within the state. This understanding of neoliberalization follows from that of Peck (2001), Peck and Tickell (2002), and Harvey (2003, 2005), in which neoliberalization no longer implies the simple weakening or dismantling of states as a general principle, but instead emphasizes institutional rollout and the reorientation of state institutions toward the facilitation of international market forces and away from wage-based social equality and downward redistribution. More broadly, as Peck (2001) demonstrates, neoliberalization alters the very character and logic of the strategic policy-making environment, including that related to international development, while Roberts et al (2003) articulate how neoliberalization has reproduced traditional realist geopolitical practices and imaginaries under the gloss of multilateral integration.
USAID plays a vital but underexamined role in these processes. It has undergone restructuring itself, redefined critical aspects of its mission, and altered its external relations to center trade capacity building as a primary mechanism and goal of development. Recent shifts in US national security discourses have forced further changes in the agency's approach to development. The actual prospects for achieving security or development are slim, however, as the agency remains wedded to definitions of both that suggest the state's only acceptable role lies in facilitating further neoliberalization and promoting the stability of capitalist class relations.

I begin by examining USAID's role in state and hegemonic projects of development and neoliberalization, emphasizing the relations between trade, development, and security within recent agency strategies. A discussion of trade capacity building, which centers good governance and access to global flows of capital as objectives of state development efforts, follows this section. I then examine how USAID has articulated trade capacity building as it repositions itself within the US state and relative to national security strategies. To provide a concrete, if necessarily incomplete, examination of neoliberalization driven by trade capacity building, I conclude by considering the place of food security within USAID strategies.

This analysis is primarily focused on major strategic programs outlined by USAID, not all of which have been implemented fully, but which the agency has nonetheless used as a basis for restructuring and reworking its planning and allocation processes. I thus examine the agency's adoption of neoliberalizing approaches to development and security through analysis of key government-issued texts supplemented by interviews with officials. This approach has limitations, as it does not provide a thorough examination of actual, on-the-ground processes within specific programs, but it can provide insight into the institutional contours of neoliberalization as it relates to security and development. Likewise, it should not be assumed from this analysis that USAID presents a monolith with a single endogenous set of interests and approach to development; reliance on agency-produced information here limits my study largely to the institution's outward face, and the agency's internal divisions are well documented (see Lancaster and Van Dusen 2005).

USAID, 1961-2002

Without specifically referring to USAID, the 2002 National Security Strategy of the United States (NSS) asserts two goals that bear on the agency's mission. The first states that US national security is dependent on a "strong world economy," best built and maintained through "[e]conomic growth supported by free trade and free markets" (White House 2002:17). This, in turn, depends on promoting appropriately liberalized trade policies that "can help developing countries strengthen property rights, competition, the rule of law, investment, the spread of knowledge, open societies, the efficient allocation of resources, and regional integration" (White House 2002:19).
The second major goal relevant to USAID highlights the security risks produced by underdevelopment, and calls for "an expanding circle of development…and opportunity" as both "a moral imperative and one of the top priorities of US international policy" (White House 2002).

Both as an ideology and as a set of concrete practices, the hegemonic development project presented national elites in underdeveloped states with "little choice but to industrialize," with the success of state-building and industrialization "the measure of their success as political elites." While this does not mean a given state could choose only one of two paths, development or non-development, it does recognize that the road to development, as understood and instituted within the limits of a US-dominated hegemonic project, was relatively narrow, and that the scaled power relations constituting, reproducing, and enforcing this project greatly constrained the strategic selectivity of state institutions. The limited range of acceptable development strategies in postcolonial states, as Glassman and Samatar (1997:181) argue, resulted from both the institutional capabilities of state managers and bureaucracies, and from the international alliances maintained by powerful class-relevant groups in those states, with non-class-based social hierarchies and the class consciousness of specific leadership groups also important.

Within the context of this project's ideological and political hegemony, the 1961 Foreign Assistance Act established USAID by consolidating the technical and economic assistance and lending activities of several other state agencies (USAID 1964:5). The new agency became a central site within the US state for managing military, economic, and food aid to developing states, activities crucial to the interscalar coordination and reproduction of state and hegemonic projects and the international state system. The broad economic development and geopolitical objectives of USAID's mission, and the ideological link between the two, were clear from the agency's founding. USAID (1964:5) described its mission as "assist[ing] other countries that seek to maintain their independence and become self-supporting," linking the twin ideological foundations of capitalist economic growth and political resistance to international communism. USAID attempted to enact a totalizing vision of development in which national-scale capitalist growth, catalyzed by foreign assistance, and a political "rational humanism" would lead to a free society providing the conditions for "individual choice, initiative and development," all within a US-led international state system (USAID 1962:22).

Yet USAID was at pains from its establishment to make clear that its foreign aid and development programs did not undermine US hegemony. First and foremost, the agency had to demonstrate that foreign aid did not drain American coffers, and that such expenditures would yield sizable economic and political returns, even if development meant the growth of protected and even state-run domestic industries capable of competing with or limiting US imports. To counter "misconceptions" about its work, USAID (1964:8) made two points prominent:

1. American dollars are seldom given directly to foreign countries. Most economic aid involves the financing of US goods and services for specific development activities.
2. Economic aid to the industrialized countries (Western Europe and Japan) was ended years ago. AID programs now are concentrated in the underdeveloped countries of Asia, Africa, and Latin America.
USAID did not deliver handouts of taxpayer money to countries where such funds were no longer necessary to spur investment, growth, and political reform. Instead, foreign governments paid American sellers for military, industrial, and agricultural assistance with loans managed by USAID. Furthermore, the agency stated that economic and political development abroad was not an end in itself, but a boon for the United States. This was particularly true for trade relations, as USAID (1966:32) argued that "[d]eveloped countries are the best customers for American exports," citing Western Europe and Japan as examples. In terms of food aid, one of the agency's most important functions, USAID contended that development would help not only poor peasants and growing urban areas in the developing world, but also American farmers and food exporters. Food donations came from commodity stocks designated surplus by the Department of Agriculture (USDA), while industrialization and urbanization in the developing world demanded the US fill the subsequent gap in developing states' ability to feed themselves. Most important was USAID's firm statement that "the aid provided…cannot substitute for trade," a point reiterated even more strongly today (USAID 1963:6). Food aid programs were not to interfere with normal channels of private trade, and USAID presented food assistance as a path toward market development for future US exports. Food aid was not usually a direct government-to-government donation, but a concessional sale made with long-term (up to 30 years), low-interest dollar loans from the US government. Thus were foreign exchange reserves built up, accounts balanced, and political allies rewarded, all as USAID (1963:18) assured US producers and exporters that this constituted an effective way to "maintain or expand present markets and to develop new outlets." Food assistance, in the form of surplus food reserves distributed to geopolitically strategic developing states, became a major component of US state and hegemonic development projects, and the basis for US "green power" (Garst and Barry 1990; Kodras 1993; McMichael 2000a, 2000b).

By the 1970s, however, some in Congress began to criticize USAID for "having too many people in Washington," a situation remedied by transferring agency personnel to other US state institutions (Mustard 2003a:42). While USAID maintained strong influence over the work of development specialists in other agencies both directly and indirectly, the loss of personnel should not be regarded as simple bureaucratic weight-shifting. Such moves signaled a serious challenge to USAID's mission and political standing, and to the hegemony of the development project as a whole. In light of persistent economic stagnation and US failures in Vietnam, the entire purpose and structure of US development efforts were challenged during this period. Criticism became so strong that "in 1971, the Senate rejected a foreign assistance bill authorizing funds for fiscal years 1972 and 1973," the first rejection of foreign aid authorization since before the Marshall Plan (USAID 2005a).

FROM DEVELOPMENT TO NEOLIBERALIZATION

During the 1980s, USAID faced withering criticism from those in government and business circles who favored greater trade liberalization and an end to the developmentalism of the previous three decades. In the context of neoliberalization, this became the potent basis for a redefinition of USAID's mission and a reworking of development's place in US foreign and economic policy.
During the late 1980s and early 1990s, USAID was the subject of several government audits, and reports emphasizing the agency's ineffectiveness, diffuseness, and lack of direction piled up, with legislation introduced (but not passed) in both 1989 and 1991 to replace USAID with a restructured and more flexible executive agency (GAO 1993:19). Further review of US foreign aid and development programs argued that the agency was "buffeted by (1) the competing agendas of other federal agencies, (2) the role Congress has taken in programming decisions, (3) the lobbying efforts of outside special interest groups, and (4) fundamental differences among and within these groups on how foreign aid money should be spent and what it should accomplish" (GAO 1993:4). Under these conditions, and with concomitant changes in the global geopolitical balance, USAID had seen its programmatic emphases expand to cover a panoply of emerging issues for which it could not provide effective management, and which threatened to worsen already apparent fragmentation within the agency and hamper international development efforts (GAO 1993). With a muddled institutional structure, a growing set of objectives, and a changing balance of political forces within the US state, USAID found its strategic selectivity increasingly limited, a situation that made internally managed reform difficult, if not impossible.

External criticism sharpened as trade liberalization, capitalist internationalization, and global economic growth became the paramount objectives of the US state and its particular project of neoliberalization. Agro-food capital was especially pointed in critiquing USAID, despite the fact that US agricultural producers had long found a ready outlet for surplus disposal in the agency's food aid programs. These sentiments were summarized by Richard Krajeck, vice-president of the US Feed Grains Council, who stated before Congress that "there have been countless instances where AID agricultural programs have been counter to US agricultural interests… [and] objectives of increasing agricultural exports and eliminating trade barriers" (US Congress 1994:40). Krajeck expressed frustration with USAID's inability to align development and trade, and offered a blunt assessment of its future:

The AID program is funded at $6.2 billion per year and has been primarily a foreign aid program that has been shaped by US political and strategic interests during the cold war. Those days are over and the mission of AID and its role in developing agriculture must be reviewed (US Congress 1994:106, emphasis added).

Unable to navigate the pressures of internationalizing capital, which saw USAID's development work as a fetter to accumulation, and the neoliberalization of other more powerful US state institutions, the agency was caught in a powerful vise, and underwent major restructuring in the mid-1990s. Major structural adjustments centered on reductions in the agency's budget, workforce, and control over how funds could be used. The fiscal year (FY) 1996 budget of $5.7 billion was 13% less than that of the previous year, while Congressional and executive earmarks expressly directing how USAID funds could be used increased from 59.8% of the agency budget in FY1995 to 69.5% in FY1997 (GAO 1998:133-134). The agency also cut its global staff from 11,150 to 7,609 between 1993 and 1997, closing 24 overseas missions and implementing streamlined regulations for coordinating overseas and headquarters staff (USAID 1998:133).
Restructuring also included the beginnings of consolidation with the State Department, which had long held strong influence over USAID, and placed the agency administrator below the Secretary of State in the bureaucratic hierarchy, indicating the agency's diminishing political stature. USAID offered in 1993 to become "a 'reinvention laboratory'" and "established five strategic goals to meet its agency mission of pursuing sustainable development in developing countries," the first two of which were "achieving broad-based economic growth" and "building democracy" (GAO 1998:134). These objectives were to be achieved through internal technical changes and improving the agency's "customer focus" and staff accountability (GAO 1998:134). In sum, the restructuring of the 1990s focused on aligning the agency's development mission with state and hegemonic projects of neoliberalization, most significantly by ensuring that development strategies supported liberalized economic growth and formal democratization.

This was not fundamentally new, as USAID had long maintained the necessity of development progress based on liberal democracy and capitalist growth. Yet the "cartography of development" through which the relations between development, democracy, and economic growth were understood, and the mechanisms by which these were to be achieved, were quite different by the 1990s (Peet and Watts 1993). Keynesian modernization theories enshrined in the development project, which understood the state as a conducive or benign agent of modernist development, gave way to neoliberal understandings of the state as a rent-seeking intruder into globalizing market relations. Without correcting USAID's inability to achieve foreign development through neoliberalization (and thereby successfully internalize and institute neoliberal orthodoxy), it remained an agency working at cross-purposes with the political and economic objectives of the increasingly dominant cartography of development authored by and through the US state. Understood as a technical problem resulting from poor internal financial and information systems management (GAO 2003a, 2003b) and adherence to failed understandings and policies of development, restructuring continued by incorporating trade capacity building into the heart of development efforts.

USAID AND TRADE CAPACITY BUILDING

The term trade capacity building (TCB) is relatively new, and is meant to move the international system beyond the impasse of discredited but institutionally entrenched development models. States and civil society must be brought into line with market mechanisms: civil society through active cultivation and states through limiting their functions to market facilitation and security provision. Phillips and Ilcan (2004) describe capacity building as one of the primary political technologies through which neoliberal governmentality is constructed and spatialized. They define neoliberal governance as the "ways of governing populations that make individuals responsible for changes that are occurring in their communities," with responsibility exercised and enforced through markets, which increasingly emphasize "skill acquisition, knowledge-generation, and training programs" (Phillips and Ilcan 2004:397).
This perspective highlights the ways in which discourses and practices of capacity building center on the creation and reproduction of social categories that mark off populations as either responsible members of open, market-based communities moving toward development, or irresponsible and potentially dangerous outliers (see Roberts et al 2003). Moving from the latter group to the former depends on acquiring the skills and knowledge that permit individuals to practice responsible behavior and allow for discipline via the marketplace. Diffusion of skills, knowledge, and training - investments in "social capital" and "human capital" - are the driving forces of neoliberalizing development (Rankin 2004). It is in this context, Jessop (2003) points out, that the networks praised by both Castells (1996) and Negri (2000, 2004) become a seductive but ultimately empty (and even celebratory) metaphor for understanding and challenging neoliberalization and neoimperialism. A more critical and useful analysis goes beyond recognizing the re-categorization of populations and places along axes of responsibility, and, as noted in the above discussion of the strategic-relational approach to the state, also considers the role of class-relevant social formations and struggles in the expansion and maintenance of political and economic power. A closer examination of how USAID has instituted trade capacity building, and what this means for state institutions' strategic selectivity relative to development and securitization, is one way to analyze the process of neoliberalization and its significance for development and security.

Trade capacity building follows from and reinforces the idea that state-managed foreign aid and assistance, the staple of past USAID programs, must be supportive of, and not a substitute for, trade and economic self-help by developing countries. This position echoes what USAID proclaimed at the development project's height, as discussed above, and relies on the idea that "development progress is first and foremost a function of commitment and political will directed at ruling justly, promoting economic freedom, and investing in people" (USAID 2004b:11). USAID defines "ruling justly" as "governance in its various dimensions: voice and accountability, political stability and absence of violence; government effectiveness; regulatory quality; rule of law; and control of corruption," while "investing in people" involves bolstering "basic education and basic health" services (USAID 2004b:11, fn. 8). This language draws from existing discourses of social capital and state effectiveness long favored by Washington Consensus institutions such as the World Bank and IMF (Fine 2001; Peet and Hartwick 1999). How closely on-the-ground implementation of trade capacity building hews to these conceptualizations is rather more problematic.

The invocation of political will, just rule, and state efficiency is a hallmark of neoliberal rhetoric, and suggests that trade capacity building is the latest in a long line of strategies designed to further capital internationalization and the reproduction of the US-dominated international state system. Yet the vague, catch-all character of trade capacity building in practice indicates that it is less a fully coherent strategic blueprint than the repackaging of existing development activities, meant to bring USAID in line with state and hegemonic projects predicated on the neoliberal doctrine of free trade and the neoconservative obsession with security.
A USAID official remarked that initial attempts to institute trade capacity building cast a very wide net:

[In the field] you would get these surveys from Washington, and they would say, we're trying to conduct an inventory of all our trade capacity building activities. And in the beginning - and I don't know how this has evolved - but in the beginning of those surveys, I mean, it was sort of ludicrous because virtually anything that we were doing in the economic growth sphere could be described as trade capacity building.

The danger USAID faces in so tightly intertwining itself with market-oriented state institutions and capital arises from the continual narrowing of the agency's strategic selectivity: neoliberal doctrine serves as the basis for agency work, and further neoliberalization is the intended outcome. The benefit comes in the form of larger budgets and even the reproduction of USAID itself, and the agency has received large appropriations to implement trade capacity building (see Table 1). While these numbers still represent a small portion of its total budget, trade capacity building has moved quickly up the list of agency priorities, and has gained prominence as a guidepost for continued and intensified neoliberalization (USAID 2004a). It is important to note, however, that even as USAID funding for TCB projects has steadily increased, the agency's proportional share of overall US government spending on such activities has decreased, due to increases in TCB funding channeled into sector-specific trade facilitation activities or into WTO accession, areas where capital and USTR command greater expertise.

Geographically, USAID has concentrated TCB funding in states where acceptable neoliberalization is already underway, in areas of geostrategic importance, particularly the Middle East, Eastern Europe, and the former Soviet Union (USAID 2001:6, 2003c:2), and in those countries eager to engage in free trade agreements. Since 2001, the agency's "TCB funding to countries with which the US is pursuing Free Trade Agreements (Morocco, the Andean Pact, CAFTA, and SACU) more than tripled," with much of this funding targeted at building institutions compatible with the requirements of WTO accession or specific features of bilateral and regional agreements with the US (USAID 2004a). 1 This differs from the geopolitical criteria previously underlying USAID development funding primarily in that trade policy has moved to the center of agency strategies, though this is complicated by emerging national security discourses focused on counter-terrorism and failing or failed states.

SECURITY AND STATE WEAKNESS

The second strategically and institutionally important change accompanying USAID's adoption of trade capacity building rests on the altered relationship between development and security, as outlined in the 2002 and 2006 NSS. Here, development bolsters "weak states" that might otherwise become havens for terrorist and criminal networks, which could then pose a threat to American interests abroad and domestically. USAID, the State Department, and the White House have therefore identified development, along with defense and diplomacy, as the three "pillars" of US security strategies (USAID 2004b:8; White House 2002, 2006).
The focus on strengthening "weak states" in new development schema demonstrates how the neoliberal understanding of states as rent-seeking regulatory burdens on market relations becomes strategically intertwined with the security concerns and objectives of neoconservatism (USAID 2004b:12; on neoconservatism, see Lind 2004). Two points stand out here. First, recalling that neoliberalization does not only or even primarily imply the rolling back of the state apparatus, the emphasis on trade capacity building demands that "weak" states be strengthened by removing trade barriers and making economic and social policy sensitive to liberalized global market signals. Second, weakness here stems directly from states' inability or unwillingness to properly insinuate themselves into the networks, flows, and institutions of neoliberal capitalism. Distanciation and disconnectedness from internationalizing capital is not only economically wrongheaded, but is the source of political and social weakness, producing insecurity that threatens continued capitalist accumulation under the rubric of neoliberalization. Roberts et al (2003:889) thus identify an emphasis on "enforced reconnection" with the global capitalist system, "mediated through a whole repertoire of neoliberal ideas and practices." Trade capacity building offers a potential and enforceable technical fix for disconnectedness, as being outside neoliberalization is to be against neoliberalization, and thus to pose a security risk. USAID Administrator Andrew Natsios made this clear in a May 2003 speech:

For countries that are marginalized, that are outside the international system, that are outside development, that are not developing, that are not growing economically, that are not democratizing, look at the different factors that lead to high risk in terms of conflict. Income level is one of the highest correlations between marginalized states and risks in terms of conflicts (USAID 2003a:n.p.).

The agency's 2004 White Paper expanded on this to provide a more detailed strategic framework for development and aid programs, establishing a loose taxonomy of states according to the need for development assistance, the commitment to initiate neoliberalization, and the degree to which states are capable and "fair" partners in the use of development resources (USAID 2004b). This taxonomy is summarized in Table 2. In these frameworks, USAID identifies relatively weak institutions, particularly those necessary to establish and maintain market openness and political stability, as the crux of underdevelopment. Running across such taxonomies is a consideration of "strategic states," a designation that depends less on USAID objectives than on the geostrategic and foreign policy goals of the US executive and Congress. The agency recognizes that the determination of which developing states are considered strategic is a matter for other US state institutions, but also notes:

Increasingly, the primary foreign policy rationale for assistance may be matched by or indistinguishable from the developmental or recovery objectives. Thus, the strategic allocation of ESF [Economic Support Fund] and like resources will begin to benefit from the same principles of delineation, selectivity and accountability proposed in this White Paper (USAID 2004b:21). 2

Incorporating developing states into networks of neoliberal globalization is, in this view, the essence of producing and maintaining security in line with US foreign policy objectives.
This understanding of the link between development, trade, security, and state weakness is echoed in the strategies of other US state institutions, most notably USTR (see USTR 2001). USAID articulates development progress and improved security in terms of the facilitation of liberalized market relations by stable developing state institutions. While more candid interviews with USAID officials indicate that not everyone at the agency is on board with this approach, it has nonetheless become official strategy, and presents a serious contradiction, as development comes to depend on internationalizing and liberalized market forces, even as these remain dominated by predatory finance capital (Harvey 2005; McMichael 1999, 2000a). Internationalizing market relations are fundamentally unstable and, as a means of achieving security outside the narrow concerns of capitalist accumulation, completely insecure. A brief examination of how food security fits into USAID strategies regarding trade, security, and state weakness demonstrates this.

USAID AND FOOD SECURITY

Within state and hegemonic projects of development, the US state often wielded its "green power," instituted in USAID's various food aid programs, as a bludgeon. From its outset, USAID (1966:16) declared that food assistance should ensure "that America's great agricultural abundance is put to work alleviating hunger and malnutrition, encouraging social and economic progress, furthering international trade, and advancing the foreign policy interests of the United States."

Focusing on Central America, Garst and Barry (1990) conclude that food aid has never systematically and effectively remedied food insecurity in developing and conflict-torn states because of the contradictory objectives which such aid was meant to fulfill, as well as the fact that other US policies have often produced or exacerbated food insecurity. Food aid was simultaneously supposed to alleviate hunger, build export markets, provide emergency humanitarian relief, improve agricultural production and efficiency, promote US foreign policy goals, stabilize currencies, reward political allies, and dump surplus commodities (Garst and Barry 1990:6).

By the early 1990s, in the context of the stringent criticisms of USAID outlined above, agency food aid programs began to take on forms reflecting aspects of neoliberalization. One prominent example was the increased use of "food-for-work" programs, which pay aid beneficiaries in food commodities rather than money wages for labor on infrastructural and community improvement work (Garst and Barry 1990:131). Food-for-work programs remain common today, and USAID (2005b:14) has touted their success in defusing violence among Indonesia's urban poor during the 1998 economic crisis, providing jobs that proved more attractive than the cash payments extremist groups offered in their recruitment activities. Broadly speaking, such programs have not proven effective in alleviating hunger or fostering economic and institutional development in the long term, and often violate workers' legal right to fair wages. They do, however, fit neatly within neoliberal emphases on personal responsibility, institutional capacity, and workfare in the reconfiguration of development strategies, and ignore the fact that greater food insecurity is itself one possible (and perhaps likely) result of neoliberalization.
Neoliberalization challenges developmentalist concepts and practices of food security - long understood as the security of the individual's access to food, with food as an entitlement of the social contract built into the national state - but only to the extent that the national state becomes a market facilitator rather than a guarantor of such entitlements. As Watts (2000:204) argues, "[f]ood security or famine proneness are the products of historically specific networks of social entitlements," networks which come under intense pressure and must be reworked as development institutions such as USAID internalize and institute US-led state and hegemonic projects of neoliberalization, including the understanding of underdevelopment as a US national security risk. As part of its reorientation toward trade and security, then, USAID (2005b:4) has made food security both a concrete objective of the agency's development mission and a proxy for measuring state fragility, arguing that "economic instability, food insecurity, and violent conflict…are usually symptoms of the failure of governance in fragile states." Conversely, food security is a product of good governance, itself the result of appropriately liberalized, market-based reforms and integration into global networks of capital and US security infrastructures.

This has three primary implications for the future of USAID food aid and broader understandings of food security. First, it means declining reliance on programs such as PL 480, and a shift in the intent of such aid. Funding for PL 480 has decreased sharply in recent years, while more of the money allocated for food aid has been directed to emergency humanitarian relief (USDA 2004:6). 4 Second, it means that the trade-as-development orientation of USAID programs has altered the agency's strategic selectivity with regard to food security. Except in cases of extreme and acute food shortages, food security is to be achieved through personal responsibility exercised through market relations rather than through the management and distribution of food surpluses by the state. This is a hallmark of the "good governance" neoliberalization produces, despite the volatility of international markets. Finally, this understanding of food security exemplifies the narrow conception of security dominant within the US state, casting poverty and the absence of linkages to networks of capitalist accumulation as weakness, and this weakness as insecurity for US and capital interests around the globe.

The reworking of the relations between security and development by and through state development institutions such as USAID produces a new cartography of development, both in the accepted relations between market, state, and civil society, and in the flows of resources and expertise to states targeted for development assistance. Such designation is not only a matter of need: as USAID (2004b, 2005b, 2006) makes clear, geostrategic concerns are an important factor in directing flows of aid, as is the presence of political will to undertake neoliberalization. The freedom to accumulate capital on a global scale, and to do so without the threat of state interference or social resistance, becomes the object of such security concerns, not the security of individuals and communities from hunger, depredation, and exploitation.
CONCLUSION

I have provided an examination of the way in which a particular US state institution, the US Agency for International Development, has acted as site and strategy for the reconfiguration of state and class-relevant practices regarding development, trade, and security. This reconfiguration, occurring in and through the structure and strategic selectivity of the US state, forms a key part of a broader transition between state and hegemonic projects, and produces a new cartography of development that centers internationalizing market relations (Jessop 1990, 2001, 2002; McMichael 2002a; Peet and Watts 1993). While this is not to suggest that this transition is complete or total, it does emphasize the ways in which a relative unity of action and ideological commitment - in this case to trade liberalization and a particularly narrow definition of security - is enforced in and through state institutions. USAID, as a pivotal state agency with a great deal of power over dominant understandings and practices of development, demonstrates one way in which the production and maintenance of neoliberalization occurs - in this case, through the adoption of trade capacity building, which posits liberalized trade as the only appropriate path to economic development, and a security discourse that casts underdevelopment as a national security threat. The example of food aid and food security begins to illustrate how this configuration centers the needs of internationalizing capital and the geopolitical concerns of the US state - not new in the work of USAID, but to be achieved in markedly new ways. This paper is an initial foray into identifying specific ways in which class-relevant struggles have coalesced around USAID's internal constitution and external relations; the next task is to identify how these struggles may be advanced to promote something beyond the democracy of the marketplace and the security of capitalist accumulation.

ACKNOWLEDGMENTS

The author would like to thank Scott Kirsch, Tod Rutherford, Noel Castree, and two anonymous referees for comments on previous versions of this paper.

NOTES

1 These are three bilateral and regional FTAs the US is currently pursuing or on which it has completed negotiations. The Andean Pact includes Bolivia, Colombia, Ecuador, and Peru. CAFTA-DR, the Central American Free Trade Agreement, includes Costa Rica, the Dominican Republic, El Salvador, Guatemala, Honduras, and Nicaragua. SACU, the Southern African Customs Union, includes Botswana, Lesotho, Namibia, South Africa, and Swaziland.

2 The Economic Support Fund is an aid account managed by the State Department and implemented by USAID. Writing for the Brookings Institution, Lancaster and Van Dusen (2005:15) describe ESF as "grant aid originally intended to ease the burden of security expenditures for US friends and allies abroad - especially in the Middle East - but increasingly used to fund a number of other activities associated with relief and reconstruction, development, and democracy promotion."
Table 2

Transforming Countries: Low or lower middle income states; US assistance to build good governance and sustainable progress

Developing Countries: Low or lower middle income states; US assistance to strengthen democratization and basic economic growth and poverty reduction

Rebuilding Countries: States in or emerging from and rebuilding after internal or external conflict; US assistance to stabilize governance and lay foundations for development progress

Restrictive Countries: States of concern where there are significant governance issues; US assistance to empower civil society and reduce such states' potential negative impacts on global and regional stability

Global or Regional Programs: Activities that advance the five aid objectives, transcend a single country's borders, and are addressed outside country strategies; US assistance to achieve generalized development goals

Source: USAID 2006
Understanding Social Insurance: Risk and Value Pluralism in the Early British Welfare State

This article seeks to make two contributions to the understanding of social insurance, a central policy tool of the modern welfare state. Focusing on Britain, it locates an important strand of theoretical support for early social insurance programs in antecedent developments in mathematical probability and statistics. While by no means the only source of support for social insurance, it argues that these philosophical developments were among the preconditions for the emergence of welfare policies. In addition, understanding the influence of these developments on British public discourse and policy sheds light on the normative principles that have undergirded the welfare state since its inception. Specifically, it suggests that the best model, or normative reconstruction, of social insurance in this context is a value-pluralist one, which pursues efficiency and equality or solidarity, grounded in group-based perceptions of risk.

Social insurance - the provision of event-conditioned benefits through a publicly operated system of contributions and distribution - has long been a central policy tool of the modern welfare state. Scholars have advanced a number of normative explanations for the practice, including its ability to promote economic efficiency (Barr 1989; Heath 2011), its expression of relational or distributive equality (Anderson 2008; Dworkin 2000; Landes and Néron 2015), and its role in cultivating social solidarity (Lehtonen and Liukko 2015; Liukko 2010). While these aims are not mutually exclusive, prioritizing one or another can lead to different policy outcomes. For instance, a social insurance system that aims to promote efficiency by satisfying individual preferences for security will not necessarily lead to egalitarian results (Heath 2006, 346-48), while a system that aims for solidarity by providing uniform benefits to all may not satisfy the demands of wealthier citizens to insure themselves at desired levels of consumption (Ebbinghaus and Gronwald 2011; Korpi and Palme 1998). As Joseph Heath puts it, "even among the most enthusiastic supporters of the welfare state there are several different theoretical reconstructions of the normative commitments that are taken to underlie it, all of which are in tension with one another" (Heath 2011, 14).

Acknowledging the force of this observation, this article seeks to make two contributions to the understanding of social insurance. First, it argues that the best normative account of this practice is a value-pluralist one, which pursues efficiency as well as equality or solidarity, grounded in group-based perceptions of risk. Focusing on the emergence of social insurance in Britain, it argues that a plurality of principles found expression in public debates and policies at this time. Such a pluralist reconstruction better characterizes the emergence of the practice and its purposes than a model that focuses solely on any one principle. Second, the article locates an important but little remarked strand of theoretical support for early social insurance programs in antecedent developments in mathematical probability and statistics. This strand of thinking originated with frequentism, an interpretation of probability that emerged in the mid-nineteenth century and enjoyed particular prominence in Britain.
Frequentism helped to justify a collective response to uncertainty, and was moreover linked with both utilitarianism and developments in statistical thinking. While by no means the only source of support for social insurance, these philosophical influences were among the preconditions for its emergence. Moreover, understanding the character of these influences sheds light on the normative principles that have undergirded key elements of the welfare state since its inception. While others have well noted the plurality of values or aims served by welfare state institutions (see Goodin et al. 1999), this paper adds an additional source of support for such pluralism in accounts of the interpretation and quantification of risk itself.

Focusing on the emergence of social insurance in Britain has two advantages for the project of normative reconstruction. First, it offers a valuable test case for a public goods or efficiency model of the welfare state, which contends that the primary purpose of a major set of welfare policies is to provide goods that citizens demand but that are undersupplied or imperfectly supplied by private markets (Heath 2011). Since today the United Kingdom is generally regarded as a liberal welfare regime, in which the market plays a dominant role and the decommodifying effects of social transfers are limited (Esping-Andersen 1990), it should be an exemplar of the market-failures rationale for government intervention. If, however, it turns out that the emergence of welfare state institutions there cannot be explained exclusively or even principally in terms of efficiency, this could provide evidence against a strict public goods view. 1 Second, if we are able to identify a plurality of normative principles in the practice of social insurance itself, this makes it more likely that normative pluralism characterizes the welfare state as a whole, given that the latter comprises a range of other programs whose aim is more explicitly egalitarian than social insurance.

The argument proceeds as follows: the first section offers an introduction to three seminal social insurance policies enacted in Britain around the turn of the twentieth century, as well as the variety of arguments invoked to support them. The three ensuing sections examine the history of probability theory, introducing frequentism and its implications for accounts of insurance, as well as echoes of its class-based approach to risk management in subsequent statistical thought. 2 The final section returns to the realm of practice, focusing on reverberations of these developments in British economic discourse and public policy. 3 This intellectual history reveals that even risk-pooling justifications for welfare reflect a plurality of normative aims. The conclusion draws lessons from this historical analysis for contemporary thinking about the welfare state, suggesting that a market-failures reconstruction may not fully account for the claims of equality and solidarity that have long been prominent in defenses of social insurance.

The Emergence of Social Insurance

Before turning to the development of social insurance, it is important to explain why examining the history of welfare policies is crucial to the contemporary project of normative reconstruction. As Heath puts it, the task of normative reconstruction is to offer an "account of the normative purposes that are already implicit in the practices of the welfare state."
Such an account should be informed by what the state currently does as well as why it does those things. Following Jürgen Habermas, Heath evaluates each of the contending models using a standard he calls "expressive adequacy," which comprises three prongs: first, whether major welfare state activities can be described as serving a particular normative purpose; second, whether that normative purpose played a role in the emergence of the relevant policies or institutions; and third, whether the model enhances our "normative self-understanding," thereby enabling us to better achieve our goals (Heath 2011, 14, 28). This standard makes clear that understanding the emergence of various institutional forms, and in particular their guiding principles and aims, is central to the effort to normatively reconstruct them. We therefore begin with a brief survey of some of the purposes that drove early British social insurance programs before turning to the philosophical and technical developments that helped to support them.

The idea of social insurance did not originate in the nineteenth century. A number of prominent proposals date to the time of the French Revolution (Condorcet [1795] 2012; Jones 2005, 34-36), and the idea of mutual provision against contingency goes back even further (Cordery 2003, 13-21; Ismay 2018, 23-46). In Britain, guilds had long been a source of aid for working people, to be succeeded by cooperative associations, trade unions, and friendly societies, which typically provided benefits in the event of poor health or death (Cordery 2003; Ismay 2018). The earliest plans for social insurance proposed to extend the logic of mutual provision to larger groups or to society as a whole, employing the newly developed calculus of probabilities (Jones 2005, 17-35). 4 These proposals grew out of concern for the implications of new economic realities, in particular, their influence on the working poor, reliant on wage labor and consequently vulnerable to any number of disruptions. Probabilistic mutual insurance, with its premiums based on mathematical likelihoods, promised to reduce this vulnerability by pooling risks: provided enough similarly situated individuals join together, creating a large common fund from their many small contributions, they may equitably share the burdens of a misfortune that happens to strike any one of them (Laplace [1825] 1994). 5 (A schematic rendering of this pooling logic appears at the end of this section.)

The enactment of full-fledged social insurance policies would have to wait until the 1880s, however, when several European countries instituted national-level schemes and inaugurated a trend that would spread across industrialized states (Baldwin 1990, 55-106). As François Ewald has shown in the context of the French welfare state, many of these developments drew support from probabilistic and statistical thinking of the time (Ewald 1986). 6 As governments increasingly collected information about a variety of economic and social phenomena, and as mathematical developments allowed for sophisticated uses of those data, the notion that the state could insure its citizens against various forms of misfortune became increasingly prominent and plausible. In Britain, these trends helped set the stage for three seminal pieces of legislation: the Workmen's Compensation Act of 1897, the Old Age Pensions Act of 1908, and the National Insurance Act of 1911, which provided a significant expansion of health insurance coverage and (as will be the focus here) a more limited unemployment insurance scheme.
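The pooling logic invoked above can be rendered schematically in modern notation (the notation is mine; nothing of the kind appears in the period sources). Suppose each of \(n\) similarly situated members faces an independent loss \(X_i\) with common mean \(\mu\) and variance \(\sigma^2\), and the common fund divides the total loss equally among them:

\[
\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad \mathbb{E}[\bar{X}_n] = \mu, \qquad \operatorname{Var}(\bar{X}_n) = \frac{\sigma^2}{n}.
\]

Each member's expected burden is unchanged, while its variability shrinks as the pool grows; the many small contributions convert a potentially ruinous individual risk into a predictable collective one.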
Many factors precipitated the enactment of these laws, including greater awareness among the middle classes of the reality of working poverty; changing elite views about the role of the state in addressing insecurity; and the growing role of labor and working-class organizations, to which elites had new incentives to respond (Hay 1975, 25-29; Orloff 1993, 153-171). The philosophy of liberalism had also evolved, moving away from the ideal of self-help and the harsh deterrence of the poor law system toward a new fusion of individualist and collectivist values known as the New Liberalism (Freeden 1986; Vincent and Plant 1984; Weinstein 2007). What is noteworthy for our purposes is that all three of the schemes in question invoked the logic of insurance in some respect. They also garnered support from both economic and egalitarian arguments.

Workers' compensation was at various points framed both as a form of recompense for service to national productivity, particularly in the face of foreign competition, and as a tool for a kind of redistribution, shifting "a fair portion of the loss sustained" from the injured to his employer, as one government report put it (Great Britain-Home Office 1904, 13; see also Bartrip and Burman 1983; Moses 2018, 129). Pensions were justified as promoting workers' personal well-being, due to the fact that the young may not save for old age, and as a means to protect the especially vulnerable from devastation (Churchill [1909] 1973b; Harris 2004, 157; Macnicol 1998, 162-63). Finally, unemployment coverage was framed both as a means to help individual workers cope with unpredictable risks and as an expression of solidarity, a recognition that any individual's own welfare cannot be promoted without regard for the larger collective of which he is a part.

The following three sections aim to show that thinking about probability and statistics in Britain around this time lent support to both sets of normative arguments. On one hand, such thinking justified social insurance as a means to reduce economic uncertainty for individuals, ensuring that they could sustain themselves in times both good and bad. On the other hand, it affirmed the distributive fairness of insurance and the importance of mutual support among groups of risk-prone workers. While developments in the interpretation and calculation of risk were not the only source of support for such arguments, they were one such source, justifying social insurance both as a reflection of personal prudence and as a means to fairly distribute the burdens of industry within and among various groups.

It is important to note that the argument presented here does not purport to satisfy a test of direct influence between developments in probability theory and social policy outcomes. One such test, proposed by Quentin Skinner, would in this case require showing that those responsible for policy decisions had studied frequentist views, could not have found the relevant doctrines anywhere but in frequentism, and could not have arrived at those doctrines independently (Skinner 2002, 75-76; see also Toye 2010, 162-63). It is true that many prominent economists and political actors involved in formulating welfare policy in Britain at this time were familiar with frequentist views or with the statistical advances made in their wake.
Nevertheless, the more modest claim here is that one finds echoes of frequentism's flexible class-based approach to risk management in the discourse surrounding social insurance, particularly among policymakers and politicians. Developments in probability and statistics ought therefore to be considered among the sources of support for welfare policy at this time.

The Collective View of Risk

Frequentism offered a novel interpretation of the relationship between personal judgments about risk and the probability values on which insurance arrangements are based. Prior to this time, thinkers had noted the possibility that personal or subjective probability estimates may not align with the averages calculated for groups. Many had set out to address this problem with the concept of moral expectation, influentially proposed in 1738 by Swiss mathematician Daniel Bernoulli. Bernoulli had argued that an individual with fewer initial assets will be warier of a loss than someone who starts out with a greater fortune, and that for any individual, the pain of losing money will exceed the pleasure of winning it (Bernoulli [1738] 1954). In the context of insurance, moral expectation was thought to measure the utility that the individual derives from being insured, including whatever exceeds the strict monetary value of the contract itself (Laplace [1825] 1994). This concept thus allowed thinkers to reconcile the inherently aggregative character of group insurance with the need to justify it to each individual participant: since the insured derives personal utility from the contract, it is not necessarily unfair for her to contribute more for her coverage than the strict cost of her own risk.

With the emergence of the frequency theory in the 1840s, earlier accounts of probability and insurance faced a philosophical challenge. Frequentism was initially worked out during the late 1830s and early 1840s, during a rising tide of philosophical empiricism in Britain, France, and elsewhere. In Britain, this period saw the creation of a number of influential statistical societies, including those in Manchester and Liverpool, as well as the Statistical Society of London (now the Royal Statistical Society) in 1834. The latter was founded by a group of Cambridge-affiliated scholars who, as Lawrence Goldman argues, sought to create a new, inductive style of social and economic analysis (Goldman 1983). It was before this society that Charles Booth presented his pioneering research on London poverty (Booth 1887). Hubert Llewellyn Smith, whose work on unemployment insurance will be taken up below, was among the researchers who helped carry out Booth's study.

Frequentism both reflected and furthered this new inductive spirit. Its earliest expositors, including Robert Leslie Ellis and John Stuart Mill in Britain and Antoine Augustin Cournot in France, began with an apparently straightforward proposition: a probability value is the observed ratio of successes over the total number of trials in a series of relevantly similar occurrences (Porter 1986, 77-78). In contrast to many prior accounts, then, frequentists held that probabilities do not reflect the individual's state of mind but rather measure occurrences in the world (see Laplace [1825] 1994). Several features stand out as broadly distinguishing the frequentist understanding of probability as it emerged and developed in the latter half of the nineteenth century.
These include an emphasis on empirical observation over subjective belief, a reluctance or refusal to assign probability values to single instances, and an understanding of randomness as a property of events within a properly defined series. Ultimately, what makes frequentism significant for the history of social insurance is its acknowledgment that all probability estimates are conditional on the prior identification of a series or class. As a result, frequentism supported a class-based approach to insurance, prioritizing groups of citizens over individuals and demoting claims about personal responsibility for some misfortunes.

Those who adopted a frequentist interpretation at this time expressed a range of epistemological views. Ellis, for example, a Cambridge-educated mathematician and editor of the works of Francis Bacon, adopted an idealist interpretation of the foundations of probability; John Venn, perhaps the most prominent frequentist of the era, rejected claims of a priori truth (Verburgt 2014). Nevertheless, they shared an insistence that a probability value is a ratio or frequency derived from a series of occurrences, each of which is uncertain in isolation. In Ellis's words, our judgments of probabilities depend not on the "fortuitous and varying circumstances of each trial" but rather on the natural fact that "on the long run, the action of fortuitous causes disappears" (Ellis [1842] 1849, 3; see also Cournot 1843, 185; Venn [1888] 2006).

Frequentism consequently called into question the relevance of probability values to individual events. As Venn put it, "for bearing in mind that the employment of probability postulates ignorance of the single event, it is not easy to see how we are to justify any other opinion or statement about the single event than a confession of such ignorance" (Venn [1888] 2006). In assigning a mathematical expectation to any individual, he explained, the frequentist intends "nothing more than to make a statement about the average of his class" (151). Although Venn denied the existence of fixed natural types, he did suggest that imperfect series exist in nature, and that these could be replaced with refined or idealized versions for the purposes of statistical analysis (Hájek 1996, 218-19; Verburgt 2014, 191-93).

Frequentism was also accompanied by a distinctive understanding of what it means for events to be random. Specifically, it supported a view of randomness as a uniform distribution of individual trials within a properly defined series. As Venn explained, randomness presumes some agent, human or other, operating within a set of limits such that it is as likely to generate any given outcome as any other (Venn [1888] 2006). Charles Sanders Peirce, the American pragmatist philosopher, similarly defined a random sample as one "taken according to a precept or method which, being applied over and over again indefinitely, would in the long run result in the drawing of any one set of instances as often as any other set of the same number" (Peirce 1883, 152; see also Keynes 1921, 331-32).

Frequentism and Insurance

It is not that frequentism as it was understood at this time entails a rejection of personal utility as a justification for decisions about insurance. Rather, it is that frequentism addressed the perceived need for such a value by first aligning subjective and objective expectations via the epistemic priority of the class.
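The core frequentist claim admits a compact modern gloss (the symbols are mine, not the frequentists' own): if an attribute \(A\) occurs \(m_n\) times among the first \(n\) members of a suitably defined series or class \(C\), its probability is the limiting ratio

\[
\Pr(A \mid C) = \lim_{n \to \infty} \frac{m_n}{n},
\]

so that probability attaches to the series as a whole, and any value assigned to a single case is conditional on the class in which that case has been placed.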
In addition, in Venn and Cournot, the rejection of moral expectation was linked to an aggregative approach to social welfare, which focused more on the distributive effects of insurance as an institution than on the fairness of any individual contract (Cournot 1843, 334-37; Venn [1888] 2006).

Venn recognized that for insurance to remain personally reasonable on a frequentist view, it would require a different justification from the one offered by many previous accounts. One possible approach is for the individual to consider his own actions as a series, and to find that the "equalization of his gains and losses, for which he cannot hope in annuities, insurances, and lotteries taken separately, may yet be secured to him out of these events taken collectively" (Venn [1888] 2006). This approach is problematic, however, with respect to events that the individual will experience infrequently or only once. More promising in such cases is Venn's suggestion to "suppose the existence of an enlarged fellow-feeling," or an identity with the other members of one's group (149). On this account, the reasonableness of insurance hinges on each person's ability to see himself first and foremost as a member of the series or class, and to enlarge his own interest to encompass the group.

Peirce, for his part, explained that because "probability essentially belongs to a kind of inference which is repeated indefinitely," "there can be no sense in reasoning in an isolated case, at all" (Peirce [1878] 1955). He gave the example of choosing a card from a pack of twenty-five red cards and one black, or twenty-five black and one red. If choosing red will bring lasting happiness and black everlasting sorrow, one will clearly opt to pick from the first pack. Yet on Peirce's view, there is no valid inference that justifies that choice. Because the exercise is only repeated once, there is no "real fact" whose existence gives truth to the statement that if he draws from one pack, a particular color will likely appear. The example, he argues, therefore illustrates that only by having enlarged interests - by caring "equally for what was to happen in all possible cases of the sort" - is it possible to act logically in choosing from the red pack (160-61).

Frequentism thus implied the epistemic priority of the class, variously defined. The truly reasonable person will ground her decisions in the "social principle," caring for every other similar case in the same way that she cares for her own. Such class-based solidarity rests on a kind of interpersonal identity rather than an individualized risk. It is also flexible, based on an admission that the insured's designated reference class can vary according to the insurer's needs and available information, as well as over time (Venn [1888] 2006, 224-31). As we will see, this claim found resonance in statistical developments that followed in frequentism's wake and helped support early welfare interventions.

There is also a historical connection of note between probability theory and the early welfare state. Utilitarian philosophers and political economists, particularly in Britain, took considerable interest in the foundations of probability. Like frequentism, their approach to political economy took an aggregative approach to individuals in the name of a common or collective good, and rested on an abstract assumption of equality that allowed for such aggregation.
This is not to say that a frequentist view of probabilities necessarily lends itself to utilitarian economics. Rather, the point here is that these two families of ideas were worked out in close proximity, with important overlaps between them. 7 Thus, in his final edition of the Logic, Venn recommended utilitarianism as the successor to and fulfillment of the concept of moral expectation, in that it answers the question of which "distribution of wealth tends to secure the maximum of happiness" (392-93). For example, if the disutility of a losing gambler exceeds the utility of the winner, then overall happiness has decreased, and what is proved is that "inequality is bad, on the ground that two fortunes of £50 are better than one of £60 and one of £40" (390). The real problem with gambling, therefore, is "its tendency to the increase of the inequality in the distribution of wealth," a conclusion that recommends the "Socialist's ideal as being distinctly that which tends to increase happiness" (391, 392). Arthur Pigou, whose analysis of market failures would prove influential in justifying certain forms of welfare policy, also invoked diminishing marginal utility of income in a later work to argue that significant inequalities of wealth entail overall social losses (Pigou 1935, 121; see also Harris 2012, 87-88; Medema 2010).

In making his remarks, Venn credited political economist and statistician Francis Ysidro Edgeworth for having discovered the theoretical successor to moral expectation, a reference that is revealing of the intersection between probability theory and political economy in the latter decades of the nineteenth century (Mirowski 1994, 46-47). Edgeworth expressed some reservations about frequentism, but like Venn, he insisted that probabilities should "rest upon precise experience" if they are to be measurable (Edgeworth 1884, 235; see also Porter 1986, 97; Stigler 1986, 309-10). He also explicitly related the epistemology of probability to utilitarian ethics, explaining that both make commonsense assumptions about which cases or events can be considered equal for the purposes of calculation (Edgeworth 1887, 484; see also Sidgwick 1874, 387). This assumption of equality serves the practical needs of the utilitarian calculus in the same way that it serves those of scientific endeavor: it provides "an hypothesis which may serve as a starting point for further observation" and calculation (Edgeworth 1884, 233; Mirowski 1994, 25-27, 40). In conjunction, Edgeworth also proposed an alternative psychological foundation for economic order. "Self-regarding self-interest, the gospel of Adam Smith, is not alone sufficient for industrial salvation: a leaf must be taken from his older and less familiar testament, of which the cardinal doctrine was sympathy."

In analogizing intellectual probability to the utilitarian assumption of equality, Edgeworth was explicit that both must be founded on experience and revised in light of continued observation. Such learning would ultimately promote the utilitarian ideal, wherein the individual recognizes that her own happiness carries equal weight to that of everyone else (Edgeworth 1904, 218). As he later put it, "A man, say, buys a life annuity, insures his life on a railway journey, puts into a lottery, and so on. It may be expected, I think, that the class of actions which cannot be regarded as part of a 'series' will diminish with the increase of providence and sympathy" (Edgeworth 1922, 260).
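Venn's claim that "two fortunes of £50 are better than one of £60 and one of £40" holds for any strictly concave utility function; as a minimal worked check (the logarithmic form is an assumption of mine, not Venn's):

\[
u(x) = \ln x: \quad u(50) + u(50) = \ln 2500 > \ln 2400 = u(60) + u(40),
\]

and more generally, strict concavity gives \(u(50) > \tfrac{1}{2}\,[u(60) + u(40)]\), so any mean-preserving spread of wealth lowers total utility on such an account.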
Insurance and Statistical Innovation

The argument of the preceding section focused on the frequentist justification for class-based risk pooling. This section argues that contemporary developments in statistical reasoning confirmed the frequentists' flexible approach to defining population groups and the risks they face, and that these developments in turn found expression in thinking about insurance. Probabilistically informed arguments for insurance had long held that there is safety in numbers, both for the insurer who spreads his risks and for the insured whose collective experience manifests a reliable average (Laplace [1825] 1994). Yet the explosion of economic and social data over the first half of the nineteenth century had led thinkers to search for more precise ways to quantify these effects. As Stephen Stigler has shown, Edgeworth played a pivotal role in these developments, setting out to apply techniques that had been developed with regard to physical observations to the phenomena of the social world. Whereas "the mean of observations is a cause, as it were the source from which diverging errors emanate," he explained, the "mean of statistics is a description, a representative quantity put for a whole group, the best representative of the group" (Edgeworth 1885b, 139-40; Stigler 1986, 309). If one must select a single quantity to represent many different outcomes, then, the statistical mean is the value that results in the least possible error in doing so.

Edgeworth's first major advance, published in 1885, was to devise a basic significance test for determining whether an observed difference between two proposed population means is a product of chance or some other cause. Following a path laid out by Francis Galton, who had used similar methods to study heredity, Edgeworth developed a method for testing the differences between groups using estimates of the variability or dispersion within each. At the foundation of his discussion was what he called the "law of error, or probability-curve," which represents the degree of divergence between each member of a set of observations or statistics and a central point or mean. While the curve may be more or less spread out from its center in accordance with what Edgeworth called the "modulus" - the square of which, Stigler points out, amounts to twice the variance in modern terms - if a set of statistical numbers fulfills the law of error, then it is "exceedingly improbable" that any member of the set taken at random will deviate from the mean by twice the modulus (Edgeworth 1885a, 183-85; Stigler 1986, 308-13).

Edgeworth explained that the law of error is often reflected in nature, including in the errors of physical observations, such as measurements of the location of a star or samples of balls taken at random from an urn. Yet whereas many before him had mistakenly focused on whether a set of observations manifests a normal distribution, Edgeworth insisted that it is not necessary for statistical analysis that the "raw material of our observations should fulfill the law of error." Rather, what is essential is "that they should be constant to any law." Edgeworth went on to examine various cases in which, by manipulating the observations in some way - rearranging groups to increase the number in each, or dividing a larger set into subsets - "art" can facilitate the "elimination of chance" (Edgeworth 1885a, 187).
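Edgeworth's "law of error" and modulus can be written compactly in modern terms (a reconstruction in notation of my choosing, not Edgeworth's own): letting \(c\) denote the modulus, the probability-curve for deviations \(x\) from the mean is

\[
f(x) = \frac{1}{c\sqrt{\pi}} \exp\!\left(-\frac{x^2}{c^2}\right), \qquad c^2 = 2\sigma^2,
\]

and the chance that a member of the set deviates from the mean by more than twice the modulus is \(\operatorname{erfc}(2) \approx 0.005\), which is the sense in which such a deviation is "exceedingly improbable," and the flavor of threshold at work in the significance test described above.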
Thus, even where the law of error is not fulfilled in nature, the statistician can compare different artfully created groups to distinguish between the effects of chance and the work of other forces.

Edgeworth's approach to classifying and subdividing observations offers another justification for flexible risk management and found echoes in other contemporary analyses. For example, Arthur Bowley's 1901 Elements of Statistics, which a contemporary reviewer praised as the "latest and best summary of [the] methods" of mathematical statistics at the time, explicitly followed Edgeworth's writings (Bowley 1901, 262; Ford 1901, 444). The text went through six editions between its first publication and 1937, by which time Bowley had also authored statistical studies on the causes of economic insecurity (Boyer 2018). In the introduction to his account of the applications of probability to statistics, he noted that he would assume readers' familiarity with Venn's Logic and cited Mill in support of the claim that politicians and economists investigating a phenomenon "are as a rule concerned with its effect on the whole mass, not on the individuals in particular" (Bowley 1901, 262-63). In his brief remarks on insurance, Bowley echoed Venn in noting that "a thing happens by chance, when its occurrence is influenced by many independent causes whose separate effects we cannot trace, as when we draw a card from a thoroughly shuffled pack." Thus, "if we consider a man's death from the point of view of an insurance office," we ignore the individual causes of each instance and speak instead about the overall frequency, or average result of those causes, for the group (267-68).

Such analyses reveal the influence of the currents we have been considering and provide a link to social policy developments of the time. Bowley testified before the Royal Commission on the Poor Laws in 1907, discussing the problem of unemployment. While his proposed solution focused on government relief works and the creation of labor exchanges, he employed statistical analysis to distinguish between unemployment caused by seasonal fluctuations and that caused by periodic change, and argued that public aid should be limited to the latter, secured by allocating funds in "fat years" to "provide for the lean" (Bowley 1907; Great Britain-Royal Commission on the Poor Laws and Relief of Distress 1910, 467). As we shall see in greater detail below, the discourse surrounding unemployment and the ultimate adoption of nationwide insurance against it constitute a powerful example of the resonance of such probabilistic and statistical arguments in the early welfare state.

Risk in the Early British Welfare State

The preceding sections highlighted the character of the frequentist interpretation of probability, its implications for thinking about risk-based solidarities, and the developments in statistical methods that extended many of those insights to social and economic investigations. This section returns to the realm of policy, arguing that the development of social insurance in Britain reflects, among other factors, these intellectual currents. Probabilistic arguments lent support to policymakers seeking to justify their interventions on broadly liberal grounds, and thus in a way that could appeal to wide constituencies while addressing working-class resistance to the harshness of poor relief (Harris 2004, 154-57; Orloff 1993, 153-54).

In Britain, workers' compensation laws initially grew out of the common law of torts.
Yet while judges had experimented with new legal remedies and some jurists had attempted to revise standards of care to protect individuals from harm, it was ultimately the legislature that made the most significant advances toward addressing the dangers of industrialization (Lobban 2010). The Workmen's Compensation Act of 1897 provided compensation for accidents occurring in the course of employment. As Julia Moses notes, courts often chose to interpret the term "work" broadly, resulting in outcomes more favorable toward workers than a strict interpretation of the law would allow. Nevertheless, workers were still required to establish a direct link between their workplace and a particular harm (Moses 2018, 128-41). This made it difficult to address the problem of industrial illnesses. As a 1904 governmental report put it, citing the testimony of factory medical inspector Thomas Legge, the "question of how and when the disease was contracted . . . would in the great majority of cases be so obscure and uncertain that it would probably lead to much dispute and litigation." Although there was considerable evidence of a connection between certain diseases and particular occupations, Legge concluded that a system of sickness insurance, "where the cause of the ailment would be immaterial," was a more fitting response than accident liability (Great Britain-Home Office Departmental Committee on Workmen's Compensation 1904, 45-46).

The prevailing approach changed in 1906, when a newly elected Liberal government pushed for the inclusion of industrial diseases within the compensation scheme, starting with six and eventually expanding the list to thirty. With this, according to Moses, Britain "signaled a move away from linear thinking about occupational risk," toward a "more expansive and flexible" understanding of the concept (Moses 2018, 141). It also shifted further toward the principle of insurance, which conditions benefits on the occurrence of an event rather than the causal story behind it. We have seen that accounts of statistical insurance emphasized the predictability of the aggregate rather than causal knowledge of the individual case. The evolution of workers' compensation shows how a statistical understanding of certain phenomena can militate in favor of social insurance for groups of citizens demonstrably affected by them. 8

Compulsory pensions, enacted in 1908, also took a flexible approach to defining risk and managing it for the benefit of particular groups. The scheme was financed by general taxes and offered payments to every citizen over the age of seventy years with an income of less than a certain amount per year. As E. P. Hennock shows, the National Committee of Organized Labour for Promoting Old Age Pensions for All, which succeeded in electing several members of parliament in 1906, had advocated universal, noncontributory pensions, although in the end, financial concerns, among others, meant that benefits were granted only to the poorer segments of the population and those who passed certain character tests (Hennock 2007, 221-25; see also Thane 1984, 896). 9 The choice of tax financing was partly based on the view that the poor could not be expected to save for themselves and partly a capitulation to friendly societies, who feared that a contributory system would deflect working-class savings and reduce their own ranks (Gilbert 1965; see also Baldwin 1990, 99-100; Harris 2004, 159; Heclo 1974, 175).
Despite its coverage limitations and design, however, the pension scheme can be seen as part of the broader turn away from poor relief and toward social insurance principles. A royal commission had examined the possibility of public pensions in the context of poor law reform, yet by this time, developments in Denmark, Germany, and elsewhere had inspired policymakers to turn to insurance-based solutions instead (Boyer 2018, 214-15; Hennock 1981). Labor groups had also rejected poor relief, which operated on a logic of deterrence that included the severe threat of the workhouse (Hennock 2007, 225; Macnicol 1998, 138-39; Orloff 1993, 7-8, 153). Finally, the argument for pensions received support from recent statistical findings, including those of Booth, which confirmed that large percentages of the elderly were compelled to seek poor relief and that most had not previously done so, belying the claim that poverty was somehow their fault (Booth 1891, 632; Booth 1899, 214). It is also worth recalling in this context that social insurance programs have traditionally allowed for a variety of financing arrangements and do not require any particular relationship between individual contributions and benefits. Many rely on general tax financing to a degree, and some do not require worker contributions (see Baldwin 1990, 63-65; Burns 1949, 29-31). It is therefore difficult to draw a bright-line distinction between event-conditioned coverage that is tied to modest or nonexistent worker contributions and coverage that is financed by general tax revenues, particularly given that all citizens are in principle responsible for contributing at some level to the latter. In addition, a number of prominent arguments at this time presented entitlement as deriving from workers' service to society, a form of contribution "in kind" (Orloff 1993, 157, 178). It is therefore more instructive, in analyzing the emergence of the welfare state during this period, to focus on the trend toward offering event-based provision to groups of vulnerable citizens and thereby removing them from the purview of the poor laws (Burns 1943, 518). Indeed, George Boyer points out that the authors of the pension scheme took pains to ensure that payments were not associated with the stigma of poor relief, including by distributing payments through the post office rather than poor law authorities (Boyer 2018, 195).

In addition to addressing the needs of vulnerable groups, the design of the pension scheme shows how citizens' perceptions of a risk can influence the shape of policy. Winston Churchill, then president of the Board of Trade, noted that the possibility of attaining old age "seems so doubtful and remote to the ordinary man, when in the full strength of manhood, that it has been found in practice almost impossible to secure from any very great number of people the regular sacrifices" to provide for that eventuality. By contrast, "unemployment, accident, sickness, and the death of the breadwinner are catastrophes which may reach any household at any moment," making employee contributions more palatable (Churchill [1909] 1973b). One implication of this observation is that when citizens recognize their equal vulnerability to a risk, or in probabilistic terms, when they are able to understand themselves as members of the same "series," they are more willing to regard the mutual protection of social insurance as serving their personal advantage alongside the demands of distributive fairness.
Unemployment insurance, enacted in 1911, offers another illustration of the features associated with probabilistic thinking of the time, as well as the plurality of principles supporting social insurance policies. The Royal Commission on the Poor Laws had considered unemployment insurance in depth, noting both market failures and solidarity as justifications for state provision. The commission report explained that insurance had long been the "only possible way of providing against the miseries of unemployment," but that many workers, particularly the unskilled and unorganized, found themselves without protection (Great Britain-Royal Commission on the Poor Laws and Relief of Distress 1909, 332). Given the limited reach of trade unions in providing benefits, the report considered whether a state subsidy to poorer unions and friendly societies might enable greater coverage. 10 Its findings were favorable toward subsidized union coverage, but doubtful that existing friendly societies, which lacked the "sense of solidarity amongst the men within a trade," would undertake such insurance even with the help of a subsidy. The report concluded that unemployment insurance would only be possible within a new type of society, comprising workers who knew one another's circumstances and could see themselves as equals with respect to the risk.

Nevertheless, the unemployment insurance scheme adopted in 1911 rested on a broader vision of solidarity than the one identified by the commission. Recent economic depressions had confirmed to the public that certain types of unemployment were the product of forces more powerful than any individual worker (Churchill [1908] 1973a; De Swaan 1988, 183-84, 196). The resulting scheme focused on a number of industries and financed the claims of the unemployed through contributions from those currently working, together with employers and the state (Boyer 2018, 201-2; Gilbert 1966, 281). William Beveridge, one of the main proponents of the policy, and later an architect of the postwar welfare state, emphasized the interdependence and equal vulnerability of those covered. "The regular workman must admit a certain solidarity . . . with the irregular workman, since without the latter the industry by which the former lives could not be carried on" (Harris 1977, 173). One might suspect that frequentism, which defines probability with reference to a series of similar occurrences, would not support risk pooling that extends beyond actuarial classes in this way. Yet a vision of solidarity beyond narrow groups is not incompatible with a frequentist view. Rather, as we have seen, frequentism acknowledged the relativity of reference classes and allowed for their flexible definition in light of available information. Thus, a reference class could incorporate a narrow group or an entire nation, depending on the risk in question and the state of knowledge about it (see Venn [1888] 2006). In the case of unemployment, as many observed at the time, the lack of reliable statistical information made it difficult to rigorously calculate how much members of the different trades should pay for their own protection (Great Britain-Royal Commission on the Poor Laws and Relief of Distress 1909, 416).
While this observation could be seen to militate in favor of restricting coverage to each trade, as the commission concluded, it could also support an enlarged reference class, in which those who cannot clearly establish their own odds are more likely to see themselves as equally vulnerable to the harm. Moreover, a collective understanding of probability need not preclude cooperation among distinct risk classes, either for the sake of some direct advantage to each or based on claims about the fairness of distributing the costs of the risk more broadly. The key point here is that frequentism and associated statistical developments supported a novel focus on such classes as the targets of social policy and, increasingly over time, as the subjects demanding them as well (see also Baldwin 1990; Lindert 2014).

The theoretical foundations of the unemployment scheme were articulated in a 1910 article by Hubert Llewellyn Smith, then commissioner of the newly formed Labour Department of the Board of Trade, who worked closely with Beveridge in preparing the scheme (Harris 1977, 169-85). Llewellyn Smith identified a range of causal factors at work in unemployment. Examining each in turn, he found that, at least in the major trades, the effects of personal characteristics were on average less significant than the effects of broader economic conditions over which the individual has no control. This statistical analysis, eschewing individualized causal inquiry, led to the conclusion that insurance is an appropriate solution for such risks. Even where a given worker's personal inadequacies lead him to be selected for unemployment over another, "it does not necessarily follow that these defects are a principal or even a contributory cause of his unemployment" (Smith 1910, 525). In making his case, Llewellyn Smith also invoked a crucial element of what is now known as the market-failures rationale for social insurance, namely the importance of the "distribution of income in respect of time" (516; see also Gruber 2016, 338-42). A regular income of a certain amount per month will have a different value to the individual than the same total sum received at irregular intervals or all at once. To the extent that markets or private initiative do not allow individuals to achieve such intertemporal security, there may be a case for government to provide it in the form of compulsory insurance. Yet Llewellyn Smith also suggested that economic reasoning could not on its own decide the question. ". . . [T]here is a noble as well as an ignoble ideal of security, and the great problem that lies before us in the future is to distinguish rightly between them and to direct our national policy accordingly" (Smith 1910, 517). Indeed, the essay concluded, the aim of promoting regularity in working-class incomes is but one policy objective among many, to be weighed alongside others that stake a claim to public resources. This article has suggested that a version of distributive fairness and solidarity, understood as the sharing of burdens among those who are vulnerable to a given risk, provided an important normative complement to this logic.

We have thus seen how a statistical understanding of risk allowed for observations about groups of individuals without regard for the causal lineage of each outcome. We have also seen how the frequentist interpretation of probability supported a perception of equal vulnerability and with it a form of solidarity within groups affected by specific hazards.
While this article has not intended to provide definitive evidence of the influence of frequentist ideas on social policy developments, the analysis has shown that early social insurance rested on an awareness of statistical classes as the targets of state interventions and on a willingness to define those classes flexibly in light of available information and social needs. Moreover, explanations of these policies emphasized both their economic advantage for individuals and the fairness of redistributing the costs of development within and among groups. These observations suggest that efficiency-based and distributive arguments for social insurance are not mutually exclusive but have long coexisted, and that probabilistic arguments for risk pooling may lend support to both.

Contemporary Implications

In emphasizing the links between probabilistic and statistical developments and a number of early social insurance programs, the foregoing has intended to highlight an aspect of political discourse that has been neglected by many scholars of welfare policy development. While it is well known that economic and social policy took a more collectivist turn in a number of countries around this time, students of the welfare state have not sufficiently appreciated the role of changing conceptions of probability and risk in enabling and supporting that shift. This argument is of more than historical interest, however. Recently, the notion that risk groups are responsible for generating social policy has gained prominence in the history and political economy of the welfare state (see Rehm 2016; Rehm, Hacker, and Schlesinger 2012). In a similar vein, Robert Goodin has argued that the virtue of understanding welfare as insurance is that "redistribution, of a sort, is thus justified without any appeal to old-style and increasingly unfashionable values" such as altruism or social citizenship (Goodin 2003, 216). The preceding analysis adds further evidence and detail to support this view, and also clarifies the distinct but compatible normative principles that underlie it. While a market-failures reconstruction of the welfare state, and social insurance specifically, has much to recommend it, it does not fully account for the explicitly egalitarian and solidaristic justifications also offered for such policies over the course of their history. Focusing on the philosophy of probability and statistics allows us to perceive the close conceptual connections between these arguments and the ways in which social insurance reflects and advances them.

Returning to the standard of "expressive adequacy," this paper has argued that a plurality of normative concerns better explains the emergence of social insurance than any single principle. Moreover, as Goodin's remark attests, social insurance can still be plausibly described as serving both efficiency and equality goals, grounded in a flexible account of risk-based solidarities. In some cases, this shared risk pool has included the population as a whole, while in others, it has been constituted by one or more subgroups that face similar hazards and among whom pooling is seen to be mutually beneficial or fair. What remains to be shown is how such a pluralist or "mixed" model of social insurance can further our normative self-understanding and the achievement of social or political goals.
While a full demonstration of this point is not possible here, it is likely that the capacity of social insurance to accommodate distinct principles and aims is its greatest strength as an institution. While correcting some forms of market failure, it also expresses an understanding of mutual responsibility grounded in a perception of shared vulnerability to harm. The degree to which different social insurance programs further these normative purposes will differ from place to place and over time. Yet by appreciating how social insurance accommodates, and to some degree harmonizes, such principles, we will be better situated to understand its potential and its distinct role in the modern welfare state.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The authors would like to thank the Edmond J. Safra Foundation for generous financial assistance that aided in the writing of this article.

ORCID iD

Rachel Z. Friedman https://orcid.org/0000-0003-1595-3042

Notes

1. As invoked here, the designation "liberal" is intended not to characterize the early British welfare state but rather to serve the contemporary project of normative reconstruction, which begins with existing institutional arrangements and explains them with reference to their origins and evolution. Although the post-World War II period reveals some deviations from the liberal paradigm as Esping-Andersen portrays it (Esping-Andersen 1990, 25-27, 53), the point here is that a regime now regarded as liberal is more likely than others to be associated with a market-failures rationale. As a result, evidence of values besides efficiency at its foundation may cast doubt on the adequacy of that reconstruction.
2. Portions of this argument draw upon and extend claims made in Friedman (2020, esp. 105-21).
3. It is beyond the scope of this article to consider how the intellectual developments discussed herein influenced the development of social insurance outside Britain. Given the international policy learning that took place at this time (see Orloff 1993, 162), I believe it is reasonable to assume such an influence, but that claim is not defended here.
4. The late eighteenth century also saw the emergence of a movement to reform friendly societies on actuarial lines. While only partially successful, the ideas that motivated this movement stemmed from many of the same sources that inspired early proposals for social insurance (Friedman 2020; Ismay 2018).
5. Throughout this paper, the term "likelihood" is used in its colloquial (rather than its technical, Bayesian) sense to denote a quantified uncertainty or probability value.
6. My understanding of these developments is generally indebted to Daston (1988); Hacking (1990, 2006); Krüger, Daston, and Heidelberger (1990).
7. The New Liberalism also owed a significant debt to utilitarianism, particularly that of John Stuart Mill (Weinstein 2007).
8. The 1911 National Insurance Act further expanded medical and invalidity provision.
According to Hennock, it thus reveals changing attitudes toward state compulsion: while the 1897 act did not set out to prescribe any type of insurance, the latter mandated coverage for sickness and disability through government-approved friendly societies and other insurance organizations (Hennock 2007, 229-36).
9. As Macnicol points out, several of these eligibility restrictions were of limited impact. Some, such as the exclusion of previous recipients of poor relief, soon lapsed or underwent modification; others, such as the exclusion of those who habitually failed to work, were poorly enforced from the beginning (Macnicol 1998, 157-61). This observation supports the claim advanced here that the pensions represented a shift toward the type of event-based coverage more characteristic of social insurance than of poor relief.
10. Similarly, in their minority report for the Commission on the Poor Laws, Sidney Webb and Beatrice Webb (1909, 288) noted that while trade union provision against unemployment offered many advantages, it came at a heavy cost, and "has been found, so far, beyond the means of any but a small minority of the better paid artizans."
Robust Generalised Bayesian Inference for Intractable Likelihoods

Generalised Bayesian inference updates prior beliefs using a loss function, rather than a likelihood, and can therefore be used to confer robustness against possible mis-specification of the likelihood. Here we consider generalised Bayesian inference with a Stein discrepancy as a loss function, motivated by applications in which the likelihood contains an intractable normalisation constant. In this context, the Stein discrepancy circumvents evaluation of the normalisation constant and produces generalised posteriors that are either closed form or accessible using standard Markov chain Monte Carlo. On a theoretical level, we show consistency, asymptotic normality, and bias-robustness of the generalised posterior, highlighting how these properties are impacted by the choice of Stein discrepancy. Then, we provide numerical experiments on a range of intractable distributions, including applications to kernel-based exponential family models and non-Gaussian graphical models.

Introduction

A considerable proportion of statistical modelling deviates from the idealised approach of fine-tuned, expertly-crafted descriptions of real-world phenomena, in favour of default models fitted to a large dataset. If the default model is a good approximation to the data-generating mechanism this strategy can be successful, but things can quickly go awry if the default model is misspecified. Generalised Bayesian updating (Bissiri et al., 2016), and in particular using divergence-based loss functions (Jewson et al., 2018), has been shown to mitigate some of the risks involved when working with a model that is misspecified. Unlike other robust modelling strategies, these methods do not change the statistical model. Instead, they change how the model's parameters are scored, affecting how "good" parameter values are discerned from "bad" ones. This is a key practical advantage, as it implies that such strategies do not require precise knowledge about how the model is misspecified.

This paper considers generalised Bayesian inference in the context of intractable likelihood. An intractable likelihood, in this paper, takes the form $p_\theta(x) = q(x,\theta)/Z(\theta)$, where $q(x,\theta)$ is an analytically tractable function and $Z(\theta)$ is an intractable normalising constant, each depending on the value of the unknown parameter $\theta$ of interest. Classical Bayesian posteriors resulting from intractable likelihood models are sometimes called doubly intractable, due to the computational difficulties they entail (Murray et al., 2006). For example, standard Markov chain Monte Carlo (MCMC) methods cannot be used in this setting, since they typically require explicit evaluation of the likelihood. Doubly intractable posteriors appear in many important statistical applications, including spatial models (Besag, 1974, 1986; Diggle, 1990), exponential random graph models (Park and Haran, 2018), models for gene expression (Jiang et al., 2021), and hidden Potts models for satellite data (Moores et al., 2020). This paper proposes the first generalised Bayesian approach to inference for models that involve an intractable likelihood. To achieve this, we propose to employ a loss function based on a Stein discrepancy (Gorham and Mackey, 2015). As such, this research can be thought of as a Bayesian alternative to the minimum Stein discrepancy estimators of Barp et al. (2019).
The methodology is developed for a particular Stein discrepancy called kernel Stein discrepancy (KSD), and we call the resulting generalised Bayesian approach KSD-Bayes. It is shown in this paper that KSD-Bayes (1) provides robustness to misspecified likelihoods; (2) produces a generalised posterior that is tractable for standard MCMC, or even closed form when an appropriate conjugate prior (which we identify) is used together with an exponential family likelihood; (3) satisfies several desirable theoretical properties, including a Bernstein-von Mises result which holds irrespective of whether the likelihood is correctly specified. These results appear to represent a compelling case for the use of KSD-Bayes as an alternative to standard Bayesian inference with intractable likelihood. However, KSD-Bayes is no panacea and caution must be taken to avoid certain pathologies of KSD-Bayes, which we highlight in Section 3.5.

The paper is structured as follows: Section 2 contains necessary background on generalised Bayesian inference, Stein discrepancy, and robustness in the Bayesian context. Section 3 presents the KSD-Bayes methodology, including conjugacy of the generalised posterior under an exponential family likelihood. Section 4 elucidates the robustness and asymptotic properties of KSD-Bayes. Guidance for practical application of KSD-Bayes is contained in Section 5. The experimental results and empirical assessments are outlined in Section 6, and we draw our conclusions in Section 7. Code to reproduce all results in this paper can be downloaded from: https://github.com/takuomatsubara/KSD-Bayes.

Background

First we provide a short summary of generalised Bayesian inference and Stein discrepancies, putting in place a standing assumption on the domains in which data and parameters are contained.

Standing Assumptions 1: The topological space $\mathcal{X}$, in which the data are contained, is locally compact and Hausdorff. The set $\Theta \subseteq \mathbb{R}^p$, in which parameters are contained, is Borel.

Notation

Measure theoretic notation: For a locally compact Hausdorff space such as $\mathcal{X}$, we let $\mathcal{P}(\mathcal{X})$ denote the set of all Borel probability measures on $\mathcal{X}$. A point mass at $x$ is denoted $\delta_x \in \mathcal{P}(\mathcal{X})$. If $\mathcal{X}$ is equipped with a reference measure, then we abuse notation by writing $p \in \mathcal{P}(\mathcal{X})$ to indicate that the distribution with p.d.f. $p$ is an element of $\mathcal{P}(\mathcal{X})$. For $P \in \mathcal{P}(\mathcal{X})$, we occasionally overload notation by denoting by $L^q(\mathcal{X}, P)$ both the set of functions $f : \mathcal{X} \to \mathbb{R}$ for which $\|f\|_{L^q(\mathcal{X},P)} := (\int_{\mathcal{X}} |f|^q \, \mathrm{d}P)^{1/q} < \infty$ and the normed space in which two elements $f, g \in L^q(\mathcal{X}, P)$ are identified if they are $P$-almost everywhere equal. If $P$ is a Lebesgue measure, we simply write $L^q(\mathcal{X})$ instead of $L^q(\mathcal{X}, P)$. Let $\mathcal{P}_S(\mathbb{R}^d)$ be the set of all Borel probability measures $P$ supported on $\mathbb{R}^d$, admitting an everywhere positive p.d.f. $p$ and continuous partial derivatives $x \mapsto (\partial/\partial x^{(i)}) p(x)$.

Real analytic notation: The Euclidean norm on $\mathbb{R}^d$ is denoted $\|\cdot\|_2$. The set of continuous functions $f : \mathcal{X} \to \mathbb{R}$ is denoted $C(\mathcal{X})$. We denote by $C_b^1(\mathbb{R}^d)$ the set of functions $f : \mathbb{R}^d \to \mathbb{R}$ such that both $f$ and the partial derivatives $x \mapsto (\partial/\partial x^{(i)}) f(x)$ are bounded and continuous on $\mathbb{R}^d$. We also denote by $C_b^{1,1}(\mathbb{R}^d \times \mathbb{R}^d)$ the set of bivariate functions $f : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ such that both $f$ and the partial derivatives $(x, x') \mapsto (\partial/\partial x^{(i)})(\partial/\partial x'^{(j)}) f(x, x')$ are bounded and continuous on $\mathbb{R}^d \times \mathbb{R}^d$. For an arbitrary set $S(\mathcal{X})$ of functions $f : \mathcal{X} \to \mathbb{R}$, denote by $S(\mathcal{X}; \mathbb{R}^k)$ the set of $\mathbb{R}^k$-valued functions whose components belong to $S(\mathcal{X})$.
Let $\nabla$ and $\nabla\cdot$ be the gradient and the divergence operators in $\mathbb{R}^d$. For functions with multiple arguments, we sometimes use subscripts to indicate the argument to which the operator is applied (e.g. $\nabla_x f(x,y)$). For $f$ an $\mathbb{R}^d$-valued function, $[\nabla f(x)]^{(i,j)} := (\partial/\partial x^{(i)}) f^{(j)}(x)$ and $\nabla\cdot f(x) := \sum_{i=1}^d (\partial/\partial x^{(i)}) f^{(i)}(x)$. For $f$ an $\mathbb{R}^{d\times d}$-valued function, $[\nabla f(x)]^{(i,j,k)} := (\partial/\partial x^{(i)}) f^{(j,k)}(x)$ and $[\nabla\cdot f(x)]^{(i)} := \sum_{j=1}^d (\partial/\partial x^{(j)}) f^{(i,j)}(x)$.

Generalised Bayesian Inference

Consider a dataset consisting of independent random variables $\{x_i\}_{i=1}^n$ generated from $P \in \mathcal{P}(\mathcal{X})$, together with a statistical model $P_\theta \in \mathcal{P}(\mathcal{X})$ for the data, with p.d.f. $p_\theta$, indexed by a parameter of interest $\theta \in \Theta$. The Bayesian statistician elicits a prior $\pi \in \mathcal{P}(\Theta)$, which may reflect a priori belief about the parameter $\theta \in \Theta$, and determines their a posteriori belief according to

$$\pi_n(\theta) \propto \pi(\theta) \prod_{i=1}^n p_\theta(x_i). \qquad (1)$$

In the M-closed setting there exists $\theta_0 \in \Theta$ for which $P = P_{\theta_0}$, and the Bayesian update is optimal from an information-theoretic perspective (see Williams, 1980; Zellner, 1988). Optimal processing of information is a desirable property, but in applications the assumption of adequate prior and model specification is often violated. This has inspired several lines of research, including (but not limited to) strategies for the robust specification of prior belief (Berger et al., 1994), the so-called safe Bayes approach (Grünwald, 2011, 2012), power posteriors (e.g. Holmes and Walker, 2017), coarsened posteriors (Miller and Dunson, 2019) and Bayesian inference based on scoring rules (Giummolè et al., 2019). A particularly versatile approach to robustness, which encompasses most of the above, is generalised Bayesian inference (Bissiri et al., 2016) (see also the earlier work of Chernozhukov and Hong, 2003). This approach constructs a distribution, denoted $\pi_n^L$, using a loss function $L_n : \Theta \to \mathbb{R}$, which may be data-dependent, and a scaling parameter $\beta > 0$, according to

$$\pi_n^L(\theta) \propto \pi(\theta)\exp(-\beta n L_n(\theta)). \qquad (2)$$

The so-called generalised posterior $\pi_n^L$ coincides with the Bayesian posterior $\pi_n$ when $\beta = 1$ and the loss function is the negative average log-likelihood; $L_n(\theta) = -\frac{1}{n}\sum_{i=1}^n \log p_\theta(x_i)$. As discussed in Knoblauch et al. (2019), generalised Bayesian inference admits an optimisation-centric interpretation:

$$\pi_n^L = \operatorname*{arg\,min}_{\rho \in \mathcal{P}(\Theta)} \; \beta n \, \mathbb{E}_{\theta\sim\rho}[L_n(\theta)] + \mathrm{KL}(\rho \,\|\, \pi), \qquad (3)$$

where $\mathrm{KL}(\rho \,\|\, \pi)$ denotes the Kullback-Leibler (KL) divergence between two distributions $\rho, \pi \in \mathcal{P}(\Theta)$. This perspective reveals that the standard Bayesian posterior is an implicit commitment to a particular loss function, the negative log-likelihood, and that the weighting constant $\beta$ controls the influence of this loss relative to the prior $\pi$. In particular, under mild conditions $L_n(\theta) \overset{\text{a.s.}}{\to} \mathrm{KL}(P \,\|\, P_\theta) + C$ as $n \to \infty$, for a constant $C$ independent of $\theta$, which reveals that the standard Bayesian posterior concentrates around the value of $\theta$ that minimises the KL divergence between the data-generating distribution $P$ and the model $P_\theta$. Outside of the M-closed setting such concentration is problematic, often leading to over-confident predictions (Bernardo and Smith, 2009). The use of alternative, divergence-based loss functions has been demonstrated to mitigate the negative consequences of a misspecified statistical model, as pioneered in the work on $\alpha$- and $\beta$-divergences in Hooker and Vidyashankar (2014); Ghosh and Basu (2016) and extended to $\gamma$-divergence in Nakagawa and Hashimoto (2020).
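To make the update in (2) concrete, the following is a minimal Python sketch of sampling from a generalised posterior with a generic loss via random-walk Metropolis; the function name, tuning constants and interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def generalised_posterior_rwm(loss, log_prior, theta0, beta, n_data,
                              n_iters=5000, step=0.1, rng=None):
    """Random-walk Metropolis targeting pi(theta) * exp(-beta * n * L_n(theta)).

    `loss` is the data-dependent loss L_n and `log_prior` the log prior density;
    both are evaluated pointwise, so the target is known up to a constant.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    log_target = lambda t: log_prior(t) - beta * n_data * loss(t)
    current = log_target(theta)
    samples = np.empty((n_iters, theta.size))
    for i in range(n_iters):
        proposal = theta + step * rng.standard_normal(theta.size)
        proposed = log_target(proposal)
        if np.log(rng.uniform()) < proposed - current:  # Metropolis accept/reject
            theta, current = proposal, proposed
        samples[i] = theta
    return samples
```

Passing the negative average log-likelihood as `loss` with `beta=1.0` would recover the standard Bayesian posterior (1), consistent with the remark above.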
The properties of the divergence, including any potentially undesirable pathologies associated with it, determine the properties of the generalised posterior (Jewson et al., 2018; Knoblauch et al., 2019). These compelling theoretical results have led to considerable interest in generalised Bayesian inference with divergence-based loss functions, yet the divergences that have been considered to date cannot be computed in the important setting of intractable likelihood.

Stein Discrepancy

In an independent line of research, Stein discrepancies were proposed in Gorham and Mackey (2015) to provide statistical divergences that are both computable and capable of providing various forms of distributional convergence control. The approach is based on the method of Stein (1972), which requires the identification of a linear operator $\mathcal{S}_Q : \mathcal{H} \to L^1(\mathcal{X}, Q)$, depending on a probability distribution $Q \in \mathcal{P}(\mathcal{X})$ and acting on a Banach space $\mathcal{H}$, such that

$$\mathbb{E}_{X\sim Q}[\mathcal{S}_Q[h](X)] = 0 \quad \forall h \in \mathcal{H}. \qquad (4)$$

Such an operator $\mathcal{S}_Q$ is called a Stein operator and $\mathcal{H}$ is called a Stein set. Given a distribution $Q \in \mathcal{P}(\mathcal{X})$, there are infinitely many operators $\mathcal{S}_Q$ satisfying (4). A convenient example is the Langevin Stein operator (Gorham and Mackey, 2015),

$$\mathcal{S}_Q[h](x) = \nabla\log q(x) \cdot h(x) + \nabla\cdot h(x), \qquad (5)$$

where $q$ is the p.d.f. of $Q$. Under suitable regularity conditions on $\nabla\log q$ and $\mathcal{H}$, the Langevin Stein operator satisfies Equation 4; see Gorham and Mackey (2015, Proposition 1). Given $P, Q \in \mathcal{P}(\mathcal{X})$ and a Stein operator $\mathcal{S}_Q : \mathcal{H} \to L^1(\mathcal{X}, Q)$ whose image is contained in $L^1(\mathcal{X}, P)$, the Stein discrepancy (SD) is defined as

$$\mathrm{SD}(Q \,\|\, P) := \sup_{\|h\|_{\mathcal{H}} \le 1} \big| \mathbb{E}_{X\sim P}[\mathcal{S}_Q[h](X)] - \mathbb{E}_{X\sim Q}[\mathcal{S}_Q[h](X)] \big| = \sup_{\|h\|_{\mathcal{H}} \le 1} \big| \mathbb{E}_{X\sim P}[\mathcal{S}_Q[h](X)] \big|, \qquad (6)$$

where the last equality follows directly from (4). Under mild assumptions, SD defines a statistical divergence between two probability distributions $P, Q \in \mathcal{P}(\mathcal{X})$, meaning that $\mathrm{SD}(Q \,\|\, P) \ge 0$ with equality if and only if $P = Q$; see Proposition 1 and Theorem 2 in Barp et al. (2019). Under slightly stronger assumptions SD provides convergence control, meaning that a sequence $(Q_n)_{n=1}^\infty \subset \mathcal{P}(\mathcal{X})$ converges in a specified sense to $Q$ whenever $\mathrm{SD}(Q \,\|\, Q_n) \to 0$; see Gorham and Mackey (2015, Theorem 2, Proposition 3) and Gorham and Mackey (2017, Theorem 8, Proposition 9). An important property of SDs that we exploit in this work is that, unlike other divergences, SDs can often be computed with an un-normalised representation of $Q$. For example, the Stein operators in (5) depend on $Q$ only through $\nabla\log q$, which can be computed when $q$ is provided in a form that involves an intractable normalisation constant. The suitability of SD for use in generalised Bayesian inference has not previously been considered, and this is our focus next.

Methodology

Highly structured data, or data belonging to a high-dimensional domain $\mathcal{X}$, are often associated with an intractable likelihood. Moreover, the difficulty of modelling such data means that models will typically be misspecified. Thus there is a pressing need for Bayesian methods that are both robust and compatible with intractable likelihood. To this end, in Section 3.1 we introduce SD-Bayes, a generalised Bayesian procedure with a loss function based on SD. There are numerous SDs that can be considered, and in Section 3.2 we focus in detail on KSD due to the possibility of performing fully conjugate inference in the context of exponential family models, as described in Section 3.3. Non-conjugate inference and its computational cost are discussed in Section 3.4. However, all statistical divergences have their pathologies, and one must bear in mind the pathologies of KSD when using KSD-Bayes; see the discussion in Section 3.5.
SD-Bayes

Suppose we are given a prior p.d.f. $\pi \in \mathcal{P}(\Theta)$ and a statistical model $\{P_\theta \,|\, \theta \in \Theta\} \subset \mathcal{P}(\mathcal{X})$. Let $\{x_i\}_{i=1}^n$ be independent observations generated from $P \in \mathcal{P}(\mathcal{X})$ and let $P_n := \frac{1}{n}\sum_{i=1}^n \delta_{x_i}$ be the empirical measure associated to this dataset. In this context, the SD-Bayes generalised posterior can now be defined:

Definition 1 (SD-Bayes). For each $\theta \in \Theta$, select a Stein operator $\mathcal{S}_{P_\theta}$ and denote the associated Stein discrepancy $\mathrm{SD}(P_\theta \,\|\, \cdot)$. Let $\beta \in (0, \infty)$. Then the SD-Bayes generalised posterior is defined as

$$\pi_n^D(\theta) \propto \pi(\theta)\exp\big(-\beta n \, \mathrm{SD}^2(P_\theta \,\|\, P_n)\big), \qquad (7)$$

where $\theta \in \Theta$. Here the 'D' superscript stands for discrepancy.

Comparing (7) to (2) confirms that SD-Bayes is a generalised Bayesian method with loss function $L_n(\theta) = \mathrm{SD}^2(P_\theta \,\|\, P_n)$. There is an arbitrariness to using squared discrepancy, as opposed to another power of the discrepancy, but this choice turns out to be appropriate for the discrepancies considered in Section 3.2, ensuring that fluctuations of $L_n(\theta)$ about its expectation are $O(n^{-1/2})$, analogous to the standard Bayesian loss, and permitting tractable computation (Section 3.3) and analysis (Section 4). A discussion of how the weight $\beta$ should be selected is deferred until after our theoretical analysis, in Section 5.

KSD-Bayes

Compared to other Stein discrepancies, KSDs are attractive because they enable the supremum in (6) to be explicitly computed. To define KSD, we require the concept of a (matrix-valued) kernel $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}^{d\times d}$; the precise definition is contained in Appendix A. For our purposes in the main text, it suffices to point out that any kernel $K$ has a uniquely associated Hilbert space of functions $f : \mathcal{X} \to \mathbb{R}^d$, called a vector-valued reproducing kernel Hilbert space (v-RKHS). This v-RKHS constitutes the Stein set in KSD, and we therefore denote this v-RKHS as $\mathcal{H}$. The associated norm and inner product will respectively be denoted $\|\cdot\|_{\mathcal{H}}$ and $\langle\cdot,\cdot\rangle_{\mathcal{H}}$. Let $\mathcal{S}_Q$ be a Stein operator and denote the action of $\mathcal{S}_Q$ on both the first and second argument of a kernel $K$ as $\mathcal{S}_Q\mathcal{S}_Q K$. The following result is a generalisation of the original construction of KSD (Chwialkowski et al., 2016; Liu et al., 2016) to general Stein operators.

Assumption 1. Let $\mathcal{H}$ be a v-RKHS with kernel $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}^{d\times d}$. For $Q \in \mathcal{P}(\mathcal{X})$, let $\mathcal{S}_Q$ be a Stein operator with domain $\mathcal{H}$. For each fixed $x \in \mathcal{X}$, we assume $h \mapsto \mathcal{S}_Q[h](x)$ is a continuous linear functional on $\mathcal{H}$. Further, we assume that $\mathbb{E}_{X\sim P}[\mathcal{S}_Q\mathcal{S}_Q K(X,X)] < \infty$.

Proposition 1 (Closed form of SD). Under Assumption 1, we have

$$\mathrm{SD}^2(Q \,\|\, P) = \mathbb{E}_{X,X'\sim P}[\mathcal{S}_Q\mathcal{S}_Q K(X,X')],$$

where $X$ and $X'$ are independent.

The proof is in Appendix B.1. Note that it is straightforward to verify the assumption that $h \mapsto \mathcal{S}_Q[h](x)$ is a continuous linear functional for each fixed $x \in \mathcal{X}$ once the form of $\mathcal{S}_Q$ is specified; see Appendix B.1.2. KSD is attractive for SD-Bayes since it enables the generalised posterior in Definition 1 to be explicitly computed:

$$\mathrm{KSD}^2(P_\theta \,\|\, P_n) = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n \mathcal{S}_{P_\theta}\mathcal{S}_{P_\theta} K(x_i, x_j). \qquad (8)$$

The resulting generalised posterior will be referred to as KSD-Bayes in the sequel. The explicit form of $\mathcal{S}_{P_\theta}\mathcal{S}_{P_\theta} K$ depends on $\mathcal{S}_{P_\theta}$. The case of $\mathcal{X} = \mathbb{R}^d$ and the Langevin Stein operator in (5) is given by

$$\begin{aligned} \mathcal{S}_{P_\theta}\mathcal{S}_{P_\theta} K(x,x') ={}& \nabla_x\cdot(\nabla_{x'}\cdot K(x,x')) + \nabla\log p_\theta(x)\cdot(\nabla_{x'}\cdot K(x,x')) \\ &+ \nabla\log p_\theta(x')\cdot(\nabla_x\cdot K(x,x')^\top) + \nabla\log p_\theta(x)^\top K(x,x')\,\nabla\log p_\theta(x'), \end{aligned} \qquad (9)$$

where $p_\theta$ is a p.d.f. for $P_\theta \in \mathcal{P}_S(\mathbb{R}^d)$. Clearly, this expression is straightforward to evaluate whenever we have access to derivatives of the kernel and the log density. If the derivatives are analytically intractable, the expression above is amenable to the use of automatic differentiation tools (Baydin et al., 2018).
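As a concrete illustration of (8) and (9), the following is a minimal Python sketch of the KSD V-statistic for the scalar IMQ base kernel $k(x,x') = (1 + \|x-x'\|^2/\sigma^2)^{-\gamma}$ with $K = k\,I_d$; the closed-form derivatives follow from elementary calculus, and all names and defaults are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def imq_stein_gram(X, score, sigma=1.0, gamma=0.5):
    """Gram matrix of the Langevin Stein kernel built from the IMQ base
    kernel k(x, y) = (1 + ||x - y||^2 / sigma^2)^(-gamma), with K = k * I_d.
    X: (n, d) data; score: function returning grad log p_theta, shape (n, d)."""
    n, d = X.shape
    S = score(X)                                  # scores s(x_i), shape (n, d)
    R = X[:, None, :] - X[None, :, :]             # r_ij = x_i - x_j, (n, n, d)
    sq = np.sum(R ** 2, axis=-1)                  # ||r_ij||^2, shape (n, n)
    u = 1.0 + sq / sigma ** 2
    k = u ** (-gamma)                             # base kernel; theta-independent
    c = 2.0 * gamma / sigma ** 2
    grad_y = c * u[..., None] ** (-gamma - 1.0) * R   # grad of k wrt 2nd argument
    # trace of the mixed second derivative: sum_i d^2 k / dx_i dy_i
    trace = c * d * u ** (-gamma - 1.0) \
        - 2.0 * c * (gamma + 1.0) / sigma ** 2 * sq * u ** (-gamma - 2.0)
    term_x = np.einsum('ik,ijk->ij', S, grad_y)   # s(x_i) . grad_y k(x_i, x_j)
    term_y = -np.einsum('jk,ijk->ij', S, grad_y)  # s(x_j) . grad_x k(x_i, x_j)
    return trace + term_x + term_y + k * (S @ S.T)

def ksd_squared(X, score, sigma=1.0, gamma=0.5):
    """V-statistic KSD^2(P_theta || P_n) = (1/n^2) sum_{i,j} k0(x_i, x_j)."""
    return imq_stein_gram(X, score, sigma, gamma).mean()
```

Note that `k`, `grad_y` and `trace` are $\theta$-independent and could be computed once and memoised across MCMC iterations, in the spirit of the cost-reduction strategies of Section 3.4 below.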
Whether KSD-Bayes is reasonable or not hinges crucially on whether KSD is a meaningful way to quantify the difference between the discrete distribution $P_n$ and the parametric model $P_\theta$. Sufficient conditions for convergence control have been established for the Langevin Stein operator, under which the convergence of $\mathrm{KSD}(P_\theta \,\|\, P_n)$ implies the weak convergence of $P_n$ to $P_\theta$ (Gorham and Mackey, 2017, Theorem 8). This provides some preliminary assurance that KSD-Bayes may work; we present formal theoretical guarantees in Section 4. These theoretical results motivate specific choices of $K$ for use in KSD-Bayes, which we discuss in Section 5.

Conjugate Inference for Exponential Family Models

The generalised posterior can be exactly computed in the case of a natural exponential family model when a conjugate prior is used. Let $\eta : \Theta \to \mathbb{R}^k$, let $t : \mathcal{X} \to \mathbb{R}^k$ be any sufficient statistic for some $k \in \mathbb{N}$, and let $a : \Theta \to \mathbb{R}$ and $b : \mathcal{X} \to \mathbb{R}$. An exponential family model has p.m.f. or p.d.f. (with respect to an appropriate reference measure on $\mathcal{X}$) of the form

$$p_\theta(x) = \exp\big(\eta(\theta)\cdot t(x) - a(\theta) + b(x)\big). \qquad (10)$$

This includes a wide range of distributions with an intractable normalisation constant $\exp(a(\theta))$, used in statistical applications such as random graph estimation (Yang et al., 2015), spin glass models (Besag, 1974) and the kernel exponential family model (Canu and Smola, 2006). The model in (10) is called natural when the canonical parametrisation $\eta(\theta) = \theta$ is employed.

Proposition 2. Consider $\mathcal{X} = \mathbb{R}^d$ and the Langevin Stein operator $\mathcal{S}_{P_\theta}$ in (5), where $P_\theta$ is the exponential family in (10), and a kernel $K \in C_b^{1,1}(\mathbb{R}^d \times \mathbb{R}^d; \mathbb{R}^{d\times d})$. Assuming the prior has a p.d.f. $\pi$, the KSD-Bayes generalised posterior has a p.d.f. of the form

$$\pi_n^D(\theta) \propto \pi(\theta)\exp\big(-\beta n\,[\,\eta(\theta)\cdot\Lambda_n\,\eta(\theta) + 2\,\nu_n\cdot\eta(\theta)\,]\big),$$

where $\Lambda_n \in \mathbb{R}^{k\times k}$ and $\nu_n \in \mathbb{R}^k$ are dataset-dependent quantities constructed from the kernel $K$ and the sufficient statistic $t$. For a natural exponential family we have $\eta(\theta) = \theta$, and the prior $\pi(\theta) \propto \exp(-\frac{1}{2}(\theta-\mu)\cdot\Sigma^{-1}(\theta-\mu))$ leads to a generalised posterior $\pi_n^D = \mathcal{N}(\mu_n, \Sigma_n)$, where $\Sigma_n^{-1} := \Sigma^{-1} + 2\beta n\Lambda_n$ and $\mu_n := \Sigma_n(\Sigma^{-1}\mu - 2\beta n\,\nu_n)$.

The proof is in Appendix B.2. That the Gaussian distribution will be conjugate in KSD-Bayes, even in the presence of intractable likelihood, is remarkable and notably different from the classical Bayesian case, albeit at an $O(n^2)$ computational cost. Strategies to further reduce this computational cost are discussed in Section 3.4. It is well known that certain minimum discrepancy estimators, such as the score matching estimator (Hyvärinen, 2005) and the minimum KSD estimator (Barp et al., 2019), have closed forms in the case of exponential family models; it is similar reasoning that has led us to Proposition 2.

Non-Conjugate Inference and Computational Cost

To access the generalised posterior in the non-conjugate case, existing MCMC algorithms for tractable likelihood can be used. The per-iteration computational cost appears to be $O(n^2)$ since, for each state $\theta$ visited along the sample path, the KSD in (8) must be evaluated. However, various strategies enable this computational cost to be mitigated. For concreteness of the discussion that follows, we consider the Langevin Stein operator, for which the Stein kernel decomposes as in (9), up to a $\theta$-independent constant.

Memoisation: The expression (9) depends on $\theta$ only through the terms $\{\nabla\log p_\theta(x_i)\}_{i=1}^n$, of which there are $O(n)$, while all other terms involving $K$, of which there are $O(n^2)$, can be computed once and memoised. The double summation still necessitates $O(n^2)$ computational cost, but this operation is embarrassingly parallel.
Finite rank kernel: Computational cost can be reduced from $O(n^2)$ to $O(n)$ using a finite rank kernel. A useful and important example is the rank one kernel $K(x,x') = I_d$, which reduces (8) to

$$\mathrm{KSD}^2(P_\theta \,\|\, P_n) = \Big\| \frac{1}{n}\sum_{i=1}^n \nabla\log p_\theta(x_i) \Big\|_2^2,$$

and is closely related to divergences used in score matching (Hyvärinen, 2005). Random finite rank approximations of the kernel can also be considered in this context (Huggins and Mackey, 2018).

Stochastic approximation: The construction of low-cost unbiased estimators for (8) is straightforward via sampling mini-batches from the dataset. This enables a variety of exact and approximate algorithms for posterior approximation to be exploited (e.g. Ma et al., 2015). Alternatively, Huggins and Mackey (2018) and Gorham et al. (2020) argued for stochastic approximations of KSD that could be used.

Limitations of KSD-Bayes

A divergence $D(Q \,\|\, P)$ induces an information geometry (Amari, 1997), encoding a particular sense in which $Q$ can be considered to differ from $P$. As such, all divergences exhibit pathologies, meaning that certain characteristics that distinguish $Q$ from $P$ are less easily detected. A documented pathology of gradient-based discrepancies, including the Langevin KSD, is their insensitivity to the existence of high-probability regions which are well-separated; see Gorham et al. (2019, Section 5.1) and Wenliang (2020). To see this, consider a Gaussian mixture model (11), where $\theta \in [0,1]$ specifies the mixture ratio and $\mu \in \mathbb{R}$ controls the separation between the two components. If the two components are well-separated, i.e. $\mu \gg 1$, the gradient $\nabla\log p_\theta$ becomes insensitive to $\theta$ and hence a gradient-based divergence such as KSD will be insensitive to $\theta$, as demonstrated in Figure 1.

[Figure 1: Illustrating the insensitivity to mixture proportions of KSD. Panels (a-c,e-g) display the density function $p_\theta(x)$ from (11) together with the gradient $\nabla\log p_\theta(x)$, the latter rescaled to fit onto the same plot. Panels (d,h) display the discrepancy $\mathrm{KSD}^2(P_\theta \,\|\, P_n)$, where $P_n$ is an empirical distribution of $n = 1000$ samples from the model with $\theta = 0.5$.]

For this reason, caution is warranted when gradient-based discrepancies are used. However, in practice direct inspection of the dataset and knowledge of how $P_\theta$ is parametrised can be used to ascertain whether either distribution is multi-modal. Our applications in Section 6 are not expected to be multi-modal (with the exception of the kernel exponential family in Section 6.3, which was selected to demonstrate the insensitivity to mixing proportions of KSD-Bayes).

A second limitation of KSD-Bayes is non-invariance to a change of coordinates in the dataset. This is a limitation of loss-based estimators in general. In Section 5.1 we recommend a data-adaptive choice of kernel, which serves to provide approximate invariance to affine transformations of the dataset. As usual in statistical analyses, we recommend post-hoc assessment of the sensitivity of inferences to perturbations of the dataset. Despite these two limitations, KSD-Bayes represents a flexible and effective procedure for generalised Bayesian inference in the context of an intractable likelihood. Our attention turns next to theoretical analysis of KSD-Bayes.

Theoretical Assessment

This section contains a comprehensive theoretical treatment of KSD-Bayes. The main results are posterior consistency and a Bernstein-von Mises theorem in Section 4.2, and global bias-robustness of the generalised posterior in Section 4.3.
In obtaining these results we have developed novel intermediate results concerning an important V-statistic estimator for KSD; these are anticipated to be of independent interest, so we present these in Section 4.1 of the main text. Note that all theory is valid for the misspecified regime where $P$ need not be an element of $\{P_\theta : \theta \in \Theta\}$. Moreover, the results in Section 4.1 and Section 4.2 hold for general data domains $\mathcal{X}$. For the entirety of this section we set $\beta = 1$, with all results for $\beta \neq 1$ immediately recovered by replacing $K$ with $\beta K$. The results of this section motivate a specific choice for $\beta$ that is described in Section 5.

Standing Assumptions 2: The dataset $\{x_i\}_{i=1}^n$ consists of independent samples generated from $P \in \mathcal{P}(\mathcal{X})$, with empirical distribution denoted $P_n := (1/n)\sum_{i=1}^n \delta_{x_i}$. The set $\Theta \subseteq \mathbb{R}^p$ is open, convex and bounded. Assumption 1 holds with $Q = P_\theta$ for every $\theta \in \Theta$.

Minimum KSD Estimators

First we present novel analysis of the V-statistic in (8). Note that a U-statistic estimator of KSD was analysed in Barp et al. (2019), but only for the so-called diffusion Stein operator, a variant (or standardisation) of the Langevin Stein operator in (5). Our results for the V-statistic do not depend on a specific form of $\mathcal{S}_{P_\theta}$, and may hence be of independent interest. Despite the bias present in a V-statistic, our standing assumptions are sufficient to derive the following consistency result:

Lemma 1 (a.s. Pointwise Convergence). For each $\theta \in \Theta$, $\mathrm{KSD}^2(P_\theta \,\|\, P_n) \to \mathrm{KSD}^2(P_\theta \,\|\, P)$ almost surely as $n \to \infty$.

The proof is contained in Appendix B.3.1. If we impose further regularity, we can obtain a uniform convergence result. It will be convenient to introduce a collection of assumptions that are indexed by $r_{\max} \in \{0, 1, 2, \dots\}$, as follows:

Assumption 2 ($r_{\max}$). For all integers $0 \le r \le r_{\max}$, the following conditions hold: (1) the map $\theta \mapsto \partial^r \mathcal{S}_{P_\theta}[h](x)$ exists and is continuous, for all $h \in \mathcal{H}$ and $x \in \mathcal{X}$; (2) the map $h \mapsto (\partial^r \mathcal{S}_{P_\theta})[h](x)$ is a continuous linear functional on $\mathcal{H}$, for each $x \in \mathcal{X}$; (3) $\mathbb{E}_{X\sim P}[(\partial^r \mathcal{S}_{P_\theta})(\partial^r \mathcal{S}_{P_\theta}) K(X,X)] < \infty$; where $(\partial^0 \mathcal{S}_{P_\theta}) := \mathcal{S}_{P_\theta}$; note that (2) with $r = 0$ is implied by Standing Assumption 2. In condition (3), the first and second $(\partial^r \mathcal{S}_{P_\theta})$ are applied, respectively, to the first and second argument of $K$, as with $\mathcal{S}_{P_\theta}\mathcal{S}_{P_\theta} K(x,x)$. These assumptions become concrete when considering a specific Stein operator; the case of the Langevin Stein operator is presented in Appendix B.3.5.

Our next results concern consistency and asymptotic normality of the estimator $\theta_n$ that minimises the V-statistic in (8). Assumption 4 specifies the amount of prior mass in a neighbourhood around the population-optimal value $\theta_*$ that is required. This is not a strong assumption, and Appendix B.7 demonstrates how each of Assumptions 2 to 4 can be verified in the case of an exponential family model.

Theorem 1 (Posterior Consistency). Suppose Assumptions 3 and 4 hold. Let $\sigma(\theta) := \mathbb{E}_{X\sim P}[\mathcal{S}_{P_\theta}\mathcal{S}_{P_\theta} K(X,X)]$. Then, for all $\delta \in (0, 1]$, the generalised posterior concentrates around $\theta_*$ with probability at least $1 - \delta$, where the probability is with respect to realisations of the dataset.

The proof is contained in Appendix B.4. Next, we derive a Bernstein-von Mises result. The pioneering work of Hooker and Vidyashankar (2014) and Ghosh and Basu (2016) established Bernstein-von Mises results for generalised posteriors defined by $\alpha$- and $\beta$-divergences. Unfortunately, the form of KSD is rather different and different theoretical tools are required to tackle it.
Miller (2021) introduced a general approach to deriving Bernstein-von Mises results for generalised posteriors, demonstrating how the assumptions can be verified for several additive loss functions $L_n$. Our proof builds on Miller (2021), demonstrating that the required assumptions can also be satisfied by the non-additive KSD loss function in (8).

Theorem 2 (Bernstein-von Mises). Suppose Assumption 2 ($r_{\max} = 3$), Assumption 3, and part (1) of Assumption 4 hold. Let $\tilde\pi_n^D$ denote the p.d.f. of the random variable $\sqrt{n}(\theta - \theta_n)$ for $\theta \sim \pi_n^D$, viewed as a p.d.f. on $\mathbb{R}^p$. Let $H_* := \nabla_\theta^2\,\mathrm{KSD}^2(P_\theta \,\|\, P)\big|_{\theta=\theta_*}$. If $H_*$ is nonsingular,

$$\int_{\mathbb{R}^p} \big| \tilde\pi_n^D(\theta) - \mathcal{N}(\theta; 0, H_*^{-1}) \big| \,\mathrm{d}\theta \to 0,$$

where the a.s. convergence is with respect to realisations of the dataset $\{x_i\}_{i=1}^n$.

The proof is contained in Appendix B.5. These positive results are encouraging, as they indicate the limitations of KSD-Bayes described in Section 3.5 are at worst a finite sample size effect. However, we note that the asymptotic precision matrix $H_*$ from Theorem 2 differs from the precision matrix $H_* J_*^{-1} H_*$ of the minimum KSD estimator from Lemma 4; this is analogous to the fact that Bayesian credible sets can have asymptotically incorrect frequentist coverage if the statistical model is misspecified (Kleijn and van der Vaart, 2012). This point will be addressed in Section 5.2.

Remark 1. The analysis in Sections 4.1 and 4.2 covers general domains $\mathcal{X}$ and Stein operators $\mathcal{S}_P$. Henceforth, in the main text we restrict attention to $\mathcal{X} = \mathbb{R}^d$, but the case of a discrete domain $\mathcal{X}$, and the identification of an appropriate Stein operator in this context, are discussed in Appendix D.5.

Global Bias-Robustness of KSD-Bayes

An important property of KSD-Bayes is that, through a suitable choice of kernel, the generalised posterior can be made robust to contamination in the dataset. This robustness will now be rigorously established. Consider the $\varepsilon$-contamination model $P_{n,\varepsilon,y} = (1-\varepsilon)P_n + \varepsilon\,\delta_y$, where $y \in \mathcal{X}$ and $\varepsilon \in [0,1]$ (see Huber and Ronchetti, 2009). In other words, the datum $y$ is considered to be contaminating the dataset $\{x_i\}_{i=1}^n$. Robustness in the generalised Bayesian setting has been considered in Hooker and Vidyashankar (2014); Ghosh and Basu (2016); Nakagawa and Hashimoto (2020). In what follows we write $L_n(\theta) = L(\theta; P_n)$ to make explicit the dependence of the loss function $L_n$ on the dataset $P_n$. Following Ghosh and Basu (2016), we consider a generalised posterior based on a (contaminated) loss $L(\theta; P_{n,\varepsilon,y})$ with density $\pi_n^L(\theta; P_{n,\varepsilon,y})$, and define the posterior influence function

$$\mathrm{PIF}(y, \theta, P_n) := \frac{\mathrm{d}}{\mathrm{d}\varepsilon}\,\pi_n^L(\theta; P_{n,\varepsilon,y})\Big|_{\varepsilon=0}. \qquad (12)$$

Here the notation $\pi_n^L(\theta; P_{n,\varepsilon,y})$ emphasises the dependence of the generalised posterior on the (contaminated) dataset $P_{n,\varepsilon,y}$. A generalised posterior $\pi_n^L$ is called globally bias-robust if $\sup_{\theta\in\Theta}\sup_{y\in\mathcal{X}} |\mathrm{PIF}(y,\theta,P_n)| < \infty$, meaning that the sensitivity of the generalised posterior to the contaminant $y$ is limited. The following lemma provides general sufficient conditions for global bias-robustness to hold:

Lemma 5. Let $\pi_n^L$ be a generalised Bayes posterior for a fixed $n \in \mathbb{N}$ with a loss $L(\theta; P_n)$ and a prior $\pi$. Suppose $L(\theta; P_n)$ is lower-bounded and $\pi(\theta)$ is upper-bounded over $\theta \in \Theta$, for any $P_n$. Denote $DL(y,\theta,P_n) := (\mathrm{d}/\mathrm{d}\varepsilon) L(\theta; P_{n,\varepsilon,y})\big|_{\varepsilon=0}$. Then $\pi_n^L$ is globally bias-robust if, for any $P_n$, (1) $\sup_{\theta\in\Theta}\sup_{y\in\mathcal{X}} |DL(y,\theta,P_n)|\,\pi(\theta) < \infty$, and (2) $\int_\Theta \sup_{y\in\mathcal{X}} |DL(y,\theta,P_n)|\,\pi(\theta)\,\mathrm{d}\theta < \infty$.

The proof is contained in Appendix B.6.1. Note that standard Bayesian inference does not satisfy the conditions of Lemma 5 in general.
Indeed, when $L(\theta; P_n)$ is the negative average log-likelihood, $DL(y,\theta,P_n) = -\log p_\theta(y) + \frac{1}{n}\sum_{i=1}^n \log p_\theta(x_i)$, and the term $\log p_\theta(y)$ can be unbounded over $y \in \mathcal{X}$. This can occur even if the statistical model is not heavy-tailed, e.g. for a normal location model $p_\theta$ on $\mathcal{X} = \mathbb{R}^d$. In contrast, the kernel $K$ in KSD-Bayes provides a degree of freedom which can be leveraged to ensure that the conditions of Lemma 5 are satisfied; the specific form of $DL(y,\theta,P_n)$ for KSD-Bayes is derived in Appendix B.6.2. This enables us to derive sufficient conditions on $K$ for global bias-robustness of KSD-Bayes, stated as Theorem 3; the proof is contained in Appendix B.6.3. The preconditions of Theorem 3 can be satisfied through an appropriate choice of kernel $K$; see Section 5.1. A comparison of KSD-Bayes to existing robust generalised Bayesian methodologies for tractable likelihood can be found in Appendix D.4. The difference in performance of robust and non-robust instances of KSD-Bayes is explored in detail in Section 6.

Default Settings for KSD-Bayes

The previous section considered $\beta$ to be fixed, but an appropriate selection of $\beta$ is essential to ensure the generalised posterior is calibrated. The choice of $\beta$ is closely related to the choice of a Stein operator $\mathcal{S}_{P_\theta}$ and kernel $K$; the purpose of this section is to recommend how these quantities are selected. If the recommendations of this section are followed, then KSD-Bayes has no remaining degrees of freedom to be specified.

Default Settings for $\mathcal{S}_{P_\theta}$ and $K$

For Euclidean domains $\mathcal{X} = \mathbb{R}^d$, we advocate the default use of the Langevin Stein operator $\mathcal{S}_{P_\theta}$ in (5) and a kernel of the form

$$K(x,x') = M(x)\,\big(1 + (x-x')^\top\Sigma^{-1}(x-x')\big)^{-\gamma}\,M(x')^\top, \qquad (14)$$

where $\Sigma$ is a positive definite matrix, $\gamma \in (0,1)$ is a constant, and $M \in C_b^1(\mathbb{R}^d; \mathbb{R}^{d\times d})$ will be called a matrix-valued weighting function. For $M(x) = I_d$, (14) is called an inverse multiquadratic (IMQ) kernel. The IMQ kernel and the Langevin Stein operator have appealing properties in the context of KSD. Firstly, under mild conditions on $P$, $\mathrm{KSD}(P \,\|\, P_n) \to 0$ implies that $P_n$ converges weakly to $P$ (Chen et al., 2019, Theorem 4). This convergence control ensures that small values of $\mathrm{KSD}(P_\theta \,\|\, P_n)$ imply similarity between $P_\theta$ and $P_n$ in the topology of weak convergence, so that minimising KSD is meaningful. Secondly, and on a more practical level, the combination of Stein operator and IMQ kernel, with $\gamma = 1/2$, was found to work well in previous studies (Chen et al., 2019; Riabiz et al., 2021); we therefore also recommend $\gamma = 1/2$ as a default. The weighting function $M(x)$ facilitates an efficiency-robustness trade-off: if global bias-robustness is not required then we recommend setting $M(x) = I_d$ as a default, which enjoys the aforementioned properties of KSD. If global bias-robustness is required then we recommend selecting $M(x)$ such that the supremum in (13) exists and the preconditions of Theorem 3 are satisfied; see the worked examples in Section 6 and the further discussion in Appendix D.3.

The theoretical analysis of Section 4 assumed that $K$ is fixed, but in our experiments we follow standard practice in the kernel methods community and recommend a data-adaptive choice of the matrix $\Sigma$. All experiments we report used the regularised sample covariance matrix estimator of Ollila and Raninen (2019). The sensitivity of KSD-Bayes to the choice of kernel parameters is investigated in Appendix D.1.
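The default choices above can be assembled in a few lines of Python; the sketch below is illustrative only, and in particular the simple shrinkage covariance is an assumed stand-in for the Ollila-Raninen estimator, while `M` is any user-supplied weighting function.

```python
import numpy as np

def default_sigma(X, shrinkage=0.1):
    """Data-adaptive Sigma for (14): a shrinkage-regularised sample covariance,
    used here as a simple stand-in for the Ollila-Raninen estimator."""
    S = np.atleast_2d(np.cov(X, rowvar=False))
    d = S.shape[0]
    return (1.0 - shrinkage) * S + shrinkage * (np.trace(S) / d) * np.eye(d)

def default_kernel(x, y, Sigma_inv, gamma=0.5, M=None):
    """Default kernel K(x, y) = M(x) (1 + (x-y)' Sigma^{-1} (x-y))^(-gamma) M(y)'.
    M = None corresponds to the identity weighting, i.e. the plain IMQ kernel."""
    r = x - y
    base = (1.0 + r @ Sigma_inv @ r) ** (-gamma)
    if M is None:
        return base * np.eye(x.size)
    return base * (M(x) @ M(y).T)
```

The design choice here mirrors the text: the base IMQ kernel with $\gamma = 1/2$ is the efficiency-oriented default, and robustness is obtained purely by supplying a decaying weighting function `M`, without altering the statistical model.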
Default Setting for $\beta$

For a simple normal location model, as described in Section 6.1, and in a well-specified setting, the asymptotic variance of the KSD-Bayes posterior with $\beta = 1$ is never smaller than that of the standard posterior. This provides a heuristic motivation for the default $\beta = 1$. However, in a misspecified setting smaller values of $\beta$ are needed to avoid over-confidence in the generalised posterior, taking misspecification into account; see the recent review of Wu and Martin (2020). Here we aim to pick $\beta$ such that the scale of the asymptotic precision matrix of the generalised posterior ($H_*$; Theorem 2) matches that of the minimum KSD point estimator ($H_* J_*^{-1} H_*$; Lemma 4), an approach proposed in Lyddon et al. (2019). This ensures the scale of the generalised posterior matches the scale of the sampling distribution of a closely related estimator whose frequentist properties can be analysed when the statistical model is misspecified. Since $P$ is unknown, estimators of $H_*$ and $J_*$ are required. We propose the default $\beta_n$ given in (15), where the matrix $H_*$ is approximated using $H_n := \nabla_\theta^2\,\mathrm{KSD}^2(P_\theta \,\|\, P_n)\big|_{\theta=\theta_n}$, and the matrix $J_*$ is approximated using a corresponding plug-in estimator $J_n$. The minimum of $\beta = 1$ and $\beta = \beta_n$ taken in (15) provides a safeguard against selecting a value of $\beta$ that over-shrinks the posterior covariance matrix, a phenomenon that we observed for the experiments reported in Sections 6.2 to 6.4, due to poor quality of the approximations $H_n$ and $J_n$ when $n$ is small. The above expressions are derived for the exponential family model in Appendix B.7. This completes our methodological and theoretical development, and next we turn to empirical performance assessment.

Empirical Assessment

In this section four distinct experiments are presented. The first experiment, in Section 6.1, concerns a normal location model, allowing the standard posterior and our generalised posterior to be compared and confirming our robustness results are meaningful. Section 6.2 presents a two-dimensional precision estimation problem, where standard Bayesian computation is challenging but computation with KSD-Bayes is trivial. Then, Section 6.3 presents a 25-dimensional kernel exponential family model, and Section 6.4 presents a 66-dimensional exponential graphical model; in both cases a Bayesian analysis has not, to date, been attempted due to severe intractability of the likelihood. In addition, the kernel exponential family model allows us to explore a multi-modal dataset and to understand the potential limitations of KSD-Bayes in that context (c.f. Section 3.5). For all experiments, the default settings of Section 5 were used. An example of KSD-Bayes applied to a discrete dataset is presented in Appendix D.5.

Normal Location Model

For expositional purposes we first consider fitting a normal location model $P_\theta = \mathcal{N}(\theta, 1)$, $\theta \in \mathbb{R}$. Our aim is to illustrate the robustness properties of KSD-Bayes, and we therefore generated the dataset using a contaminated data-generating model where, for each index $i = 1, \dots, n$ independently, with probability $1 - \varepsilon$ the datum $x_i$ was drawn from $P_\theta$ with "true" parameter $\theta = 1$, and otherwise $x_i$ was drawn from $P_y = \mathcal{N}(y, 1)$, so that $y$ and $\varepsilon$ control, respectively, the nature and extent of the contamination in the dataset. The task is to make inferences for $\theta$ based on a contaminated dataset of size $n = 100$. The prior on $\theta$ was $\mathcal{N}(0,1)$. The standard Bayesian posterior is depicted in the leftmost panels of Figure 2, for varying $\varepsilon$ (top row) and varying $y$ (bottom row).
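For readers who wish to reproduce the flavour of this experiment, the following sketch simulates the contaminated dataset and samples the (non-robust) generalised posterior by combining the `ksd_squared` and `generalised_posterior_rwm` helpers from the earlier sketches; the seed, step size, burn-in and $\beta = 1$ are illustrative assumptions, and the conjugate closed form of Proposition 2 could equally be used for this model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Contaminated dataset: x_i ~ N(1, 1) with probability 1 - eps, else x_i ~ N(y, 1)
n, eps, y = 100, 0.1, 10.0
contaminated = rng.uniform(size=n) < eps
X = np.where(contaminated, rng.normal(y, 1.0, n), rng.normal(1.0, 1.0, n))[:, None]

def loss(theta):
    # Score of the location model N(theta, 1): grad_x log p_theta(x) = theta - x
    score = lambda Xmat: theta[0] - Xmat
    return ksd_squared(X, score)               # V-statistic from the earlier sketch

log_prior = lambda t: -0.5 * t[0] ** 2         # N(0, 1) prior, up to a constant
samples = generalised_posterior_rwm(loss, log_prior, theta0=[0.0],
                                    beta=1.0, n_data=n)
print(samples[1000:].mean(axis=0))             # posterior mean after burn-in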
The generalised posterior with the identity weighting function $M(x) = 1$ is depicted in the central panels of Figure 2. This generalised posterior is slightly less sensitive to contamination compared to the standard posterior. Moreover, the variance slightly increases whenever either $\varepsilon$ or $y$ is increased, as a result of estimating $\beta$ (c.f. Section 5.2). In the rightmost panels of Figure 2 we display the robust generalised posterior using the weighting function $M(x) = (1 + x^2)^{-1/2}$, intended to bound the influence of large values in the dataset. This choice of $M(x)$ vanishes just fast enough as $|x| \to \infty$ to ensure that the bias-robustness conditions of Theorem 3 are satisfied; see Appendix D.3. The effect is clear from the bottom right panel of Figure 2, where even for $y = 20$ (and $\varepsilon$ fixed to a small value, $\varepsilon = 0.1$) the robust generalised posterior remains centred close to the true value $\theta = 1$. While our theoretical results relate to $y$ and do not guarantee robustness when $\varepsilon$ is increased, the top right panel in Figure 2 suggests that the robust generalised posterior is indeed robust in this regime as well. Figure 3 displays the posterior influence function (12) for this normal location model. This reveals that the standard Bayesian posterior is not bias-robust, since the tails of the posterior are highly sensitive to the contaminant $y$. In contrast, the tails of the generalised posterior are insensitive to the contaminant. This appears to be the case for both weighting functions, despite only one weighting function satisfying the conditions of Theorem 3.

Precision Parameters in an Intractable Likelihood Model

Our second experiment is due to Liu et al. (2019), and concerns an exponential family model on $\mathbb{R}^2$. Despite the apparent simplicity of this model, the term $a(\theta)$, which determines the normalisation constant, is analytically intractable, and exact simulation from this data-generating model is not straightforward (excluding the case $\theta = 0$). As a consequence, standard Bayesian analysis is not practical without, for example, the development of model-specific numerical methods, such as cubature rules to approximate the intractable normalisation constant. In sharp contrast, the generalised posterior produced by KSD-Bayes is available in closed form for this model. Our aim here is to assess robustness of the generalised posterior, focusing on the setting where $y$ is fixed and $\varepsilon$ is increased, since this is the regime for which our theoretical results do not hold. A dataset of size $n = 500$ was generated from the model $P_\theta$ with true parameter $\theta = (0, 0)$, so that $P_\theta$ has the form $\mathcal{N}(0, \Sigma)$ and can be exactly sampled. The left column in Figure 4 displays the standard posterior, which is seen to be sensitive to contamination in the dataset, in much the same way observed for the normal location model in Section 6.1. The generalised posterior with $M(x) = I_d$ is depicted in the middle column of Figure 4, and is seen to be more sensitive to contamination compared to the standard Bayesian posterior, in that the mean moves further from $0$ as $\varepsilon$ is increased. Finally, in the right column of Figure 4 we display the robust generalised posterior, obtained with a weighting function chosen so that the criteria for bias-robustness in Theorem 3 are satisfied. From the figure, we observe that the robust generalised posterior remains centred close to the data-generating value $\theta = 0$, even for the largest contamination proportion considered ($\varepsilon = 0.2$), with a variance that increases as $\varepsilon$ is increased.
At ε = 0, the spread of the robust generalised posterior is almost twice that of the standard posterior, which reflects the trade-off between robustness and efficiency.

Robust Nonparametric Density Estimation

Our third experiment concerns density estimation using the kernel exponential family, and explores the performance of KSD-Bayes when the dataset is multi-modal (c.f. Section 3.5). Let q denote a reference p.d.f. on R^d and let κ be a kernel on R^d; the kernel exponential family model (16) is then parametrised by f, an element of the RKHS H(κ). The implicit normalisation constant of (16), if it exists, is typically an intractable function of f. There appears to be no Bayesian or generalised Bayesian treatment of (16) in the literature, which may be due to intractability of the likelihood. Indeed, we are not aware of a computational algorithm that would easily facilitate Bayesian inference for (16), so a standard Bayesian analysis will not be presented. As the theory in this paper is finite-dimensional, we consider a finite-rank approximation to f, given by a sum of terms with coefficients θ^(i) ∈ R and basis functions φ^(i) ∈ H(κ), where we will take θ to be p = 25 dimensional. Finite-rank approximations have previously been considered for frequentist learning of kernel exponential families in Strathmann et al. (2015); Sutherland et al. (2018). In our case, the finite-rank approximation ensures that any prior we induce on f via a prior on the coefficients θ^(i) will be supported on H(κ). If one is interested in a well-defined limit as p → ∞ then one will need to ensure a.s. convergence of the sum in this limit; this can be arranged when the φ^(i) are orthonormal in H(κ) and the θ^(i) are a priori independent with suitably decaying variances.

Our interest is in the performance of KSD-Bayes applied to a multi-modal dataset, and to explore this we considered the galaxy data of Postman et al. (1986); Roeder (1990), comprising n = 82 velocities in km/sec of galaxies from 6 well-separated conic sections of a survey of the Corona Borealis. The data were whitened prior to computation, but results are reported with the original scale restored. For the kernel exponential family we use q(x) = N(0, 3²) and the kernel κ(x, y) = exp(−(x − y)²/2), which ensures that (16) is well-defined, together with basis functions φ^(i), i = 0, . . . , 24, which are orthonormal in H(κ) (Steinwart et al., 2006). For our prior we let θ^(i) ∼ N(0, 10² i^(−1.1)), which is weakly informative within the constraint of having a well-defined p → ∞ limit. Our contamination model replaces a proportion ε of the dataset with values independently drawn from N(y, 0.1²), with y = 5, shown as black bars in the top row of Figure 5. The generalised posterior with M(x) = 1 is displayed in the second row of Figure 5, with the bottom row presenting a robust generalised posterior based on the weighting function M(x) = (1 + x²)^(−1/2), which ensures that the conditions of Theorem 3 are satisfied. The results we present are for fixed y and increasing ε, since this regime is not covered by Theorem 3. The generalised posterior mean is a uni-modal density, which we attribute to the insensitivity of KSD to mixture proportions discussed in Section 3.5, but multi-modal densities are evident in sampled output. Our results indicate that the robust weighting function reduces sensitivity to contamination in the dataset (note how the mass in the central mode of the generalised posterior decreases when ε = 0.2 and the identity weighting function is used). Whether this insensitivity of KSD to well-separated regions in the dataset is desirable or not will depend on the application, but in this case it happens to be beneficial.
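A minimal sketch of a draw from the finite-rank prior is below. The basis functions are the main assumption here: the paper's basis, orthonormal in H(κ) (Steinwart et al., 2006), did not survive extraction, so normalised Hermite functions are used purely as a stand-in, and the prior variances 10² i^(−1.1) are shifted to i + 1 to avoid the i = 0 case:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(1)
p = 25

def phi(i, x):
    """Stand-in basis: normalised probabilists' Hermite functions,
    orthonormal in L2(R); NOT the paper's H(kappa)-orthonormal basis."""
    coef = [0.0] * i + [1.0]
    norm_const = math.sqrt(math.sqrt(2 * math.pi) * math.factorial(i))
    return hermeval(x, coef) * np.exp(-x**2 / 4.0) / norm_const

def log_q(x):
    """Reference density q = N(0, 3^2), up to an additive constant."""
    return -x**2 / (2 * 3.0**2)

def sample_prior_logdensity(x):
    """One draw of the unnormalised log-density log q(x) + sum_i theta_i phi_i(x),
    with theta_i ~ N(0, 10^2 (i+1)^(-1.1))."""
    theta = rng.normal(0.0, 10.0 * np.arange(1, p + 1) ** -0.55)
    f = sum(t * phi(i, x) for i, t in enumerate(theta))
    return log_q(x) + f

xs = np.linspace(-5, 5, 201)
print(sample_prior_logdensity(xs)[:3])
```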
Network Inference with Exponential Graphical Models

Our final example concerns an exponential graphical model, representing negative conditional relationships among a collection of random variables W = (W₁, . . . , W_d), described in Yang et al. (2015, Sec. 2.5). The likelihood function is given in (17), where w ∈ (0, ∞)^d and θ^(i) > 0, θ^(i,j) ≥ 0. The total number of parameters is p = d(d + 1)/2. Simulation from this model is challenging and the normalisation constant is an intractable integral, so in what follows a standard Bayesian analysis is not attempted. Our aim is to fit (17) to a protein kinase dataset, mimicking an experiment presented by Yu et al. (2016) in the score-matching context. This dataset, originating in Sachs et al. (2005), consists of quantitative measurements of d = 11 phosphorylated proteins and phospholipids, simultaneously measured from single cells using a fluorescence-activated cell sorter, so the parameter θ is 66-dimensional. Nine stimulatory or inhibitory interventional conditions were combined to give a total of 7,466 cells in the dataset. The data were square-root transformed, and samples containing values greater than 10 standard deviations from their mean were judged to be bona fide outliers and were removed. The remaining dataset of size n = 7,449 was normalised to have unit standard deviation. In most cases the measurement reflects the activation state of the kinases, and scientific interest lies in the mechanisms that underpin their interaction. These mechanisms are often summarised as a protein signalling network, whose nodes are the d proteins and whose edges correspond to the pairs of proteins that interact. An important statistical challenge is to estimate a protein signalling network from such a dataset (Oates, 2013). However, it is known that existing approaches to network inference are non-robust, in a general sense, with community challenges regularly highlighting the different conclusions drawn by different estimators applied to an identical dataset (Hill et al., 2016). Our interest is in whether networks estimated using KSD-Bayes are robust.

For our experiment the variables w^(i) were re-parametrised as x^(i) := log(w^(i)), in order that they are unconstrained and P_θ ∈ P_S(R^d). For the contamination model, a proportion ε of the data were replaced with the fixed value y = (10, . . . , 10) ∈ R^d. Parameters were a priori independent with θ^(i) ∼ N_T(0, 1), θ^(i,j) ∼ N_T(0, 1), where N_T is the Gaussian distribution truncated to the positive orthant of R^p. This prior is conjugate to the likelihood, as explained in Section 3.3, and allows the generalised posterior to be exactly computed. Generalised posteriors were produced both without and with the exponential weighting function [M(x)]_(i,i) = exp(−x^(i)), the latter aiming to reduce sensitivity to large values in the dataset and coinciding with the identity weighting function at x = 0. From these, protein signalling networks were estimated using the s most significant edges, defined as the s largest values of θ̂_(i,j)/σ_(i,j), where the generalised posterior marginal for θ^(i,j) is N_T(θ̂_(i,j), σ²_(i,j)). Results are shown in Figure 6; to optimise visualisation we report results for s = 5, though for other values of s similar conclusions hold. It is interesting to observe little agreement between the networks returned when the identity weighting function is used, which may reflect the difficulty of the network inference task.
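The edge-selection rule (keep the s largest values of θ̂_(i,j)/σ_(i,j)) is easy to state in code; a minimal sketch with hypothetical inputs:

```python
import numpy as np

def top_edges(theta_hat, sigma, s=5):
    """Rank candidate edges (i, j), i < j, by the significance statistic
    theta_hat[i, j] / sigma[i, j] and return the s most significant."""
    d = theta_hat.shape[0]
    scores = [(theta_hat[i, j] / sigma[i, j], (i, j))
              for i in range(d) for j in range(i + 1, d)]
    scores.sort(reverse=True)
    return [edge for _, edge in scores[:s]]

# toy usage with d = 4 hypothetical proteins
rng = np.random.default_rng(2)
th = np.abs(rng.normal(size=(4, 4)))         # stand-in posterior means
sd = 0.1 + np.abs(rng.normal(size=(4, 4)))   # stand-in posterior scales
print(top_edges(th, sd, s=3))
```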
Reduced sensitivity to ε was observed when the exponential weighting function was used. In Figure 6 we report the number of edges that are consistent with the network reported in Sachs et al. (2005, Fig. 3A); the use of the exponential weighting function resulted in more edges being consistent with this benchmark network.

Conclusion

There is little existing literature concerning robust Bayesian inference in the setting of intractable likelihood. Existing approaches to Bayesian inference for intractable likelihood fall into three categories: (1) likelihood-free methods (such as approximate Bayesian computation and Bayesian synthetic likelihood; Tavaré et al., 1997; Beaumont et al., 2002; Marin et al., 2012; Price et al., 2018; Cherief-Abdellatif and Alquier, 2020; Frazier, 2020), (2) auxiliary variable MCMC (such as the exchange algorithm and pseudo-marginal MCMC; Møller et al., 2006; Murray et al., 2006; Andrieu and Roberts, 2009; Liang, 2010; Lyne et al., 2015; Doucet et al., 2015; Andrieu et al., 2020), and (3) approximate likelihood methods (such as pseudo-likelihood and composite likelihood; Besag, 1974; Dryden et al., 2002; Eidsvik et al., 2014), which are of course also applicable beyond the Bayesian context. Both (1) and (2) rely on either the ability to simulate from the generative model or the ability to unbiasedly estimate the data likelihood, whilst (3) represents an ad hoc collection of approaches that are tailored to particular statistical models (see the recent surveys in Lyne et al., 2015; Park and Haran, 2018). These algorithms aim to approximate the standard Bayesian posterior, and do not attempt to confer robustness in situations where the model is misspecified. This paper proposed KSD-Bayes, a generalised Bayesian procedure for likelihoods that involve an intractable normalisation constant. KSD-Bayes provides robust generalised Bayesian inference in this context, including a theoretical guarantee of global bias-robustness over Θ. Moreover, and unlike existing Bayesian approaches to intractable likelihood, the generalised posterior can be approximated by standard sampling methods without additional levels of algorithmic complexity, even admitting conjugate analysis for the exponential family model. From a theoretical perspective, the soundness of KSD-Bayes, in terms of consistency and asymptotic normality of the generalised posterior, was established. Although KSD-Bayes has several appealing features, it is not a panacea for intractable likelihood. The generalised posterior is not invariant to transformations of the dataset and, as discussed in Section 3.5, KSD can suffer from insensitivity to mixture proportions, which limits its applicability to models and datasets that are not "too multi-modal". The selection of β remains an open problem for generalised Bayesian inference, and further regularisation may be required when the parameter θ is high-dimensional relative to the size n of the dataset. These are challenging issues for future work. In addition, our experiments focused on continuous data, though our theory was general. The empirical performance of KSD-Bayes for discrete data remains to be assessed.

Supplementary Material

This electronic supplement contains proofs for all theoretical results in the main text, as well as the additional empirical results referred to in the main text. First, in Appendix A a formal definition of a vector-valued RKHS is provided.
Proofs for the results in the main text are contained in Appendix B, with the statements and proofs of auxiliary technical lemmas contained in Appendix C. Additional empirical results are contained in Appendix D.

A Background on Vector-Valued RKHS

For simplicity we start with the scalar-valued case and define a scalar-valued kernel as a symmetric, positive semi-definite function k : X × X → R. To every scalar-valued kernel there is an associated Hilbert space H of functions h : X → R, called the reproducing kernel Hilbert space (RKHS) of the kernel. Definition 3 (Reproducing kernel Hilbert space). A Hilbert space H is said to be reproduced by a kernel k : X × X → R if (i) k(·, x) ∈ H for all x ∈ X, and (ii) h(x) = ⟨h, k(·, x)⟩_H for all x ∈ X and h ∈ H. Item (ii) is called the reproducing property of k in H. It can be shown that, for every kernel k, there exists a unique Hilbert space H reproduced by k (Paulsen and Raghupathi, 2016, Theorem 2.14). These definitions can be generalised in the form of a matrix-valued kernel K : X × X → R^(m×m), required to satisfy: (i) K is symmetric, i.e. K(x, x') = K(x', x)^T; and (ii) K is positive semi-definite, i.e. Σ_{i=1}^n Σ_{j=1}^n c_i · K(x_i, x_j) c_j ≥ 0 for all n ∈ N, c₁, . . . , c_n ∈ R^m and all x₁, . . . , x_n ∈ X. As a direct generalisation of the scalar-valued case, to every matrix-valued kernel K : X × X → R^(m×m) there exists a uniquely associated Hilbert space H of functions h : X → R^m. To define this Hilbert space, whose inner product we denote ⟨·, ·⟩_H, some additional notation is required: Let F be an R^(m×m)-valued function and let F_{i,−} denote the vector-valued function F_{i,−} : X → R^m defined by the i-th row of F. Similarly, let G be an R^(m×m)-valued function and let G_{−,j} denote the vector-valued function G_{−,j} : X → R^m defined by the j-th column of G. Formally define the symbols ⟨F, g⟩_H, ⟨f, G⟩_H and ⟨F, G⟩_H, where these are to be interpreted as compound symbols only (i.e. we are not attempting to define an inner product on matrix-valued functions). Then, the generalisation of the reproducing property (item (ii) in Definition 3) to a matrix-valued kernel K is h(x') = ⟨h(·), K(·, x')⟩_H.

B Proofs of Theoretical Results

This appendix provides proofs for all theoretical results in the main text. On occasion we refer to auxiliary theoretical results, which are stated and proven in Appendix C.

B.1 Proof of Result in Section 2

The following properties of the Stein operator S_Q will be useful: Lemma 6. Under Assumption 1, we have, for all x, x' ∈ X and h ∈ H: (i) S_Q K(x, ·) ∈ H; (ii) S_Q[h](x) = ⟨h, S_Q K(x, ·)⟩_H; and (iii) |S_Q S_Q K(x, x')| ≤ √(S_Q S_Q K(x, x)) √(S_Q S_Q K(x', x')). Proof. First of all, since h → S_Q[h](x) is a continuous linear functional on H for each fixed x ∈ X by assumption, from the Riesz representation theorem (Steinwart and Christmann, 2008, Theorem A.5.12) there exists a representer g_x ∈ H for each fixed x ∈ X s.t. S_Q[h](x) = ⟨h, g_x⟩_H for all h ∈ H. Second of all, the reproducing property h(x') = ⟨h(·), K(·, x')⟩_H holds for any h ∈ H, where we recall that the inner product between h ∈ H and a matrix-valued function K(x, ·) is defined in Appendix A. By the reproducing property, for all x, x' ∈ X, g_x(x') = ⟨g_x(·), K(·, x')⟩_H = S_Q[K(·, x')](x), so that g_x = S_Q K(x, ·). In particular, S_Q K(x, ·) ∈ H since g_x ∈ H, establishing item (i). Based on these two observations, we can rewrite S_Q[h](x) at each fixed x ∈ X as S_Q[h](x) = ⟨h, S_Q K(x, ·)⟩_H, establishing item (ii). We now apply (19) with h(·) = S_Q K(x', ·) to deduce that S_Q S_Q K(x, x') = ⟨S_Q K(x, ·), S_Q K(x', ·)⟩_H. Applying the Cauchy-Schwarz inequality, |S_Q S_Q K(x, x')| ≤ ‖S_Q K(x, ·)‖_H ‖S_Q K(x', ·)‖_H. Here for each x ∈ X the norm term can be computed using (20): ‖S_Q K(x, ·)‖²_H = S_Q S_Q K(x, x). Therefore for all x, x' ∈ X we have |S_Q S_Q K(x, x')| ≤ √(S_Q S_Q K(x, x)) √(S_Q S_Q K(x', x')), establishing item (iii).

B.1.1 Proof of Proposition 1

Proof.
From item (ii) of Lemma 6, for each x ∈ X, h ∈ H, we have the stated identity. Taking the expectation of both sides, and since the inner product is a continuous linear operator, the expectation and inner product can be exchanged if the function x → S_Q K(x, ·) is Bochner P-integrable (Steinwart and Christmann, 2008, A.32). This is indeed the case, since from item (ii) of Lemma 6 again, and Jensen's inequality, the last term is finite by Assumption 1. A standard argument based on the Cauchy-Schwarz inequality then gives the corresponding bound, where X and X' are independent, and we again appeal to Bochner P-integrability to interchange expectation and inner product. Thus from (21) and (22) the claim follows.

B.1.2 Verifying Assumption 1 for the Langevin Stein Operator

This section demonstrates how to verify the assumption that h → S_Q[h](x) is a continuous linear functional on H for each fixed x ∈ X in the case where S_Q is the Langevin Stein operator (5) for Q ∈ P_S(R^d). Since a linear functional is continuous if and only if it is bounded, we aim to show that, for each fixed x ∈ X, there exists a constant C_x s.t. |S_Q[h](x)| ≤ C_x ‖h‖_H for all h ∈ H. For each fixed x ∈ R^d, the Langevin Stein operator S_Q is given by (5). From the reproducing property h(x) = ⟨h, K(x, ·)⟩_H for any h ∈ H, the order of the inner product and the other operators is exchangeable by the continuity of ⟨h, ·⟩_H : H → R (Steinwart and Christmann, 2008, Corollary 4.36). Then by the Cauchy-Schwarz inequality we obtain a bound in which the first and second divergences in ∇ · (∇ · K(x, x)) are taken with respect to the first and second arguments of K, respectively. For the constant C_x to exist, it is sufficient to require that ∇ log q(x), K(x, x) and ∇ · (∇ · K(x, x)) exist. This is the case when, for example, Q ∈ P_S(R^d) and K ∈ C^{1,1}_b(R^d × R^d; R^(d×d)), as assumed in Gorham and Mackey (2017).

B.2 Proofs of Results in Section 3

B.2.1 Proof of Proposition 2

Proof. From (9), S_{P_θ} S_{P_θ} K is given as a sum of terms (*₁), (*₂) and (*₃), where the symbol =_{+C} indicates equality up to an additive term that is θ-independent. The exponential family model in (10) satisfies ∇ log p_θ(x) = ∇t(x)η(θ) + ∇b(x). Thus for term (*₁) the last equality follows from symmetry of K; terms (*₂) and (*₃) are treated analogously. From Section 3.2, the form of the KSD-Bayes posterior allows us to collect together the terms in Equations (23) to (25), completing the argument.

Proof. Let f_n(θ) := KSD²(P_θ ‖ P_n) and f(θ) := KSD²(P_θ ‖ P). Decompose the double summation of f_n(θ) into the diagonal term (i = j) and the non-diagonal term (i ≠ j). Fix θ ∈ Θ. From the strong law of large numbers (Durrett, 2010, Theorem 2.5.10), the diagonal average converges a.s. provided that E_{X∼P}[|S_{P_θ} S_{P_θ} K(X, X)|] < ∞. From the positivity of S_{P_θ} S_{P_θ} K(x, x), we have E_{X∼P}[|S_{P_θ} S_{P_θ} K(X, X)|] = E_{X∼P}[S_{P_θ} S_{P_θ} K(X, X)], which has been assumed to exist. The form of (b) is called an unbiased statistic (or U-statistic for short) and Hoeffding (1961) proved the strong law of large numbers whenever E_{X,X'∼P}[|S_{P_θ} S_{P_θ} K(X, X')|] < ∞. From item (iii) of Lemma 6 and Jensen's inequality, we have E_{X,X'∼P}[|S_{P_θ} S_{P_θ} K(X, X')|] ≤ E_{X∼P}[S_{P_θ} S_{P_θ} K(X, X)], where the right hand side is again assumed to exist. Therefore, since 1/n → 0 and (n − 1)/n → 1, the conclusion holds for each fixed θ ∈ Θ. Since f_n is continuously differentiable on Θ, and Θ is assumed to be open and convex, the mean value theorem applies, and Lemma 14 (the first of our auxiliary results, stated and proved in Appendix C) implies that sup_{θ∈Θ} ‖∇_θ f_n(θ)‖₂ < ∞ a.s. for all sufficiently large n. Therefore, setting L_n = sup_{θ∈Θ} ‖∇_θ f_n(θ)‖₂ concludes the proof.
In the remainder, we show the convergence of (*₁), (*₂) and (*₃), and apply Slutsky's theorem to see the convergence in distribution of √n(θ − θ_n). First, it follows from the strong law of large numbers (Durrett, 2010, Theorem 2.5.10) that (*_a) converges a.s. (In what follows, →_p denotes convergence in probability.) Both the required conditions indeed hold from the auxiliary result Lemma 16 in Appendix C. This convergence in probability implies that √n ∇f_n(θ_*) and (1/√n) Σ_{i=1}^n S(x_i, θ_*) converge in distribution to the same limit. Therefore we may apply the central limit theorem for (1/√n) Σ_{i=1}^n S(x_i, θ_*) to obtain the asymptotic distribution of √n ∇f_n(θ_*). Again from van der Vaart (1998, Theorem 12.3), and collecting together these results, the stated asymptotic normality follows. Since H_* is guaranteed to be at least positive semi-definite, it is in fact strictly positive definite if H_* is non-singular, as we assumed. Finally, Slutsky's theorem allows us to conclude.

B.3.5 Verifying Assumption 2 for the Langevin Stein Operator

Here we compute the quantities involved in Assumption 2 for the Langevin Stein operator S_{P_θ} with P_θ ∈ P_S(R^d). In this case the operator ∂^r S_{P_θ} in (26) is well-defined, and θ → ∂^r S_{P_θ}[h](x) is continuous, whenever θ → ∇_x log p_θ(x) is r-times continuously differentiable over Θ. For each fixed x ∈ X, it is clear that h → (∂^r S_{P_θ})[h](x) is a continuous linear functional on H. Then the term (∂^r S_{P_θ})(∂^r S_{P_θ})K(x, x) appearing in the final part of Assumption 2 takes the explicit form (27). The regularity of (27) therefore depends on K and P_θ. See Appendix B.7, where (27) is computed for an exponential family model.

Now we turn to the proof of Theorem 1: Proof of Theorem 1. Since θ_* uniquely minimises f, we may apply Lemma 8. Applying the simplifying upper bound, taking the complement of the probability, and performing a change of variables, we obtain the stated result.

Proof. We sequentially prove each statement in the list. Part (3): From Assumption 2 (r_max = 3), for all h ∈ H and x ∈ X the map θ → S_{P_θ}[h](x) is three times continuously differentiable, meaning that f_n is three times continuously differentiable on Θ. Hence a second order Taylor expansion applies, where, for all sufficiently large n, ∇f_n(θ_n) = 0 was assumed, and the mean value form of the remainder term r_n in the Taylor expansion provides a bound. Finally, lim sup_{n→∞} sup_{θ∈Θ} ‖∇³f_n(θ)‖₂ < ∞ a.s. by the auxiliary Lemma 14 in Appendix C. Part (4): H_n is symmetric since the assumed regularity of f_n allows the mixed second order partial derivatives of f_n to be interchanged. The auxiliary Lemma 15 in Appendix C establishes that H_n converges a.s. to H_*, where H_* is positive semi-definite. Thus, since we assumed H_* is nonsingular, it follows that H_* is positive definite. Part (5): The inequality lim inf_{n→∞}(a_n + b_n) ≥ lim inf_{n→∞} a_n + lim inf_{n→∞} b_n holds for any sequences a_n, b_n ∈ R. Combining the property lim inf_{n→∞}(−b_n) = −lim sup_{n→∞} b_n, we have that lim inf_{n→∞}(a_n − b_n) ≥ lim inf_{n→∞} a_n − lim sup_{n→∞} b_n. Applying this inequality term-by-term, the last inequality follows from Assumption 3.

Now we turn to the main proof: Proof of Theorem 2. Our aim is to verify the conditions of Theorem 4 in Miller (2021). Note that this result in Miller (2021) views {f_n}_{n=1}^∞ as a deterministic sequence; we therefore aim to show that the conditions of Theorem 4 in Miller (2021) are a.s. satisfied by our random sequence {f_n}_{n=1}^∞.
Recall that the generalised posterior has p.d.f. π^D_n(θ) ∝ exp(−n f_n(θ)) π(θ) defined on Θ ⊂ R^p. This p.d.f. can be trivially extended to a p.d.f. on R^p by defining π(θ) = 0 and (e.g.) f_n(θ) = inf_{θ∈Θ} f_n(θ) + 1 for all θ ∈ R^p \ Θ. This brings us into the setting of Miller (2021). The assumptions of Miller (2021, Theorem 4) are precisely the list in the statement of Lemma 9, and the conclusion is the claimed convergence.

Proof. First of all, (17) of Ghosh and Basu (2016) demonstrates that PIF(y, θ, P_n) = βn π^L_n(θ) [−D L(y, θ, P_n) + ∫_Θ D L(y, θ', P_n) π^L_n(θ') dθ'].

B.6.2 The Form of D L(y, θ, P_n) for KSD

The following lemma clarifies the form of D L(y, θ, P_n) for KSD: Lemma 10. For L(θ; P_{n,ε,y}) = KSD²(P_θ ‖ P_{n,ε,y}), we have the expression stated below. Proof. From the definition of the ε-contamination model as a mixture model, and using the symmetry of K, we have KSD²(P_θ ‖ P_{n,ε,y}) = E_{X,X'∼P_{n,ε,y}}[S_{P_θ} S_{P_θ} K(X, X')] = (1 − ε)² E_{X,X'∼P_n}[S_{P_θ} S_{P_θ} K(X, X')] + 2ε(1 − ε) E_{X∼P_n}[S_{P_θ} S_{P_θ} K(X, y)] + ε² S_{P_θ} S_{P_θ} K(y, y). Direct differentiation then yields the result, as claimed.

B.6.3 Proof of Theorem 3

Proof. From Lemma 5 with X = R^d, it is sufficient to show conditions (i) and (ii). To establish (i) and (ii) we exploit the expression for D L(y, θ, P_n) in Lemma 10. This furnishes us with the bound D L(y, θ, P_n) ≤ 2 E_{X∼P_n}|S_{P_θ} S_{P_θ} K(X, y)| =: (*₁). From Lemma 6, (*₁) is controlled by S_{P_θ} S_{P_θ} K(y, y). Plugging these bounds into (37) and using Jensen's inequality, observing the resulting structure, and taking a supremum over y in (38), we obtain the bound (40). Therefore, from (40), it suffices to verify the conditions (I) and (II), which imply the original conditions (i) and (ii). To this end, in the remainder we (a) exploit the specific form of S_{P_θ} to derive an explicit upper bound on sup_{y∈R^d} S_{P_θ} S_{P_θ} K(y, y), then (b) verify the conditions (I) and (II) based on this upper bound. Part (a): Taking the supremum with respect to y ∈ R^d yields the upper bound, where γ(θ) was defined in the statement of Theorem 3. Part (b): Now we are in a position to verify conditions (I) and (II). For condition (I), we use (41) to obtain a quantity which is finite by assumption. Similarly, for condition (II), we use (41) to obtain a quantity which is also finite by assumption. This completes the proof.

B.7 Verifying Assumptions 2 to 4

In this appendix we demonstrate how Assumptions 2 to 4 can be verified for the exponential family model when the Langevin Stein operator is employed. For simplicity, consider the case where the data dimension is d = 1, the parameter dimension is p = 1, and the conjugate prior π(θ) ∝ exp(−θ²/2) is used. From (10), a canonical exponential family model with η(θ) = θ and X = R is given by p_θ(x) = exp(t(x)θ + b(x) − a(θ)), where t : R → R, a : Θ → R and b : R → R. Accordingly, the log derivative is given by ∇ log p_θ(x) = ∇t(x)θ + ∇b(x). Identical calculations to Proposition 2 show that the KSD of the exponential family model with the Langevin Stein operator takes a quadratic form KSD²(P_θ ‖ P_n) = C_{1,n}θ² + C_{2,n}θ + C_{3,n} and KSD²(P_θ ‖ P) = C₁θ² + C₂θ + C₃, where C_{k,n} = (1/n²) Σ_{i,j=1}^n c_k(x_i, x_j) and C_k = E_{X,X'∼P}[c_k(X, X')] for k = 1, 2, 3. Note that C_{1,n} > 0 and C₁ > 0 if a positive definite kernel K is used. Verifying Assumption 2 (r_max = 3): First, note that H_* = ∇²_θ KSD²(P_θ ‖ P)|_{θ=θ_*} is non-singular since ∇²_θ KSD²(P_θ ‖ P) = ∇²_θ(C₁θ² + C₂θ + C₃) = 2C₁ > 0. Now, as demonstrated in Section 4.1, when S_{P_θ} is the Langevin Stein operator, we have that h → S_{P_θ}[h](x) is a continuous linear functional on H for each fixed x ∈ X.
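(To make the quadratic structure KSD²(P_θ ‖ P_n) = C_{1,n}θ² + C_{2,n}θ + C_{3,n} concrete, the sketch below computes it for N(θ, 1) written in canonical form, with t(x) = x and b(x) = −x²/2. The Gaussian base kernel and identity weighting are our own illustrative choices; the closed-form Gaussian generalised posterior follows by completing the square against the N(0, 1) prior.)

```python
import numpy as np

# Canonical exponential family p_theta(x) = exp(t(x) theta + b(x) - a(theta)),
# with t(x) = x, b(x) = -x^2/2, i.e. the N(theta, 1) model, so that
# d/dx log p_theta(x) = t'(x) theta + b'(x) = theta - x.
t1 = lambda x: np.ones_like(x)      # t'(x)
b1 = lambda x: -x                   # b'(x)

def kernel_terms(x, y):
    """c1, c2, c3 such that S S k(x, y) = c1 theta^2 + c2 theta + c3,
    for a Gaussian base kernel k and identity weighting."""
    d = x - y
    k = np.exp(-0.5 * d**2)
    kx, ky, kxy = -d * k, d * k, (1.0 - d**2) * k
    c1 = t1(x) * t1(y) * k
    c2 = t1(x) * (b1(y) * k + ky) + t1(y) * (b1(x) * k + kx)
    c3 = b1(x) * b1(y) * k + b1(x) * ky + b1(y) * kx + kxy
    return c1, c2, c3

def ksd_bayes_gaussian(data, beta=1.0):
    """Conjugate KSD-Bayes posterior under the prior N(0, 1):
    pi_n(theta) ∝ exp(-theta^2/2 - beta n (C1 theta^2 + C2 theta))."""
    n = len(data)
    X, Y = np.meshgrid(data, data)
    c1, c2, _ = kernel_terms(X, Y)
    C1, C2 = c1.mean(), c2.mean()        # the (1/n^2) double sums
    prec = 1.0 + 2.0 * beta * n * C1     # posterior precision
    return -beta * n * C2 / prec, 1.0 / prec   # posterior mean, variance

rng = np.random.default_rng(3)
print(ksd_bayes_gaussian(rng.normal(1.0, 1.0, 200)))
```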
In the exponential family case, the map θ → ∇_x log p_θ(x) is infinitely differentiable over Θ since it is polynomial, leading to explicit expressions for the derivative operators. It is then clear that E_{X∼P}[sup_{θ∈Θ}((∂^r S_{P_θ})(∂^r S_{P_θ})K(X, X))] < ∞ for r = 2, 3, and similarly for r = 1. For the remaining term in Assumption 2, by essentially the same calculations as in Proposition 2, and since Θ is a bounded set in R, it is clear that sup_{θ∈Θ} |θ| < ∞. The finiteness of (42) and (43) can therefore be interpreted as finite moment conditions involving t, b, K and P. Quantities S_n(x, θ) and J_n: Here we provide the explicit form of S_n(x, θ) and J_n used to determine the value of β for the exponential family model. Let c_{1,n}(x) := (1/n) Σ_{i=1}^n c₁(x, x_i) and c_{2,n}(x) := (1/n) Σ_{i=1}^n c₂(x, x_i). From the definition, S_n(x, θ) = 2 c_{1,n}(x) θ + c_{2,n}(x), and J_n = (1/n) Σ_{i=1}^n [2 c_{1,n}(x_i) θ_n + c_{2,n}(x_i)]². Together with H_n = C_{1,n}, the default choice of β is given by (15) in Section 5.

C Auxiliary Theoretical Results

In Appendix B we exploited a number of auxiliary results, the details of which are now provided. Recall that Standing Assumptions 1 and 2 continue to hold throughout.

C.1 Derivative Bounds

Our auxiliary results mainly concern moments of derivative quantities, and the aim of Appendix C.1 is to establish the main bounds that will be used. Recall that ∂¹, ∂² and ∂³ denote the partial derivatives (∂/∂θ_h), (∂²/∂θ_h ∂θ_k) and (∂³/∂θ_h ∂θ_k ∂θ_l), respectively. For the proofs in Appendix C.1, we make the index explicit by re-writing them as ∂¹_(h), ∂²_(h,k) and ∂³_(h,k,l). For x ∈ X and (h, k, l) ∈ {1, . . . , p}³, we define the quantities m_r and M_r, where we continue to use the convention that the first and second operator in expressions such as (∂¹_(h) S_{P_θ})(∂¹_(h) S_{P_θ})K(x, x') are respectively applied to the first and second argument of K. Based on these quantities, we now provide three technical results, Lemmas 11, 12 and 13.

Proof. We first derive the upper bound for r = 1 and then apply the same argument for the remaining upper bounds for r = 2 and r = 3. By the definition of ∇_θ, and by Lemma 6 and Standing Assumption 2, we have S_{P_θ} K(x, ·) ∈ H for any x ∈ X. From Assumption 2 (r_max = 1), the operator (∂¹_(h) S_{P_θ}) exists over Θ and satisfies the preconditions of Lemma 6. Hence, by setting S_Q = (∂¹_(h) S_{P_θ}) in Lemma 6, we have that (∂¹_(h) S_{P_θ}) K(x, ·) ∈ H. Following the same argument as the preceding upper bound for r = 1, the triangle inequality and Cauchy-Schwarz imply the claimed upper bounds for the cases r = 2 and r = 3.

Lemma 12. Suppose Assumption 2 (r_max = 3) holds. For r = 0, 1, 2, 3, E_{X∼P}[|m_r(X)|] < ∞ and E_{X∼P}[|m_r(X)|²] < ∞. For r = 1, 2, 3, E_{X,X'∼P}[|M_r(X, X')|] < ∞ and E_{X∼P}[|M_r(X, X)|] < ∞. If instead Assumption 2 (r_max = 1) holds, these results hold for 0 ≤ r ≤ 1. Proof. First, note that positivity of m_r(·) and M_r(·) implies that the absolute value signs can be neglected. Moreover, Jensen's inequality applies. The argument is analogous for each r = 0, 1, 2, 3 and we present it with r = 3. The bound follows from Jensen's inequality and the triangle inequality, where the terms in the sum are finite by Assumption 2 (r_max = 3). Part (b): Since X, X' are independent in the expectation E_{X,X'∼P}[M_r(X, X')], it is clear from the definition of M_r that E_{X,X'∼P}[M_r(X, X')] exists if the expectation of each term m_s(X), s ≤ r, exists. Thus by part (a), E_{X,X'∼P}[M_r(X, X')] < ∞ for r = 1, 2, 3.
Part (c): From the definition of M_r(x, x) for r = 1, 2, 3, applying the Cauchy-Schwarz inequality to each term, and since each of the latter expectations is finite by part (a), E_{X∼P}[M_r(X, X)] < ∞ for r = 1, 2, 3. Inspection of the proof reveals that these results hold for r = 0, 1 if instead Assumption 2 (r_max = 1) holds.

Proof. The proof is based on the strong law of large numbers, the sufficient conditions for which are provided by Lemma 12, which shows that E_{X∼P}[|m_r(X)|] < ∞ for r = 0, 1, 2, 3 under Assumption 2 (r_max = 3). Then the strong law of large numbers (Durrett, 2010, Theorem 2.5.10) gives a.s. convergence to E_{X∼P}[m_r(X)] =: (*_r) for r = 0, 1, 2, 3. Then, from the definition of M₁, since each limit on the right hand side converges a.s. to either (*₀) or (*₁), the claim follows, where X, X' are independent. An analogous argument holds for M₂(x_i, x_j) and M₃(x_i, x_j), giving (48). Inspection of the proof reveals that (48) still holds for r = 1 if Assumption 2 (r_max = 1) holds instead.

Proof. First of all, for finite n the stated identity holds. From the triangle inequality and Lemma 11, we obtain a further bound, and it follows from Lemma 13 that the corresponding (1/n)-averages converge a.s. Inspection of the proof reveals that the argument still holds for r = 1 if Assumption 2 (r_max = 1) holds instead. Part (a): The argument here is analogous to that used to prove Lemma 1, based on the decomposition into diagonal and off-diagonal terms. It follows from the strong law of large numbers (Durrett, 2010, Theorem 2.5.10) that (*₁) converges a.s. Similarly, it follows from the strong law of large numbers for U-statistics (Hoeffding, 1961) that the off-diagonal term converges a.s.; both of the required conditions hold by Lemma 12.

Theorem 4 (Concentration Inequality for KSD). Let σ(θ) := E_{X∼P}[S_{P_θ} S_{P_θ} K(X, X)]. Then the stated concentration inequality holds, where the probability is with respect to realisations of the dataset {x_i}_{i=1}^n. In what follows we use E to denote an expectation with respect to the dataset {x_i}_{i=1}^n ∼ P. Applying Markov's inequality followed by Cauchy-Schwarz, it remains, to conclude the proof, to bound the two expectations on the right hand side. The preconditions of Lemma 6 hold due to Standing Assumption 2. Thus from Lemma 6 part (iii), together with Jensen's inequality, we have the two bounds KSD²(P_θ ‖ P_n) ≤ (1/n) Σ_{i=1}^n S_{P_θ} S_{P_θ} K(x_i, x_i) and KSD²(P_θ ‖ P) ≤ E_{X∼P}[S_{P_θ} S_{P_θ} K(X, X)]. Plugging these into the previous inequality, and exploiting independence of x_i and x_j whenever i ≠ j, we obtain a bound in which the existence of σ(θ) for all θ ∈ Θ is ensured by Standing Assumption 2. Bounding E[(*₂)²]: From the fact |sup_x |f(x)| − sup_y |g(y)|| ≤ sup_x |f(x) − g(x)| for functions f and g, the term (*₂) is upper bounded accordingly. We can see from this expression that standard arguments in the context of Rademacher complexity theory can be applied. Noting that |·|² is a convex function, Proposition 4.11 in Wainwright (2019) gives a symmetrised bound in which the ε_i are independent random variables taking values in {−1, +1} with equiprobability 1/2 and E_ε is the expectation over {ε_i}_{i=1}^n. From essentially the same derivation as Proposition 1, an equality holds, and plugging this equality into the upper bound of E[(*₂)²] yields the bound. Bounding E[(*)²]: Returning to (49), we have the overall bound, as claimed.

D Additional Empirical Results

This appendix contains additional empirical results referred to in the main text.

Figure 7: Kernels of the form (50), with length-scale parameter σ and exponent γ, are considered in the context of the normal location model in Section 6.1. The settings σ ≈ 1, γ = 0.5 (central panel) were used in the main text.
The true parameter value is θ = 1, while a proportion ε of the data were contaminated by noise of the form N(y, 1). Here y = 10 is fixed and ε ∈ {0, 0.1, 0.2} are considered.

D.1 Sensitivity to Kernel Parameters

The kernel K that we recommend as a default in Section 5.1 has no degrees of freedom to be specified (with the exception of the weighting function M, whose choice is further explored in Appendix D.3). Nevertheless, it is interesting to ask whether the generalised posterior is sensitive to our recommended choice of kernel. To this end, we considered the family of kernels of the form (50), where σ > 0 and γ ∈ (0, 1). Our recommended kernel sets σ equal to a regularised version of the sample standard deviation of the dataset and γ = 1/2. To investigate how the generalised KSD-Bayes posterior depends on the choice of σ and γ, we re-ran the normal location model experiment from Section 6.1 using values σ ∈ {0.5, 1, 2} and γ ∈ {0.1, 0.5, 0.9}. To limit scope, we consider the performance of the robust version of KSD-Bayes from Section 6.1, with weighting function M(x) = (1 + x²)^(−1/2), in the case where the contaminant is fixed to y = 10 and the proportion of contamination is varied in ε ∈ {0, 0.1, 0.2}. Results in Figure 7 indicate that the generalised posterior is insensitive to σ, with almost identical output for each value of σ considered. The results for γ ∈ {0.5, 0.9} were almost identical, but the generalised posterior appeared to be less robust to contamination when γ = 0.1. These results support the default choices recommended in the main text (σ ≈ 1, γ = 0.5) and provide reassurance that the generalised posterior is not overly sensitive to how these values are specified.

D.2 Sampling Distribution of β

An important component of the KSD-Bayes method is the use of a data-adaptive β, as specified in Section 5.2. In this appendix the sampling distribution of this data-adaptive β is investigated. Of particular interest are (1) the extent to which β varies at small sample sizes, and (2) how the behaviour of β changes when the data-generating model is misspecified. To investigate, we considered multiple independent realisations of the dataset in the context of the normal location model from Section 6.1, collecting the corresponding estimates of β together into box plots, so that the sampling distribution of β can be visualised. To limit scope, we consider the performance of the standard version of KSD-Bayes from Section 6.1 (i.e. with weighting function M(x) = 1), in the case where the contaminant is fixed to y = 10 and the proportion of contamination is varied in ε ∈ {0, 0.1, 0.2}. The dataset sizes n ∈ {10, 50, 100} were considered. Results in Figure 8 show that, in the case ε = 0 where the model is well-specified, the value β = 1 is typically selected. This value ensures that the scale of the KSD-Bayes posterior matches that of the standard posterior in this example, so that the approach used to select β can be considered successful. In the mis-specified regimes ε ∈ {0.1, 0.2}, with small n the estimation of an appropriate weight β is expected to be difficult, and indeed the default choice of β = 1 in (15) is automatically adopted. At larger values of n it is possible to reliably estimate a weight β < 1, and this weight is seen to be smaller on average when the data are more contaminated. These results support our recommended approach to selecting β in (15).
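Returning to the kernel family of Appendix D.1 above: since the display (50) did not survive extraction, the sketch below assumes an inverse multi-quadric form (1 + (x − y)²/σ²)^(−γ), which is consistent with the stated parameters σ > 0, γ ∈ (0, 1) but should be treated as an assumption:

```python
import numpy as np

def kernel_50(x, y, sigma=1.0, gamma=0.5):
    """One plausible reading of the family (50):
    k(x, y) = (1 + (x - y)^2 / sigma^2)^(-gamma)."""
    return (1.0 + (x - y) ** 2 / sigma**2) ** (-gamma)

def default_sigma(data, floor=1e-3):
    """Regularised sample standard deviation, standing in for the
    recommended default setting of sigma."""
    return max(np.std(data), floor)

rng = np.random.default_rng(4)
x = rng.normal(1.0, 1.0, 100)
print(default_sigma(x), kernel_50(0.0, 1.0, default_sigma(x)))
```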
D.3 Efficiency/Robustness Trade-Off

There is a well-known trade-off between statistical efficiency and robustness to model misspecification, as exemplified by the data-agnostic statistician who is robust by not learning from data. Minimum distance estimation, which can be considered the frequentist analogue of generalised Bayesian inference, can strike an attractive balance between these competing goals (see e.g. Lindsay, 1994; Basu et al., 2019). In Section 4.3 it was demonstrated that global bias-robustness can be achieved using KSD-Bayes through the inclusion of an appropriate weighting function M in the kernel, and in Section 6 it was demonstrated that KSD-Bayes can learn from data whilst being bias-robust. However, it remains to investigate the extent to which statistical efficiency is lost in KSD-Bayes, compared to standard Bayesian inference, in the case where the data-generating model is correctly specified. In this appendix we return to the normal location model of Section 6.1 and explore the effect of the choice of weighting function M on the efficiency of the inferences that are produced. Recall from Theorem 3 that KSD-Bayes is globally bias-robust if there is a function γ : Θ → R such that sup_{y∈R^d} ∇_y log p_θ(y) · K(y, y) ∇_y log p_θ(y) ≤ γ(θ) (51), where sup_{θ∈Θ} |π(θ)γ(θ)| < ∞ and ∫_Θ π(θ)γ(θ) dθ < ∞. For our recommended kernel K in (14), the expression on the left hand side of (51) reduces to a weighted squared score. For the normal location model in Section 6.1 we have ∇_y log p_θ(y) = θ − y and thus, with our recommended kernel from Equation (14), we have (52) given by (y − θ)² M(y)². In order that (52) is bounded over y ∈ R we require M(y) to decay at the rate O(|y|^(−1)) as |y| → ∞. This decay is achieved, for example, by functions of the form

M(y) = (a² / (a² + (y − b)²))^(c/2) (53)

for any a ≠ 0, b ∈ R and any c ≥ 1, although of course there are infinitely many other such functions that could be considered. The particular value c = 1, which we considered in Section 6.1 of the main text and consider here in the sequel, represents the smallest value of c for which (52) is bounded over y ∈ R. For this choice we have that (52) is maximised by y = θ ± √(a² + (θ − b)²) and

sup_{y∈R} (y − θ)² M(y)² = [a² + (θ − b)²] a² / (a² + [θ − b ± √(a² + (θ − b)²)]²) ≤ a² + (θ − b)² =: γ(θ).

For this bound γ(θ), all conditions of Theorem 3 are satisfied.

Figure 9: Weighting functions of the form (53), with lengthscale parameter a and location parameter b, are considered in the context of the normal location model in Section 6.1. The settings a = 1, b = 0 (central panel) were used in the main text. The true parameter value is θ = 1, while a proportion ε of the data were contaminated by noise of the form N(y, 1). Here y = 10 is fixed and ε ∈ {0, 0.1, 0.2} are considered.

The aim in what follows is to investigate how the performance of KSD-Bayes depends on the specific choices of a and b in (53). To limit scope, we consider performance in the case where the contaminant is fixed to y = 10 and the proportion of contamination is varied in ε ∈ {0, 0.1, 0.2}. The dataset size was fixed at n = 100 as per the main text. Recall from Section 6.1 of the main text that the choices a = 1, b = 0 lead to statistical efficiency comparable to that of standard Bayesian inference. Results in Figure 9 show that a = 0.1 led to almost total robustness to contamination at the expense of inefficient estimation, with the spread of the generalised posterior approximately twice as large as in the case where a = 1.
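The weighting family (53) and the bound γ(θ) derived above are easy to check numerically; a minimal sketch, in which a wide grid stands in for the supremum over R:

```python
import numpy as np

def M(y, a=1.0, b=0.0, c=1.0):
    """Weighting family (53): M(y) = (a^2 / (a^2 + (y - b)^2))^(c/2)."""
    return (a**2 / (a**2 + (y - b) ** 2)) ** (c / 2.0)

def gamma_bound_check(theta, a=1.0, b=0.0):
    """Numerically check sup_y (y - theta)^2 M(y)^2 <= a^2 + (theta - b)^2
    for c = 1, using a grid in place of the supremum over R."""
    y = np.linspace(-1e3, 1e3, 200_001)
    lhs = np.max((y - theta) ** 2 * M(y, a, b) ** 2)
    return lhs, a**2 + (theta - b) ** 2

print(gamma_bound_check(theta=1.0))   # the lhs should not exceed the bound
```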
The setting a = 10 causes the generalised posterior to approximate the non-robust KSD-Bayes approach with M ≡ 1, as would be expected from inspection of (53). The generalised posterior was somewhat insensitive to b, though we note that the choice b = −5 conferred additional robustness at the expense of efficiency, while the choice b = 5 sacrificed both robustness and efficiency, in both cases relative to b = 0. These results broadly support the choices of a = 1 and b = 0 for this inference problem, as we considered in the main text.

D.4 Comparison with Robust Generalised Bayesian Procedures

This paper presented a generalised Bayesian approach to inference for models that involve an intractable likelihood. However, several generalised Bayesian approaches exist for tractable likelihoods, and it is interesting to ask how the performance of KSD-Bayes compares to these existing approaches in the case of a tractable likelihood. To this end, we return to the normal location model of Section 6.1, which has a tractable likelihood, and consider two distinct generalised Bayesian procedures that have been developed in this context: the power posterior approach of Holmes and Walker (2017) and the MMD-Bayes approach of Cherief-Abdellatif and Alquier (2020). These approaches are representative of two of the main classes of robust statistical methodology: data-adaptive scaling parameters β and minimum discrepancy methods. Both approaches are briefly recalled below.

Power Posteriors. Motivated by the coherence argument of Bissiri et al. (2016), the authors Holmes and Walker (2017) consider a generalised posterior of the form, for some β > 0,

π_n(θ) ∝ π(θ) exp(β Σ_{i=1}^n log p_θ(x_i)),

which we call a power posterior (e.g. following Friel and Pettitt, 2008). To select an appropriate value for β, with the intention to "allow for Bayesian learning under model misspecification", the authors first introduce the function ∆(x) = ∫_Θ π(θ) ‖∂¹ log p_θ(x)‖²₂ dθ, where we recall that, in our notation, ∂¹ = (∂_{θ₁}, . . . , ∂_{θ_p}). Then the authors set β according to (54), where θ̂_n is a maximiser of the likelihood. The motivation for (54) is quite involved, so we refer the reader to Holmes and Walker (2017) for further background. The authors prove that β → 1 in probability when the model is well-specified (Holmes and Walker, 2017, Lemma 2.1), and present empirical evidence of robustness when the model is mis-specified. For the normal location model of Section 6.1 we can compute ∂¹ log p_θ(x) = x − θ, ∆(x) = 1 + x², θ̂_n = (1/n) Σ_{i=1}^n x_i, and ∫_X p_{θ̂_n}(x) ∆(x) dx = 2 + (θ̂_n)², leading to the recommended weight and to an associated generalised posterior that is again Gaussian, with mean (βn/(1 + βn)) · ((1/n) Σ_{i=1}^n x_i) and variance 1/(1 + βn).

Figure 10: Comparison with robust generalised Bayesian procedures: Robust KSD-Bayes (this paper), power posterior (Holmes and Walker, 2017) and MMD-Bayes (Cherief-Abdellatif and Alquier, 2020) approaches are considered in the context of the normal location model in Section 6.1. The true parameter value is θ = 1, while a proportion ε of the data were contaminated by noise of the form N(y, 1). In the top row y = 10 is fixed and ε ∈ {0, 0.1, 0.2} are considered, while in the bottom row ε = 0.1 is fixed and y ∈ {1, 10, 20} are considered.

D.5 Application to Discrete Data

This section illustrates how KSD-Bayes may be applied to an intractable discrete-space model; note that the theoretical results in Section 4.1 and Section 4.2 cover both the discrete and continuous data context.
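(As a brief aside, the power posterior of Appendix D.4 above admits the closed form just stated; a minimal sketch for the normal location model, with β supplied by the user since the display (54) defining it did not survive extraction:)

```python
import numpy as np

def power_posterior(data, beta):
    """Power posterior for the normal location model with an N(0, 1) prior:
    pi_n(theta) ∝ pi(theta) prod_i p_theta(x_i)^beta, which here is Gaussian
    with mean (beta n / (1 + beta n)) xbar and variance 1 / (1 + beta n)."""
    n, xbar = len(data), np.mean(data)
    mean = (beta * n / (1.0 + beta * n)) * xbar
    var = 1.0 / (1.0 + beta * n)
    return mean, var

# Diagnostic quantities quoted in the text for this model:
# Delta(x) = 1 + x^2 and the integral of p_thetahat * Delta equals 2 + xbar^2.
rng = np.random.default_rng(5)
x = rng.normal(1.0, 1.0, 100)
print(power_posterior(x, beta=0.5), 2.0 + np.mean(x) ** 2)
```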
For demonstration purposes we consider a simple Ising model P_θ on a vectorised 10 × 10 lattice X = {−1, 1}^100, with a temperature parameter θ ∈ (0, ∞), whose density takes the standard Ising form. In what follows, x^(+,i) and x^(−,i) denote vectors whose i-th coordinate is 1 if x^(i) = −1 and −1 if x^(i) = 1, with all other coordinates identical to their values in x. The difference operators can be extended to act element-wise on vector-valued functions h : {0, 1}^d → R^d, so that ∇⁺h(x) and ∇⁻h(x) are d × d matrices whose i-th columns are given, respectively, by ∇⁺h_i(x) and ∇⁻h_i(x). Further, for a vector-valued function h, a divergence ∇⁻· can be defined in the same manner as the divergence operator in the continuous domain; the operator ∇⁻· will also be applied to a matrix-valued kernel K, in which case ∇⁻·h(x) takes a value in R^d for a matrix-valued function h. The Stein operator S_{P_θ} we consider in this example is the discrete Stein operator of Yang et al. (2018). For an empirical distribution P_n associated to a dataset {x_i}_{i=1}^n, the corresponding KSD, i.e. KSD²(P_θ, P_n) = (1/n²) Σ_{i=1}^n Σ_{j=1}^n S_{P_θ} S_{P_θ} K(x_i, x_j), is based on this operator, where ∇⁻_x denotes the action of the operator ∇⁻· with respect to the argument x, and likewise for ∇⁻_{x'}. See Yang et al. (2018) for further detail. The availability of a discrete KSD enables the application of our KSD-Bayes methodology to the Ising model.

As an empirical demonstration, we consider the same setting as Yang et al. (2018); we approximately draw 1000 samples {x_i}_{i=1}^{1000} from P_θ with θ = 5 using thinned MCMC (see Figure 11, left). The prior π was taken to be a half-normal distribution over Θ = [0, ∞) with scale hyper-parameter 3.0. Our focus is on robustness of the generalised posterior, and for the contamination model we replaced a proportion ε of the data with the vector (1, 1, · · · , 1), corresponding to the all-white lattice (a configuration more typically observed at low values of the temperature parameter θ). For KSD-Bayes, the kernel in Yang et al. (2018) was used in combination with a weighting function M(x), with two cases considered: (i) the identity weighting function, and (ii) a weighting function designed to limit the influence of data whose coordinates are almost all equal. The generalised posterior in cases (i) and (ii) will be called, respectively, the KSD-Bayes posterior and the robust KSD-Bayes posterior. The KSD-Bayes and robust KSD-Bayes posteriors were approximated using Hamiltonian Monte Carlo. For simplicity, the weight β = 1 was fixed in this experiment. Results in Figure 11 (right) present the generalised posteriors for an uncontaminated (ε = 0.0) and a contaminated (ε = 0.1) dataset. It can be observed that both the KSD-Bayes and robust KSD-Bayes posteriors place their mass near the true parameter θ = 5 when there is no contamination (ε = 0). Furthermore, when contamination is present, the robust KSD-Bayes posterior is not strongly affected. The computational challenge associated with discrete intractable likelihoods, as exemplified by the Ising model, continues to attract attention (e.g. Kim et al., 2021). Perhaps as a consequence, there has been little consideration of robust estimation in this context. The nature of data contamination in discrete spaces, and the extent to which this can be mitigated by careful selection of the weighting function in KSD-Bayes, requires further careful examination and will be addressed in a sequel. However, these preliminary results are an encouraging proof-of-concept.
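To mimic the data-generation step above ("thinned MCMC"), the sketch below runs a single-site Metropolis sampler on the lattice. The exact Ising density display was lost in extraction, so the log-density (1/θ) Σ_{(i,j)} x^(i) x^(j) over nearest-neighbour pairs is an assumption, chosen to be consistent with θ acting as a temperature parameter:

```python
import numpy as np

rng = np.random.default_rng(6)
L = 10  # 10 x 10 lattice, vectorised to {-1, +1}^100

def neighbour_pairs(L):
    """Nearest-neighbour edges of an L x L lattice (no periodic wrap)."""
    pairs = []
    for r in range(L):
        for c in range(L):
            if c + 1 < L:
                pairs.append((r * L + c, r * L + c + 1))
            if r + 1 < L:
                pairs.append((r * L + c, (r + 1) * L + c))
    return pairs

EDGES = neighbour_pairs(L)

def log_p(x, theta=5.0):
    """Unnormalised log-density; the (1/theta) coupling is our assumption."""
    return sum(x[i] * x[j] for i, j in EDGES) / theta

def metropolis(theta=5.0, sweeps=2000, thin=20):
    """Single-site Metropolis with thinning, mimicking 'thinned MCMC'."""
    x = rng.choice([-1, 1], size=L * L)
    samples = []
    for s in range(sweeps):
        for i in rng.permutation(L * L):
            x_new = x.copy()
            x_new[i] = -x_new[i]
            if np.log(rng.random()) < log_p(x_new, theta) - log_p(x, theta):
                x = x_new
        if s % thin == 0:
            samples.append(x.copy())
    return np.array(samples)

print(metropolis(sweeps=100, thin=10).shape)
```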
2021-04-16T01:15:29.311Z
2021-04-15T00:00:00.000
{ "year": 2021, "sha1": "1f9c368316ba57215f8bdea6d4617ea298f73632", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1f9c368316ba57215f8bdea6d4617ea298f73632", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
231819666
pes2o/s2orc
v3-fos-license
The Challenge and Management of Clinical Trials in Integrative Cancer during the COVID-19 Pandemic Worldwide

Letter to the Editor

Since the rapid spread of novel coronavirus disease worldwide in 2020, it has been a great challenge to launch clinical trials in most countries, especially for integrative cancer therapies, owing to their close-contact interventions and frequent follow-up. It is estimated that the drop-out rate of clinical trials during the COVID-19 pandemic was about 70% to 80%, and we also predict that the next 5 years might witness a sharp fall in the number of clinical trials of integrative cancer therapies. Thus, it is important to propose countermeasures to improve the quality of current and future clinical trials of integrative cancer therapies and to protect the subjects participating in them, given that an increasing number of complementary and alternative medicine (CAM) randomized controlled trials (RCTs) in cancer were conducted and published in the years before 2020. 1 COVID-19 has forced most hospitals in epidemic areas to significantly reduce non-emergency admissions and services, which has made it difficult to treat cancer patients involved in clinical trials, including during recruitment and follow-up. 2 To address these problems, sponsors, principal investigators, and CAM physicians should be encouraged to postpone or suspend clinical trials with close-contact therapies such as acupuncture and massage. 3 For pharmaceutical therapies, it is convenient for researchers to send the investigated products to the subjects by express delivery. Telemedicine technologies such as telemedicine consultations and electronic data capture (EDC) have multiple benefits for cancer patients in clinical trials, as EDC can be comprehensively built and used for follow-up and safety evaluation instead of traditional face-to-face assessments done on paper. 4 In the future, integrative cancer care may need to rely more on telemedicine services, such as consultations with integrative physicians and remotely delivered mindfulness or exercise therapies. 5 For Yoga, Tai Chi, Qigong, meditation, and other mind-body therapies, the best way of participating in a clinical trial might be to stay at home under the instruction of telemedicine services, with feedback provided through EDC. With the advent of COVID-19 vaccines, we suddenly face new problems, in addition to the safety of the vaccines themselves. For integrative cancer therapies, we may need to focus more on the interaction between vaccines and high-dose vitamins, minerals, or Chinese herbs, as the ingredients of multi-component herbal and supplement formulas are highly complex. Here, we also appeal to investigators to avoid the combination of COVID-19 vaccines and herbs or supplements for at least a 4-week period, until we have access to evidence-based observations of safety.
To launch new clinical trials in integrative cancer care during the COVID-19 period, the extent, design, outcomes, follow-up, and safety evaluation might need to be quite different from conventional practice, and the development of protocols and pathways in this field is necessary. Concerning design, open-label, single-center, and distant-contact interventions would be more suitable and safe; likewise, overall survival time, quality of life, and other scales would be better options for outcomes than indices that must be investigated with CT or MRI imaging and blood tests. Additionally, real-world studies, cohort studies, and pilot RCTs might be promoted, as the limitations on these studies are fewer than those on RCTs. In conclusion, COVID-19 has necessitated a dramatic change in the way we do clinical trials in integrative cancer therapy. To better manage clinical trials and protect subjects, more detailed guidelines and expert consensus documents should be established as soon as possible, and the wide use of vaccines should also be taken into consideration. We hope that our suggestions might be useful for the management of clinical trials in integrative cancer therapy during the COVID-19 epidemic.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.
2021-02-06T06:17:38.243Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "11b8041886d7af9ac7dafaf3eb741026bf62232c", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1534735421991218", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "2a58aed0f563ad441fbd732dfb072a87f8301b42", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
365220
pes2o/s2orc
v3-fos-license
Slow-string limit and "antiferromagnetic" state in AdS/CFT

We discuss a slow-moving limit of a rigid circular equal-spin solution on R × S³. We suggest that the solution with the winding number equal to the total spin approximates the quantum string state dual to the maximal-dimension "antiferromagnetic" state of the SU(2) spin chain on the gauge theory side. An expansion of the string action near this solution leads to a weakly coupled system of a sine-Gordon model and a free field. We show that a similar effective Hamiltonian appears in a certain continuum limit from the half-filled Hubbard model that was recently suggested to describe the all-order dilatation operator of the dual gauge theory in the SU(2) sector. We also discuss some other slow-string solutions with one spin component in AdS₅ and one in S⁵.

Introduction

One implication of the AdS/CFT duality is that each free string state in AdS₅ × S⁵ should correspond to a certain single-trace gauge invariant operator in the large N maximally supersymmetric SYM gauge theory; the quantum string energy E should be equal to the quantum dimension ∆ of the operator. For example, if we look at the closed SU(2) sector of operators like Tr(Φ₁^{J₁} Φ₂^{J₂}), built out of two complex combinations of N = 4 SYM scalars, and diagonalize the corresponding dilatation operator (for a review see [1]), then each of its eigenstates with given R-charges (J₁, J₂) should be dual to a particular string state with the same SO(6) spins (J₁, J₂), and one should also have ∆(J₁, J₂, m; λ) = E(J₁, J₂, m; √λ) [2,3,4]. Here λ in ∆ is the 't Hooft coupling and √λ in E is the string tension; m in ∆ labels various eigenstates with fixed J₁ and J₂, while m in E stands for other "hidden" quantum numbers like the winding number or the number of folds of the string configuration. The leading one-loop term in the dilatation operator in this sector can be identified with the Hamiltonian of the XXX_{1/2} ferromagnetic spin chain of length J = J₁ + J₂ [5], while higher-loop corrections add long-range and multi-spin interaction terms [6,7,1]. For small λ, higher loop corrections are not expected to qualitatively change the structure of the spectrum of the XXX_{1/2} model (modulo possible lifting of degeneracies), i.e. they should deform the eigenvalues order by order in λ. One may then conjecture that the same should be true also in the large λ limit, i.e. the exact spectrum should have the same qualitative structure as the Heisenberg model spectrum. This conjecture seems to be supported, for large length J, by the close relation between the standard one-loop Bethe ansatz and the exact asymptotic Bethe ansatz [8] (see also [9]). The AdS/CFT duality then implies that the Heisenberg model spectrum and the corresponding part of the quantum string spectrum should have the same qualitative structure.

The lower part of the spectrum of the one-loop ferromagnetic Heisenberg model (∆ = J + E^(1) + . . .) consists of states described by various distributions of Φ₁ and Φ₂ clusters. The strong-coupling extrapolations of these states are "fast" (J₁ ∼ J₂, J ∼ √λ ≫ 1) semiclassical strings whose world surface is approximately null [4,13,14,15,16,19]. Going higher in the energy, the spectrum is expected to contain, for J ≫ 1, some "intermediate" states with E^(1) ∼ λ + O(1/J), and, finally, the highest energy state with E^(1) ∼ λJ + O(1). In the latter case the energy density will be approximately constant at large J, instead of vanishing as for the magnons or "macroscopic strings".
Indeed, the spectrum of the ferromagnetic Heisenberg chain is isomorphic to the spectrum of the antiferromagnetic chain: the two spectra are formally related by changing the sign of the overall coefficient λ or the sign of the energy. [5] This implies that the highest-energy state in the Heisenberg ferromagnet spectrum is the same as the Neel-type antiferromagnetic (AF) ground state of the Heisenberg antiferromagnet, i.e. it should have J₁ = J₂ = J/2, and for J ≫ 1 its energy should be E^(1) = c₁λJ, c₁ = (ln 2)/(4π²) [20,21]. The fluctuations near the AF state will lower the energy of the ferromagnetic chain, eventually filling up the part of the spectrum from the near-AF states with E^(1) ∼ λJ to the "intermediate" states with E^(1) ∼ λ. Beyond the 1-loop order one expects to find that the energy of the AF state should be given, assuming J ≫ 1 for any fixed λ ≪ 1, in terms of an "energy density" f(λ). The exact expression for f(λ) was recently found by starting with the conjectured asymptotic BDS [8] Bethe ansatz [9,22]:

f(λ) = 1 + (√λ/π) ∫₀^∞ (dk/k) (· · ·) ,

where the J_n appearing in the integrand are the Bessel functions. The formal extrapolation of this expression to large λ gives f(λ ≫ 1) ∼ √λ [22]. The BDS ansatz is not expected to correctly represent the quantum string spectrum at the quantitative level, but it was previously found to lead to the same qualitative results for its low-energy part. [6] The same is likely to be true also for the upper part of the spectrum (in the large J limit).

Footnote 5: Changing formally the sign of λ in the full all-loop dilatation operator will not, of course, lead to an antiferromagnetic chain with an isomorphic spectrum; for example, the BDS ansatz [8] has a non-trivial dependence on λ through the magnon dispersion relation e(p) = √(1 + (λ/π²) sin²(p/2)) − 1 (cf. also [9]). Still, as already mentioned above, the exact spin chain Hamiltonian is expected to have a spectrum (including its higher-energy near-antiferromagnetic-state part) which is a smooth λ-deformation of the Heisenberg model spectrum.

Footnote 6: In particular, the low-energy Landau-Lifshitz-type effective actions corresponding to the BDS ansatz and the quantum string theory appear to have the same structure [17,23].

Indeed, the arguments in [22] suggest that one
However, the semiclassical expansion here will have an unusual form, with subleading terms in the classical energy receiving contributions from higher orders in the string α′ ∼ 1/√λ expansion. Also, the classical string solution will be unstable under small fluctuations. That the direct semiclassical expansion may not apply here is not surprising, since it is only the true quantum string state that should be dual to the AF state on the gauge theory side. While the AdS/CFT duality implies that the quantum string theory spectrum, being equivalent to the gauge spin chain spectrum, should be bounded from above in the compact SU(2) case [22,9], adding small fluctuations to a semiclassical string one can always increase the energy.

Our main observation is that while the lower part of the SU(2) spin chain spectrum is dual to fast-moving strings (which are "locally null-geodesic" or "locally BPS"), the upper part appears to be dual to slow-moving long strings which are as far as possible from the BPS limit. While for the fast strings the time (τ) evolution of the string configuration dominates over the spatial (σ) evolution, with each bit of the string having a near-null-geodesic trajectory, for the slow long string one has just the opposite: each of its bits moves very slowly. The slow motion is not in contradiction with the assumption that J ≫ 1: the effective string rotation frequency 𝒥 ≡ J/√λ is very small in the classical string limit √λ ≫ 1 if we assume that √λ ≫ J. This should be contrasted with the fast string case, where 𝒥 was fixed in the limit √λ ≫ 1, i.e. J and √λ were of the same order [4] (there the effective coupling λ̃ = 1/𝒥^2 = λ/J^2 was fixed while λ was taken large; one could then expand in powers of λ̃ at each order of the semiclassical expansion in 1/√λ).

We propose that the quantum string state representing the AF state of the gauge theory may be approximated by the simplest rotating string solution in S^5 [4,29]: a circular string moving in the S^3 part of S^5 with two equal angular momenta J_1 = J_2 = J/2 and wound along a big circle. The winding number m should be equal to the total spin J, i.e. the total length of the string in the σ direction should be proportional to J. The assumption that the winding number m should be proportional to the angular momentum J is a natural one from the spin chain/Bethe ansatz point of view: for the AF state the excitation momenta p_i determined by the Bethe ansatz, and thus the energy density, should be constant in the large length J limit (a similar limit was considered in [32]). The classical energy of the circular string [4] is then

\[ E = \sqrt{J^2 + \lambda m^2} \;\xrightarrow{\; m = J \;}\; \sqrt{\lambda}\, J \,\sqrt{1 + \lambda^{-1}} . \]

Here the first term in the large λ expansion is indeed the same as in (1.4). Only this leading term in the large λ expansion of the classical energy should be trusted, since the subleading terms will receive contributions from higher quantum string corrections (see below). A qualitative reason for the existence of the "slow" string states is the compactness of S^5: in flat space a closed string needs to rotate or pulsate to balance its tension, while on a sphere it can be wrapped on a big circle and thus can be static (to embed such a state in the SU(2) sector we still need to add two angular momenta and take J large). The apparent small-fluctuation instability of the wrapped (and rotating) classical string solution [4] may be interpreted as an indication that the corresponding quantum state has the maximal energy for given spins J_1 = J_2.

The rest of the paper is organized as follows. In section 2 we shall review the circular two-spin solution and consider its fast and slow limits.
In section 3 we shall discuss the effective Hamiltonian for the fluctuations around the string state corresponding to the AF state of the gauge theory chain. It is found by expanding the string Hamiltonian near the circular string solution and is expected to be related to a strong-coupling limit of an effective action describing fluctuations near the AF state of the gauge-theory spin chain. In contrast to the XXX_{1/2} case, this Hamiltonian need not be just that of a relativistic sigma model on S^2, but may be related to a bosonized field theory limit of a Hubbard-type model that may represent the all-loop dilatation operator of gauge theory in the SU(2) sector [9]. Indeed, we will show in section 4 that a similar effective Hamiltonian appears in a continuum limit of the half-filled Hubbard model. The bosonized Hamiltonian exhibits a certain discontinuous behaviour as we move away from half-filling. This bears certain similarities with the non-closure of the SU(2) sector at large coupling for states with J_1 ≠ J_2 [10], suggesting that the Hubbard model may need to be modified to take this into account. In section 5 we shall consider a similar slow string limit of some string solutions with one spin in AdS_5 and one spin in S^5 that are related, in particular, to the SL(2) sector on the gauge side. The quantum 1-loop correction to the energy of the latter solution will be discussed in the Appendix.

2. Circular rotating J_1 = J_2 string on S^3

Here we shall start by recalling the form of the simplest 2-spin solution [4] for the string on R_t × S^3 (in the form given in [29]) and then discuss its new "slow-string" limit.

2.1 Classical string energy and its limits

Parameterizing S^3 by two complex coordinates X_i with |X_1|^2 + |X_2|^2 = 1, the classical string equations in conformal gauge may be written as Ẍ_i − X_i'' + Λ X_i = 0 and are solved by

\[ t = \kappa\tau , \qquad X_1 = a\, e^{i(w\tau + m\sigma)} , \qquad X_2 = \sqrt{1 - a^2}\; e^{i(w\tau - m\sigma)} , \qquad a = \tfrac{1}{\sqrt{2}} , \qquad \Lambda = w^2 - m^2 , \qquad (2.1) \]

where m is an integer winding number (we shall choose it to be positive). Note that a similar solution exists in flat space, where w = m (Λ = 0) and a is arbitrary. The conformal gauge constraint gives κ^2 = w^2 + m^2. The corresponding SO(4) spins are J_1 = J_2 = (√λ/2) w, i.e. J = J_1 + J_2 = √λ w, and the energy is (T = √λ/(2π) is the string tension)

\[ E = \sqrt{\lambda}\,\kappa = \sqrt{J^2 + \lambda m^2} . \qquad (2.3) \]

The quadratic fluctuation spectrum near this solution was found in [4,29]. There are 4 massive AdS_5 fluctuations with mass κ, i.e. with the characteristic frequency ω_n = √(n^2 + κ^2) = √(n^2 + w^2 + m^2). In addition, there is a free massive field from S^5 with mass √(w^2 − m^2), i.e. with ω_n = √(n^2 + w^2 − m^2), which is real if

\[ n^2 + w^2 - m^2 \ge 0 . \qquad (2.4) \]

The remaining three S^5 bosonic fluctuations are coupled and the corresponding frequencies are given by [29]

\[ \omega_n^2 = n^2 + 2w^2 \pm 2\sqrt{w^4 + n^2 w^2 + m^2 n^2} . \qquad (2.5) \]

Their reality condition is

\[ n^2 \ge 4 m^2 , \qquad (2.6) \]

which, if satisfied, implies also (2.4). As a result, there is always a finite number of unstable modes with 0 < n < 2m, i.e. the solution is always unstable.

Returning to the classical energy, we see that it is a function of three independent parameters: λ, J, m. Taking different limits of these parameters one finds special cases of this solution with different physical interpretations. Let us first consider the cases where the standard semiclassical expansion applies, which assumes that the parameters of the solution w, m are fixed while λ is taken to be large to suppress quantum string (inverse tension) corrections. Then J = √λ w is also large, and J ≫ m.
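As a quick check of the instability window just quoted, take the minus branch of (2.5); the short algebra below is ours:

\[ \omega_n^2 = n^2 + 2w^2 - 2\sqrt{w^4 + n^2 w^2 + m^2 n^2} < 0 \;\Longleftrightarrow\; \left( n^2 + 2w^2 \right)^2 < 4\left( w^4 + n^2 w^2 + m^2 n^2 \right) \;\Longleftrightarrow\; n^4 < 4 m^2 n^2 , \]

i.e. exactly the modes with 0 < n < 2m are tachyonic, for any value of w.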
There are several possible choices of the rotation velocity w and the winding number m:

(i) m = 0, w ≠ 0: this is the point-like (BPS) case with E = J. (The corresponding massless geodesic runs along a big circle in the "diagonal" 2-plane in (X_1, X_2) space.)

(ii) m ≪ w: this is the "fast string" case of [4], when E has a regular expansion in the small semiclassical parameter m^2/w^2 = m^2 λ̃, λ̃ ≡ λ/J^2:

\[ E = \sqrt{J^2 + \lambda m^2} = J + \frac{\lambda m^2}{2J} - \frac{\lambda^2 m^4}{8 J^3} + \dots \]

Here the time evolution dominates over the spatial evolution.

(iii) w = m: such a solution is formally the same as in flat space (the Lagrange multiplier Λ vanishes), but the classical energy here is still linear in J.

(iv) w ≪ m: this is the "slow string" case: m may be of order 1 or much bigger than 1, but it is still much smaller than J = √λ w, since λ is assumed to be taken large first. (The scaling of the energy of long wound strings with winding, E = √λ m + ..., was observed in the uniform gauge Hamiltonian formalism in [30]; however, in contrast to [4] and the present discussion, the winding there was assumed to be in the same direction as the momentum J. Similar behavior is found also in the su(1|1) sector, which was analyzed in detail (for any J and m) in [31].)

The cases (iii) and (iv) are different from the fast-moving string case (ii), where the string world-surface is nearly null. In the "slow string" case (iv) the σ dependence dominates over the τ dependence, and a reflection of that is the explicit dependence of the classical energy on the string tension √λ (in the fast string case the classical energy depends only on the square of the string tension, i.e. is analytic in λ [4]). Such slow strings should correspond to an intermediate part of the spin chain spectrum where the energy scales as (J ≫ 1)

\[ E \approx \sqrt{\lambda}\, m + \frac{J^2}{2\sqrt{\lambda}\, m} + \dots \]

To sum up, fixing the spins J_1 = J_2 = J/2 we may label the string or the corresponding spin chain states by growing values of m; the energy then increases with m. One may wonder what will happen if we increase m or the string length further. The spin chain correspondence suggests that the highest possible value of m should be J (which takes integer values in the quantum theory).

(v) m = J: in this case w/m = 1/√λ → 0, so the string motion is "very slow" at large tension: the string wrapped many times around the big circle is nearly static in the classical √λ ≫ 1 limit. The energy (2.3) for m = J is then

\[ E = \sqrt{J^2 + \lambda J^2} = \sqrt{\lambda}\, J\, \sqrt{1 + \lambda^{-1}} = \sqrt{\lambda}\, J + \frac{J}{2\sqrt{\lambda}} + \dots \qquad (2.13) \]

Our main conjecture is that this special case of the circular string solution should be dual to the highest-energy antiferromagnetic state of the corresponding gauge-theory spin chain. Like in the previous cases (i)-(iv), the solution in case (v) is still unstable, with the number of tachyonic modes with n < 2m = 2J growing with J. This instability may, however, be an artifact of the naive semiclassical expansion near the highest-energy state: our conjecture implies that there is a well-defined maximal-energy state in the discrete quantum string spectrum which in the large λ limit may be approximated by the above classical solution with m = J. The standard semiclassical expansion indeed does not directly apply in the last case (v): the classical energy depends on √λ and contains subleading terms that also appear from higher orders in the inverse string tension expansion (see the next subsection). Still, the leading √λ J term in the classical string energy (2.13) does not receive corrections from higher world-sheet loops, and this leading scaling behavior thus provides qualitative support to our conjecture that the solution (v) is dual to the highest-energy AF state of the gauge-theory spin chain.
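It may help to note how the classical energy interpolates as m grows at fixed J (directly from (2.3)):

\[ E(0) = J \;\;(\text{BPS}) \quad\longrightarrow\quad E(m) = \sqrt{J^2 + \lambda m^2}\ \ \text{monotonically increasing} \quad\longrightarrow\quad E(J) = \sqrt{1+\lambda}\; J \approx \sqrt{\lambda}\, J , \]

so the spectrum labeled by m indeed sweeps from the BPS energy up to the conjectured maximal-energy scale.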
The fact that the proportionality coefficient 1/π^2 in (1.3), obtained in [22] by extrapolating to strong coupling the AF energy of the BDS spin chain, does not match the coefficient in (2.13) should not be considered a contradiction. Indeed, the orders of limits taken are opposite: on the string side we first take λ large and then J large, while on the gauge side we first assume that J is large and then extrapolate the perturbative in λ result to strong coupling.

2.2 1-loop correction to the string energy

Let us now consider the slow-string limit of the 1-loop correction to the energy of the above circular solution, which was computed in [33,34] (see also [35]). The expansion discussed previously was the fast string limit w ≫ m (for a discussion of subtleties in this expansion see [36,37]). Here we shall consider the opposite limit w ≪ m. We shall formally ignore the instability of the solution, concentrating on the real part of the 1-loop correction E_1(m, w). The expansion in powers of w/m = J/(√λ m) produces a power series in 1/√λ for fixed J and m (in particular, for J = m). As we shall see, for large m the one-loop correction E_1 scales linearly with m ≫ 1 or, for m = J, with J, in agreement with the general expectation that E = f(λ) J at large J.

The expression for E_1 is given by the sum of the zero-mode and non-zero-mode contributions. In contrast to the large w expansion relevant for the fast string case, the small w expansion of the 1-loop correction is regular: the higher-order coefficients are given by convergent sums. The leading term in the expansion in w/m is found by setting w = 0 in this expression (omitting the imaginary part). The expansion of the result at large m is subtle, but numerical evaluation shows that its real part grows linearly with m. To analyze the dependence of E_1 on J, we expand the summands S_n at large λ for fixed J,

\[ S_n = A_0(n) + \frac{A_1(n)}{\sqrt{\lambda}} + \frac{A_2(n)}{\lambda} + \dots \qquad (2.21) \]

To study the J-dependence of the series we computed the sums Σ_{n=2J}^{N} A_0, Σ_{n=2J}^{N} A_1, Σ_{n=2J}^{N} A_2 numerically for N = 10^5 and 10^2 < J < 10^4. We found that they scale as J^2, so that S_n in (2.21) grows linearly with J. Numerically evaluating the coefficients and combining E_1 with the classical expression (2.13), we get for the large λ expansion of E = E_0 + ħ E_1 + ... (ignoring O(J^0) terms in E_1, i.e., in particular, terms coming from E_zero)

\[ \frac{E}{J} = \sqrt{\lambda} + \frac{1}{2\sqrt{\lambda}} + \dots + \hbar \left( -0.34 + O(\lambda^{-1}) \right) + O(\hbar^2) . \qquad (2.25) \]

Here we formally introduced the parameter ħ to distinguish between the classical and the 1-loop corrections. It is interesting to observe that while the classical part of E/J contains the 1/√λ terms in odd powers, the 1-loop corrections produce the even powers of 1/√λ. The subleading coefficients will be further corrected by higher-loop string contributions. This illustrates the point already mentioned above: in contrast to the usual semiclassical expansion, in the m = J case the string sigma model loop expansion is not equivalent to the large λ expansion. It is interesting also to note that the leading −0.34 ħ correction to the classical √λ term in E/J is negative, which seems consistent with the idea of an interpolation from strong to weak coupling (cf. (1.1)). Returning to the issue of the instability of the solution, we expect that it is related to the fact that one tries to expand near a maximum of a potential like sin^2 θ. The exact quantization should produce a discrete set of levels in this potential, with the maximal-energy state being "approximated" by the above classical solution.
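Schematically, the structure of the double expansion described above is (our summary of (2.25)):

\[ \frac{E}{J} = \sum_{k \ge 0} \hbar^k\, e_k(\lambda) , \qquad e_0 = \sqrt{\lambda} + \frac{1}{2\sqrt{\lambda}} - \frac{1}{8\lambda^{3/2}} + \dots , \qquad e_1 = -0.34 + O(\lambda^{-1}) , \]

so a fixed power of 1/√λ in E/J receives contributions from arbitrarily high world-sheet loop orders, in contrast to the fast-string case, where the semiclassical expansion coincides with the 1/J expansion at fixed λ̃.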
3. Effective action for slow-moving strings on R × S^3

In the case of the lower part of the spectrum of the ferromagnetic spin chain dual to fast strings, it was possible to establish a correspondence between a non-relativistic Landau-Lifshitz (LL) effective action for long-wavelength excitations of the spin chain and the fast string limit of the classical string action [16,17,18,19]. One may wonder whether a similar kind of effective action correspondence exists also near the upper, antiferromagnetic end of the spin chain spectrum, related to a slow-string limit of the string action. As is well known (see, e.g., [20,40]), the effective action describing near-AF-state excitations of the XXX_{1/2} spin chain is a relativistic sigma model on S^2 (with a topological term ensuring its gapless spectrum). The exact spin chain representing gauge theory anomalous dimensions is certainly different from the XXX_{1/2} chain, and the strong-coupling limit of the corresponding effective action need not be simply an S^2 relativistic sigma model as in the XXX_{1/2} case. The exact spin chain was suggested to be related to a version of the Hubbard model [9]. It is not completely clear at the moment which is the correct Hubbard-type model that should be related to string theory and what the corresponding near-AF-state effective action should be, but one may assume that it should be qualitatively similar to that of the Hubbard model. The continuum effective action for the fluctuations near the AF ground state of the half-filled Hubbard model is a combination of the massless spinon sigma model and a sine-Gordon action for the massive charge excitations [41,20,42]. Here we shall first attempt to see what kind of effective Hamiltonian for near-AF-state fluctuations may appear in the dual limit on the string side. Then, in section 4, we will find that the form of this Hamiltonian, and thus its spectrum, is qualitatively similar to that of the Hamiltonian appearing in a scaling limit of the Hubbard model of [9].

In general, one would need to start with the full quantum string theory and integrate out all modes but the ones relevant for the description of the near-AF states of the SU(2) sector. Here we shall suppress world-sheet quantum corrections by assuming that λ ≫ 1, i.e. we shall consider only the classical string action. We shall follow a naive approach that essentially copies the derivation of the LL action in [16,17,19], but now focuses on modes close to the wrapped slow-moving string that we conjectured above to be the counterpart of the AF state. More precisely, the analogy here will be with the action of magnons as small fluctuations near the ferromagnetic state, or with the corresponding "plane wave" action on the string side. Given a classical string moving on R_t × S^3, we are to gauge fix two coordinates (time and a spatial one) to get an action for the two physical transverse degrees of freedom. In [17] this was done by fixing the momentum density corresponding to the angle α̃, the sum of the two polar angles of S^3 in

\[ X_1 = \cos \psi \; e^{i(\tilde{\alpha}/2 + \varphi)} , \qquad X_2 = \sin \psi \; e^{i(\tilde{\alpha}/2 - \varphi)} , \qquad (3.1) \]

to be constant and equal to 𝒥. This is equivalent [19] to gauge-fixing the 2d-dual coordinate of α̃ to be 𝒥σ. One possible strategy is to use the same gauge also in the present case of slow strings. Then the spin J will again have the interpretation of the length on the spin chain side. The difference with the fast string case is that there we had 𝒥 ≡ J/√λ large, so that we expanded in small λ̃ = λ/J^2 = 1/𝒥^2.
For the slow strings we may first expand in large λ, and then in large J, so that now we have √λ ≫ J or, equivalently, 𝒥 ≪ 1. In general, quantum string corrections are expected to be important (modifying the subleading terms in the classical action), but we may hope that they do not change the form of the leading large λ term in the action. Proceeding as in [19], i.e. fixing t = τ and the coordinate dual to α̃ equal to 𝒥σ, we obtain the R_t × S^3 string action in a closed form, which we then expand at large λ for fixed 𝒥 and fixed derivatives of the fields. Since for slow strings 1 ≪ J ≪ √λ, in this expansion we have ignored the leading term proportional to 𝒥 (denoted 𝒥C_0), which played an important role in the fast string case. In terms of the two angles ψ and ϕ in (3.1) the resulting action admits our basic circular string configuration (2.1) as a solution, with ψ = π/4, ϕ = mσ. We may now set m = J and expand the action near this solution, keeping all orders in the fields but dropping higher powers of their derivatives. Then the J-dependence can be absorbed into the new spatial coordinate s = Jσ, and we end up with the action (3.7), in which the prime stands for the derivative with respect to s. To quadratic order in fluctuations this becomes (3.8), which contains the unstable mode. Its origin is similar to that of the tachyonic mode appearing when expanding the sine-Gordon model near the maximum of its potential.

An alternative approach to deriving the effective action is to start with the string action on R_t × S^3 in a different, conformal, gauge. For large J we may replace J + g′ → J and thus get a weakly coupled combination of a sine-Gordon model for f and a free homogeneous mode g (the action (3.10)). In conformal gauge the action then scales as J^2, but since t = κτ ≈ Jτ (κ = √(m^2 + w^2) ≈ m = J), the target-space energy scales linearly with J. It is useful to rewrite the action (3.10) in terms of more natural world-sheet coordinates, to facilitate the comparison with the spin chain action in the next section, namely in terms of the target-space time t = Jτ + ... and s = Jσ. The use of s is natural since here the length of the wound string is large, so J ≫ 1 corresponds to the thermodynamic limit. We then arrive at the action (3.11), and it is now obvious that the action and the energy of approximately homogeneous configurations should scale linearly with large J. As stressed at the beginning, to compare with the spin chain we should consider the spectrum of the Hamiltonian for small fluctuations near this slow string state. The Hamiltonian corresponding to (3.11) is (3.12). After a canonical transformation that rescales the momenta and the fields by √λ in opposite ways we get, to quadratic order in the fluctuation field f (cf. (3.8)), the Hamiltonian (3.13). Higher-order fluctuation terms are suppressed in the large λ limit. In the next section we shall see that an effective Hamiltonian similar to (3.13) appears in the relevant large λ limit on the gauge theory spin chain side, assuming it is described by the Hubbard model of [9].

4. An effective Hamiltonian for fluctuations near the AF state of the gauge theory spin chain described by the Hubbard model

It has recently been shown [9] that the Bethe equations diagonalizing the BDS spin chain [8] are identical to those diagonalizing the infinitely long Hubbard chain with the half-filled state as the ground state. From the standpoint of the N = 4 SYM theory, the most important property of the Hubbard model is that its interactions are short-ranged. Consequently, it can be defined on a lattice of any length, providing a possible extension of the BDS chain to operators of finite length.
The relation between the Hubbard model and the AdS_5 × S^5 string theory is a very interesting question. In the event that (some modification of) it represents the correct extension of the BDS chain to finite-length operators, the Hubbard model should also be related to the world-sheet theory, perhaps in the same spirit in which the Heisenberg-type chain near the ferromagnetic end of the spectrum is related to the fast string limit of the world-sheet sigma model [16]. There are important differences, however. The ground state of the half-filled Hubbard model is antiferromagnetic, in the sense of possessing Néel order. As was pointed out earlier, in the leading perturbative gauge theory limit the effective action of the excitations around this state is relativistic and also strongly coupled. The lack of an expansion parameter analogous to λ/J^2 raises the question of how to compare this action to an action derived from the string world-sheet action. A possibility is that on the string side the relevant action may be obtained by integrating out all fields except those describing the SU(2) sector in the λ → ∞ limit. Deriving such a quantum effective action appears to be beyond our reach at the moment. If a version of the Hubbard model does give the correct representation of the gauge theory dilatation operator, it would then allow one to establish contact with the perturbative/semiclassical (i.e. large tension or large √λ) limit of the string world-sheet theory. In the large 't Hooft coupling limit, the effective action of small excitations around the AF ground state of the Hubbard model should be compared to the classical world-sheet action expanded around the classical solution dual to this ground state.

In what follows we shall compare the classical continuum limit of the standard Hubbard chain with the effective Hamiltonian (3.12) of fluctuations around the solution corresponding to the AF state. It is important to stress again that this comparison is qualitatively different from the comparison of the ferromagnetic-case coherent-state continuum limit with the fast string action in [16,17]. Rather, it should be thought of as a comparison between the spectrum of eigenvalues of the gauge theory dilatation operator close to some large anomalous dimension and the eigenvalues of the effective fluctuation Hamiltonian obtained by expanding the string effective action around a specific solution. Also, it is clear that here we should not expect a precise match between the string and spin chain Hamiltonians. As was found in [9], the standard Hubbard model does not resolve the "3-loop discrepancy", i.e. it does not reproduce the precise string-theory values of the subleading coefficients in the energy of fast-rotating strings in the large λ limit; this indicates that this model does not capture all the details of the world-sheet theory. The best we may hope for is a qualitative agreement between the continuum limit of the Hubbard Hamiltonian and the slow-string effective fluctuation Hamiltonian. Below we will first review the continuum limit and the bosonization of the Hubbard model at a general filling fraction (see, e.g., [45] for a recent thorough discussion). We shall consider the odd-length Hubbard chain to avoid complications related to the twist necessary for even lengths [9]. We shall then focus on the half-filled case and compare the result with the effective Hamiltonian of fluctuations around the slow string solution. We shall find a qualitative agreement.
4.1 Review of the continuum limit

The Hubbard model Hamiltonian is (see, e.g., [41])

\[ H = -t \sum_{i} \sum_{\alpha = \uparrow,\downarrow} \left( c^{\dagger}_{i,\alpha}\, c_{i+1,\alpha} + c^{\dagger}_{i+1,\alpha}\, c_{i,\alpha} \right) + U \sum_{i} c^{\dagger}_{i\uparrow} c_{i\uparrow}\, c^{\dagger}_{i\downarrow} c_{i\downarrow} , \qquad (4.1) \]

where c†_{iα} and c_{iα} are creation and annihilation operators for electrons of spin α = {↑, ↓} at site i. The relation between the two parameters t and U and the 't Hooft coupling was established in [9] by comparing the ground state energy of the Hubbard model with the maximal-energy state of the BDS chain; we refer to this identification as (4.2), with t_RSS and U_RSS denoting the t and U parameters used in [9]. In the weak gauge coupling region (where U ≫ t, so that the quartic term dominates over the quadratic one, which is then treated as a perturbation) the effective Hamiltonian is given by a series whose terms are operators constructed out of c†_α and c_α; the k = 0 and k = 1 terms correspond to the tree-level and one-loop dilatation operators, respectively.

Let us note that the normalization in (4.2) is different from the one usually considered: here the tree-level Hamiltonian contributes O(1/λ) to the dimensions of operators, while the one-loop Hamiltonian contributes terms independent of the 't Hooft coupling. The usual extra order-λ factor may be restored by rescaling both t and U by λ. Indeed, in [9] the energy of the Hubbard model (4.1) was multiplied by g^2 to get the anomalous dimension. It is more natural to define the Hamiltonian so that its eigenvalues are directly related to anomalous dimensions and thus, via AdS/CFT, to string energies. To implement this, here we will adopt a "rescaled" choice of the parameters in (4.1), referred to as (4.4), in which t is negative and t ∼ g ∼ √λ. (Note that the Bethe ansatz (Lieb-Wu) equations for the Hubbard model that reduce to the BDS Bethe ansatz equations [9] depend only on the ratio U/t, and thus are the same for the two choices. Note also that with the normalization (4.4) it is immediately clear that in the strong-coupling limit the AF ground state energy should scale as t ∼ g ∼ √λ, i.e. in the same way as found by extrapolating the perturbative expression (1.2) to strong coupling, eq. (1.3).) The negative sign of t corrects for the fact that the energy of the Hubbard model and the gauge theory anomalous dimensions have opposite signs. In relation to the world-sheet theory we will choose to implement this by replacing t and U with |t| and |U| and reversing, at the very end, the sign of the time coordinate. This will ensure that the sigma model energies are identified with the negative of the Hubbard model energies.

For the comparison with the classical world-sheet string theory we will be interested in the limit opposite to the one discussed in [9]: the strong-coupling limit λ → ∞. In this limit |t| ≫ |U|, and thus the Hubbard model, as well as its continuum limit, may be treated "semiclassically", by expansion near the free quadratic term (the quartic term in H may be considered as a perturbation). Our aim will be to study small fluctuations around the half-filled state. The standard procedure is to construct the operators Fourier-conjugate to c_{j,α} and c†_{j,α}. The operators creating the ground state fill up all momentum levels of the Fermi sea; our aim will be to find the effective action for the excitations around the Fermi level, with momenta much smaller than the Fermi momentum k_F. While we are particularly interested in the half-filled state, it is possible, and in fact instructive, to analyze the fluctuations around the minimal-energy state at a general filling fraction, i.e. for arbitrary J_1 and J_2 charges of the SU(2) sector. The effective Hamiltonian obtained following this procedure could then be compared to the Hamiltonian for fluctuations around a classical solution dual to the minimal-energy string state with spins J_1 and J_2. The annihilation operators are then decomposed into components near the two Fermi points,

\[ c_{j,\alpha} \simeq \sqrt{a}\, \left[ e^{\,i k_F x_j}\, R_{\alpha}(x_j) + e^{-i k_F x_j}\, L_{\alpha}(x_j) \right] , \qquad x_j = j a , \qquad (4.5) \]

where a is the lattice spacing, the fields R_α and L_α contain only Fourier modes below a cutoff Λ, and Λ ≪ k_F enforces that the fluctuations have momenta much smaller than k_F.
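For orientation, the free part of (4.1) gives the standard tight-binding dispersion (this short computation is ours):

\[ \varepsilon(k) = -2t \cos(k a) , \qquad v_F = \left| \frac{d\varepsilon}{dk} \right|_{k = k_F} = 2 t a \sin(k_F a) \;\xrightarrow{\; 2 k_F a = \pi \;}\; 2 t a , \]

so at half-filling the Fermi points sit at k = ±π/(2a), where the dispersion is linear; R and L in (4.5) describe the excitations near these two points.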
The Fermi level of a system of length J with n_c electrons is k_F = πn_c/J; at half-filling the number of electrons is half the number of lattice sites and, therefore, we find that 2k_F a = π. Let us then use the expansion (4.5) in the Hamiltonian (4.1) and take the continuum limit, c_{j+1,α} ≃ c_{j,α} + a ∂_x c_{j,α} + ..., with Σ_j a → ∫ dx. By construction, the largest value of the coordinate x should be V = Ja. One possible choice, used in the near-ferromagnetic ground state case [17], is a = 2π/J, V = 2π; in that case the world-sheet coordinate had a J-independent range, while the factors of J combined in the scaling limit with √λ. Here we shall use instead a = 1, V = J; this is natural since in the thermodynamic limit J ≫ 1 all extensive quantities describing near-AF states should scale linearly with J. The coordinate x will then be directly related to s in (3.11) up to a factor of 2π. For generality, we shall keep the dependence on the lattice spacing a explicit in what follows.

Plugging (4.5) into the quadratic and quartic terms of (4.1) leads to:

1) the quadratic Hamiltonian (4.7). In writing its first line we discarded summands proportional to e^{±2ik_F ja}; the reason is that, upon Fourier transforming L and R, the sum over j vanishes due to the assumption that the momenta of the excitations are much smaller than the Fermi momentum k_F.

2) the quartic Hamiltonian (4.8). We have again discarded summands proportional to e^{±2ik_F ja}. Away from half-filling the second line in (4.8) is irrelevant. At half-filling we have e^{±4ik_F ja} = e^{±2πij} = 1, which leads to the survival of the second line of (4.8) in the effective action. Introducing the parameter ζ, which vanishes away from half-filling and equals unity at half-filling, it follows that the continuum limit of the quartic part of the Hubbard Hamiltonian expanded around the Fermi levels is given by (4.9).

To summarize, equations (4.7) and (4.9) represent the Hamiltonian of the fluctuations around the half-filled state (ζ = 1) and around the state at generic filling (ζ = 0) of the Hubbard model. We would like to compare the large λ (linearized) spectrum of this fluctuation Hamiltonian to the spectrum of the string Hamiltonian (3.12) or (3.13). The first step is then to bosonize (4.7) and (4.9).

4.2 Bosonization of the continuum-limit Hamiltonian

There are three ways to relate the above fermionic Hamiltonian to a bosonic theory. One, which we will follow here, is to directly bosonize the Hamiltonians (4.7) and (4.9). Another is to express the continuum limit of H in terms of the SU(2) × SU(2) currents [41,42]; the third possibility is to use a mean-field approximation [20]. The latter two approaches yield a direct sum of the conformal SU(2) level-one WZW model and a massive U(1) Thirring model. This representation of the scaling-limit theory is not bosonic and thus is not suitable for comparison with the slow-string actions (3.7) or (3.11). However, the WZW model is equivalent to a compact boson at the self-dual radius, while the Thirring model is equivalent to a sine-Gordon model.
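The last equivalence can be made precise; for instance, Coleman's map between the massive Thirring model and the sine-Gordon model relates the couplings as (a standard result, quoted here for orientation only)

\[ \frac{\beta^2}{4\pi} = \frac{1}{1 + g/\pi} , \]

where β is the sine-Gordon coupling and g the Thirring four-fermion coupling; the free-fermion point g = 0 corresponds to β^2 = 4π.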
In the end, all three approaches are equivalent, leading to the results obtained by directly bosonizing (4.7) and (4.9), as discussed below. Using the rather standard bosonization formulae (γ is, for the time being, an arbitrary constant) translated into the Hamiltonian formalism, we are quickly led to a bosonic Hamiltonian written in terms of fields φ_α and dual fields θ_α. The commutation relations of the original creation and annihilation operators imply certain commutation relations between these fields. In particular, it turns out that ∂_x θ_α can be interpreted as the momentum conjugate to φ_α, implying that the Hamiltonian simplifies to the form (4.13). Furthermore, this Hamiltonian can be rewritten as a sum of two decoupled Hamiltonians by introducing the charge and spin combinations

\[ \varphi_c = \frac{1}{\sqrt{2}}\left( \varphi_{\uparrow} + \varphi_{\downarrow} \right) , \qquad \varphi_s = \frac{1}{\sqrt{2}}\left( \varphi_{\uparrow} - \varphi_{\downarrow} \right) , \qquad (4.14) \]

and similarly for θ_{c,s}. We then get the two Hamiltonians (4.16) and (4.17), so apparently we end up with two sine-Gordon theories. As is well known, the continuum limit of the Heisenberg XXX_{1/2} chain near the antiferromagnetic state is described by a relativistic 2d theory [20,42]. The excitations of the Heisenberg chain span only a subset of the excitations of the Hubbard model, namely (up to duality transformations) only those in which all sites are either empty or doubly occupied. Taken separately, and after appropriate redefinitions of the spacelike coordinate, each of the two Hamiltonians (4.16) and (4.17) can be interpreted as describing a relativistic theory. However, if they are combined, the relativistic interpretation is not possible, because the speeds of light of the two types of decoupled excitations, given in (4.18), are different. It is important to emphasize that in constructing this continuum limit we have assumed that the Hubbard coupling constant U is small compared to t, i.e. g should be large enough. This is reflected in the expressions (4.18) in that the positivity of the Hamiltonian (4.15) implies that we are not allowed to take the 't Hooft coupling, or g^2, arbitrarily small. In other words, as expected from the analysis of the discrete Hamiltonian, recovering the perturbative region of the gauge theory dilatation operator requires a quantum treatment of the Hubbard model of [9].

The bosonized Hamiltonians (4.16) and (4.17) are, however, not the end of the story. Their sum (4.15), while looking similar to the effective Hamiltonian (3.12) of fluctuations around the slow-string solution, is qualitatively different from it: both fields appear to be interacting at half-filling (ζ = 1), while one of the fields of the slow-string action (3.11), or of (3.12), is free in the large J limit. To find a way to match (4.15) with (3.12) and (3.13), let us analyze (4.16) and (4.17) separately. Through a canonical transformation, the speed-of-light factor can be moved into the argument of the cosine potential. Then, in the free-theory approximation (which is valid since λ is assumed to be large), the scaling dimensions of the operators representing the potential terms are given by (4.20). This means that the interaction term is an irrelevant operator in H_s but a relevant one in H_c (with ζ = 1). From the standpoint of the world-sheet infrared physics we can therefore replace H_s by a free (gapless) Hamiltonian. As a result, the effective Hamiltonian for small fluctuations around the half-filled state of the Hubbard model is given by (4.21). Next, let us choose the free parameter γ such that the second velocity vanishes, v_s = 0; this fixes γ as in (4.22).
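To see concretely how a parameter like γ acts, note that a canonical rescaling of a free boson changes the coefficients of the two quadratic terms in opposite ways while preserving the commutator (a generic sketch in our notation, with v a velocity parameter):

\[ \varphi \to \gamma\,\varphi , \quad \Pi \to \gamma^{-1}\,\Pi : \qquad \frac{1}{2}\int dx \left[ \Pi^2 + v^2 (\partial_x \varphi)^2 \right] \;\longmapsto\; \frac{1}{2}\int dx \left[ \gamma^{-2}\,\Pi^2 + \gamma^2 v^2 (\partial_x \varphi)^2 \right] , \]

since [φ(x), Π(y)] = iδ(x − y) is untouched. This is the same type of transformation as the √λ rescaling used to reach (3.13), and it is what allows the velocity factors in (4.16), (4.17) to be redistributed.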
Introducing the rescaled fields (4.23), we are then led to the Hamiltonian (4.24). The identifications (4.2), combined with the choice of γ in (4.22), fix its couplings in terms of the 't Hooft coupling, eq. (4.25). Moreover, choosing, as discussed above, the lattice spacing to be a = 1, we conclude that the Hamiltonian (4.24) has essentially the same structure as (3.12), apart from order-λ factors. We then find the effective Hamiltonian (4.26) for the linearized fluctuations around the half-filled state. The relative coefficients here can be adjusted further by canonically rescaling the momenta and the fields. There are quite obvious similarities between this Hamiltonian (4.26) and the Hamiltonian (3.13) of the fluctuations around the slow string solution: both describe a massive and a massless field, and the ratio between the mass and the mode number of the massive field is the same in the two Hamiltonians. As already mentioned, the target-space time coordinate t in (3.11) should be identified (due to our choice of sign for the couplings of the Hubbard model) with the sign-reversed time coordinate conjugate to the Hubbard Hamiltonian, to ensure that the string energies match the anomalous dimensions on the spin chain side. The spatial coordinates x in (4.26) and s in (3.11) are essentially the same, modulo the factor of 2π. The coefficients in the two Hamiltonians, however, appear to be different: H in (4.26) has an extra overall factor of g^2 = λ/(8π^2), which is absent in (3.13). One may say this is hardly unexpected, given, in particular, the 1/π^2 mismatch between the AF ground state energy of the Hubbard model, eq. (1.3), and the slow-string energy (2.13), at leading order in √λ.

To appreciate the additional subtleties that one may need to overcome on the way to a better understanding of the correspondence between the near-AF-state spin chain described by the Hubbard model and the slow-string limit on the string side, it is instructive to consider the continuum limit for fluctuations around the minimal-energy state at an arbitrary filling fraction. On the one hand, this limit should correspond to an effective Hamiltonian for fluctuations around the semiclassical solution dual to the (J_1, J_2) operator of maximal anomalous dimension (for J_i ≪ √λ, and before expanding to quadratic order, this should be a "slow string" Hamiltonian of the same type as (3.7) or (3.11)). It is reasonable to expect that at least one of the two fields appearing in the slow-string effective Hamiltonian will be interacting in general, and massive at the quadratic level. On the other hand, as we have seen earlier in this section, away from half-filling we are to set ζ = 0. Then the interaction term in the Hamiltonian (4.17) vanishes, and the continuum limit as constructed above yields a free theory. It appears, therefore, that the qualitative agreement described above is restricted to the half-filled Hubbard model. It would be interesting to understand whether considering an effective action including other degrees of freedom would yield a better match away from half-filling or, if possible, to find a modification of the Hubbard Hamiltonian which does not affect the weak 't Hooft coupling limit, preserves integrability, and accounts for the additional interaction in the strong 't Hooft coupling limit. There is an intriguing similarity between this discontinuous behaviour of the effective Hamiltonian of the Hubbard model and that of the SU(2) sector of gauge theory at strong coupling, i.e. from the point of view of the world-sheet theory. As discussed in [10], while the excitations around states with J_1 ≠ J_2 mix with other "non-SU(2)" world-sheet excitations, they can be decoupled if J_1 = J_2.
It is tempting to conjecture that the differences between the Hubbard model and the slow-string effective Hamiltonian away from half-filling can be corrected by additional interaction terms in the Hubbard Hamiltonian which account for the mixing with other gauge theory operators.

5. Some "slow" string solutions with spins in AdS_5 and S^5

The general case of the noncompact sectors is different: there is apparently no upper bound on the quantum string energy. One may relate this to the fact that a string wrapped on a circle in the S^3 part of AdS_5 cannot be static and, in any case, can have any radius (and thus any energy). It is still useful to study "slow-string" limits of solutions that carry one spin (S) in AdS_5 and one spin (J) in S^5, as they may have some interpretation in the SL(2) sector of gauge theory. In particular, we shall find that there is again a case in which the classical string energy scales as E ∼ √λ J + .... Below we shall discuss limits of the circular (S, J) solution of [29] and also consider a "flat-space-like" solution which may be viewed as a special case of the more general (S, J_1, J_2) circular solution in [29].

5.1 Circular solution in AdS

Let us review the form of the solution of [29] describing a string which has a rigid circular shape in AdS_5 and in S^5, with each circle rotating "along itself". In terms of complex combinations of the global embedding coordinates (Y_i in AdS_5 and X_i in S^5) one has

\[ Y_0 = \cosh \rho_0 \; e^{i\kappa\tau} , \qquad Y_1 = \sinh \rho_0 \; e^{i(\omega\tau + k\sigma)} , \qquad X_1 = e^{i(w\tau + m\sigma)} , \qquad (5.1) \]

where ρ_0 = const, and k and m are positive integer winding numbers. The corresponding charges are E, S ∝ sinh^2 ρ_0 and J = √λ w. The equations of motion imply ω^2 = k^2 + κ^2, and the conformal gauge constraints relate κ to the other parameters. The energy is thus a function of three independent parameters, e.g., E = √λ ℰ(𝒮, 𝒥, m) with 𝒮 = S/√λ and 𝒥 = J/√λ. Solving the constraints for κ we obtain two branches κ_± (eq. (5.7)); note that the minus-sign solution can exist only if m ≥ w. The corresponding energy is given by (5.8). As in the case of the S^5 solution with J_1 = J_2, the energy of the SL(2)-sector solution with S = J thus has an explicit analytic form.

Small fluctuations near this solution were discussed in [26]. There are 4 real massive fields from S^5 with mass √(w^2 − m^2), i.e. ω_n = √(n^2 + w^2 − m^2). This frequency is real if n^2 + w^2 − m^2 ≥ 0. From AdS_5 there are also 2 free massive real fields with mass κ. The remaining fluctuations are coupled, and the corresponding characteristic equation is a quartic equation for the frequencies, eq. (5.9). The stability condition is the reality of the solutions of this quartic equation. In the standard semiclassical expansion one assumes that m, w are fixed while λ is large. As in the above S^5 solution case, we can now consider particular limits of the parameters (the case k = m = 0 is again the BPS one, E − S = J, when the string world surface degenerates to a geodesic: the string trajectory is a massive geodesic in AdS_5 and a big circle in S^5, and to make the canonical identification between the string and gauge states/energies one is to apply an AdS_3 transformation bringing the AdS_5 geodesic to the rest frame, t = τ):

(i) w ≫ m: this is the "fast string" case [29]. Only the solution with the plus sign is possible. The energy has a regular expansion in m^2/w^2. As was shown in [29], this solution is stable for large w.

(ii) w = m: this is a "flat-space"-type solution. A similar "flat-space"-type solution will be discussed in the next subsection. As follows from (5.9), this solution may be unstable for certain values of w.

(iii) w ≪ m: this is a slow-moving string: the τ part of the solution is much smaller than the winding σ part (a different large-winding limit of the S = J solution was considered in [44]). There are two possible cases corresponding to the two signs in (5.7). We will concentrate only on the solution with the plus sign, as the other one can be treated similarly. Here we can expand E = √λ m F(w/m) in powers of w/m. Note that m ≪ J = √λ w, since λ is taken large first.
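The reality window for the S^5 fluctuation frequencies quoted in the next paragraph follows immediately in this limit (a one-line check):

\[ \omega_n = \sqrt{n^2 + w^2 - m^2} \in \mathbb{R} \;\Longleftrightarrow\; n^2 \ge m^2 - w^2 \;\xrightarrow{\; w \ll m \;}\; n \ge m \ \ \left( \text{up to a correction of order } w^2/m \right) . \]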
As in the SU(2) case, this solution always has unstable modes: the condition of reality of the characteristic frequencies of the S^5 fluctuations is n ≥ m. The fluctuations in the other directions are non-tachyonic: expanding the solutions of the quartic equation (5.9) at large m, we find that all frequencies are real in this limit. Like in the SU(2) case, we can then increase m further, but here one does not expect an upper bound on the string energy, so there is no obvious choice for a maximal m. (The absence of an upper bound on the string energy is consistent with gauge-theory expectations in the SL(2) sector: for fixed J, i.e. fixed length of the chain, the spin-chain energy can be arbitrarily large, because the spin S can be arbitrarily large. We are grateful to K. Zarembo for this remark.) Still, let us formally consider again the case m = J (which corresponds to w = m/√λ → 0 in the large λ limit). Although it is not clear which state on the gauge spin chain side should correspond to the m = J string state, let us discuss this case by analogy with the SU(2) case. Setting m = J in (5.8) (and choosing the plus sign) one obtains an explicit expression whose large λ behaviour is E ≈ √λ J + .... As in the SU(2) case, we computed the 1-loop correction to the energy for the case m = J and found that it depends linearly on large J (the details are given in the Appendix). This suggests that in general for large J one should have E = f(λ) J; this relation may then be extrapolated to weak coupling and should correspond to the anomalous dimension of a particular state in the spectrum of the SL(2) spin chain.

5.2 "Flat-space" type (S, J) solution in AdS_3 × S^2

Let us now consider another example of an (S, J) "flat-space" solution, which may be viewed as a special case of the rational (S, J_1, J_2) solution in [29]: it admits a special J_2 = 0 limit in which the S^5 part of the solution is left (or right) moving. Here the string wraps a circle of S^5 which is not of maximal radius. Explicitly (cf. (5.1))

\[ Y_0 = r_0\, e^{i\kappa\tau} , \qquad Y_1 = r_1\, e^{i(\omega\tau + k\sigma)} , \qquad X_1 = \cos \psi_0 \; e^{i m(\tau - \sigma)} , \qquad X_2 = \sin \psi_0 , \qquad (5.15) \]

where the constants r_0, r_1 (with r_0^2 − r_1^2 = 1) and ψ_0 specify the position of the circular string. The only non-zero components of the rotation generators are S_{50} = E, S_{12} = S, J_{12} = J, and now J = √λ m cos^2 ψ_0. Again we have ω^2 = k^2 + κ^2 and

\[ 2\kappa E - \kappa^2 = 2 S \sqrt{k^2 + \kappa^2} + 2 m J , \qquad k S = m J , \qquad \text{or} \qquad 2\kappa E - \kappa^2 = 2 S \sqrt{k^2 + \kappa^2} + 2 k S . \qquad (5.16) \]

(One may wonder whether other such solutions exist. One can show that a similar solution of the form Y_0 = cosh ρ_0 e^{iκτ}, Y_1 = sinh ρ_0 e^{ik(τ−σ)}, X_1 = e^{i(wτ+mσ)} does not exist. Also, a solution in AdS_5 of the form Y_0 = cosh ρ_0 e^{iκτ}, Y_1 = (1/√2) sinh ρ_0 e^{i(wτ+kσ)}, Y_2 = (1/√2) sinh ρ_0 e^{im(τ−σ)} does not exist.) A useful relation following from r_0^2 − r_1^2 = 1 is E/κ − S/√(k^2 + κ^2) = 1, and the non-trivial solutions κ_± for κ follow from combining it with (5.16), eq. (5.17). Note that cos^2 ψ_0 = kS/m^2. Therefore, a large S limit with k, m held fixed is not well defined. Instead, a useful limit to consider is large S and large m with m/S fixed, e.g., equal to 1.
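As a small consistency check (assuming, as the formulas above suggest, that the charges in this subsection are measured in units of √λ), the constraint kS = mJ combined with J = m cos^2 ψ_0 gives

\[ \cos^2 \psi_0 = \frac{J}{m} = \frac{k S}{m^2} , \]

so taking S large at fixed k, m would force cos^2 ψ_0 > 1, which is impossible; this is why the large S limit requires m to grow as well, e.g. with m/S fixed as above.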
In this limit the string is located near ψ_0 → π/2, with ρ_0 → ∞. The solution κ_+ has a regular expansion at small S, which is the flat-space limit. For the physical κ_+ solution, the energy E = E(S, k) becomes an explicit algebraic expression, eq. (5.18), built out of the combination √(k(k + 8S)); its large S expansion is given in (5.19). Let us now consider two "slow" limits with small S → 0. The first limit is S → 0 and m → 0 (or J → 0) with k finite. In this case the string shrinks to a point in both AdS_5 and S^5, and we get the usual flat-space scaling

\[ E = 2\sqrt{k\sqrt{\lambda}\,S} + \frac{S^{3/2}}{\lambda\sqrt{k}} - \frac{5 S^{5/2}}{4 \lambda^{2} k^{3/2}} + \dots \qquad (5.20) \]

Another limit is S → 0 and k → ∞ with m, J kept finite. Now the string shrinks to a point in AdS_5 but remains macroscopic in S^5. The energy in this limit is the same as in (5.20). The same result (5.20) is found also in the special case S = J, i.e. when m = k. Thus, in contrast with the similar limit of the previous (S, J) solution, here we obtain the flat-space behaviour of the energy instead of the √λ S behaviour. This solution is stable for sufficiently large S. One can compare its energy E = E(k, S) with the energy E = E(k, m, S) of the rational solution in the SL(2) sector reviewed in the previous subsection. Numerical analysis shows that the new solution has lower energy than the old one. Since the latter is known to be stable for large S, we conclude that the new solution presented here should also be stable for large S. This is confirmed by a direct analysis of its fluctuation spectrum, which follows the discussion in [29].

Acknowledgments

We are grateful to K. Zarembo for useful comments and discussions. We also thank G. Arutyunov and S. Frolov for helpful remarks. The work of A.T. and A.A.T. was supported in part by the DOE grant DE-FG02-91ER40690. A.A.T. also acknowledges the support of PPARC, the INTAS grant 03-51-6346 and the RS Wolfson award.

Appendix: 1-loop correction to the energy of the (S, J) solution in the slow-string limit

Below we shall consider the case m = J; the discussion of the general case m ≫ w is similar. The 1-loop correction to the energy, E_1, was found in [26]. It can be written as the sum of the contributions of the zero and non-zero modes, weighted by the sign factors sign(C^{(n)}_{I,B}) and sign(C^B_p); here ω_{I,n} are the bosonic frequencies for n ≠ 0, ω_{p,0} are the bosonic frequencies for n = 0, and the relevant part of the minor m_{11} for computing the signs of C^{(n)}_{I,B}, C^B_p is m_{11} ∼ (ω^2 − n^2). As in the SU(2) case discussed in section 2.2, we may expand E_1 at large λ for fixed J and then take J large. Again, we can do this expansion inside the sum over n. Expanding the zero-mode part we get a finite contribution (we omit the imaginary part). The non-zero-mode bosonic frequencies obtained from the quartic characteristic equation have regular large λ expansions, and in the large λ limit these frequencies are real, so the only unstable modes with n ≤ J come from the S^5 fluctuations. One way to obtain a real 1-loop correction to the energy is to omit the unstable modes, i.e. to take the sum over n starting with n = J. The sign functions can also be computed in the large λ limit, eq. (A.10). The expansion coefficients B_1, B_2 have a complicated form which we will not write down, but we used them to evaluate the series numerically and plotted Ē_1 against J. Taking the sums over n from J to N = 10^5, we plotted B_0, B_1 and B_2 for J = 10^2, ..., 10^4. As in the SU(2) case, we found a linear dependence on J.
Combining together the classical energy and the 1-loop correction, we get an expansion of the same form as in the SU(2) case. Here, as in (2.25), we introduced ħ to indicate the 1-loop contributions.
Clinical Application of Gold Nanoparticles for Diagnosis and Treatment

Advances in nanobiotechnology have opened up numerous possibilities for more effective diagnostic and therapeutic options. In particular, gold nanoparticles have demonstrated potential for application in the molecular imaging and treatment of cancers, including targeted drug delivery, enhancement of radiation therapy, and photothermal treatment. This review discusses the properties, mechanisms of action, and clinical applications of gold nanoparticles. Although the safety of nanoparticles is yet to be ascertained, there is no doubt that in the future nanotechnology will play an important role in the development and enhancement of a wide range of diagnostic and treatment modalities.

INTRODUCTION

Nanotechnology is a representative example of convergence research and technology, born from the interdisciplinary fusion of several scientific fields such as chemistry, physics, engineering, and molecular biology. The development of nanotechnology has opened new doors of application, among them the integration of nanoparticles into biology. Nanoparticles exhibit unique structural, chemical, biological, mechanical, electrical, and magnetic properties, which allow a wide range of applications in the field of biomedicine.1 Currently, the potential applications of nanotechnology in biomedicine include drug delivery,2-4 diagnosis, including the detection of cancer cells,5-8 and therapy, including cancer treatment.9,10 Nanoparticles are categorized based on their composition (gold, iron oxide, carbon, dielectric materials, and liposomes) and their shape (solid nanoparticles, nanoshells, nanocages, nanowires, and nanotubes).11 Gold nanoparticles are among the nanomaterials with the highest potential for application in the biomedical field. They are easily produced, can carry multiple surface functionalities, have versatile surface chemistry, are relatively biocompatible, and have low toxicity.11-15 These unique properties open many possibilities for their application, making them a first choice for researchers in the biomedical field. Currently, there are ongoing research studies on the application of gold nanoparticles in intravascular drug delivery and gene transmission,16 photothermal therapy,9,11 and ionizing radiation enhancement.17,18 Although nanotechnology in general still needs to prove its feasibility and safety, it is a field with enormous potential and much left to be explored. The clinical trials on the application of gold nanoparticles in head and neck cancer are a major milestone; in addition, our experience thus far has revealed more possible applications than we had hypothesized. In the future, more research, including on the use of gold nanoparticles, should be performed in otolaryngology. Herein, I present the potential of nanotechnology applications in biomedicine.

FEATURES OF GOLD NANOPARTICLES

The unique optical and physical properties of gold nanoparticles, as well as their tailorable surface functionalization, provide an opportunity for developing cancer theranostics through different mechanisms.2 First, when nano-sized particles are injected into the body, a large number of particles accumulate in the tumor, enabling tumor tracking and clear identification as well as high drug uptake and penetration into the tumor.
Second, because gold nanoparticles respond to light of a specific wavelength, it is possible to image both the tumor and individual tumor cells; in addition, their ability to dissipate heat from the particle surface facilitates hyperthermal treatment. These mechanisms underlie simultaneous therapeutic and diagnostic applications.

Surface plasmon resonance

When gold nanoparticles are stimulated by external light, the conduction-band electrons at the particle surface oscillate collectively and collide with each other; this phenomenon is called surface plasmon resonance (SPR). Gold nanoparticles irradiated by light show two responses to the energy received: light scattering and light absorption. In light scattering, the energy received from light sets the electrons on the particle surface into oscillation and light of the same wavelength is re-emitted; in light absorption, the absorbed energy is converted into heat. Notably, nanoparticles have a characteristic peak wavelength at which light is maximally absorbed and scattered, and this peak depends on their size, shape, composition, and surrounding environment. The peak wavelength can be tuned across the visible and infrared regions through appropriate changes in size and shape (Fig. 1). By utilizing these characteristics, it is possible to produce nanoparticles with high sensitivity at a specific wavelength, applicable to the imaging of cancer cells and to photothermal treatment. Therefore, the optical properties of gold nanoparticles, such as light scattering, light absorption, and surface plasmon resonance, can provide promising platforms for a wide range of technologies including fluorescence,18 photo-absorption and -scattering,19 photoacoustic imaging,20 and surface-enhanced Raman scattering (SERS).21,22

Fluorescence imaging with gold nanoparticles

Surface plasmon resonance can modify the optical properties of materials close to the nanoparticles. Thus, when gold nanoparticles bind to a substance, they can quench its fluorescence. However, when the material is separated by a certain distance from the nanoparticles, its fluorescence increases again. In a cell culture model of oral squamous cell carcinoma, it was found that when gold nanoparticles were incubated with the cells, the intrinsic autofluorescence of the cancer cells was reduced by about 15%.18 This phenomenon is known to occur due to the strong light absorption of the nanoparticles.

Surface-enhanced Raman scattering

Raman scattering is a phenomenon in which light of a specific frequency irradiating a molecule generates scattered light shifted by the natural vibrational energies of the molecule. Since the scattered light reflects the intrinsic properties of the molecule, the molecular structure of a material can be inferred from it. The surface plasmon resonance of gold nanoparticles amplifies the Raman scattering of adjacent molecules to generate surface-enhanced Raman scattering; the Raman signal is thereby amplified several million times, enabling the detection of cancer cells in in vivo animal experiments.21

USE OF GOLD NANOPARTICLES IN DRUG DELIVERY

Gold nanoparticles of 10 to 140 nm in size injected intravenously tend to gather around malignant tumors when the immune system does not recognize and clear them; this phenomenon is called the enhanced permeability and retention (EPR) effect.23
Malignant tumors create a large number of new blood vessels to supply the nutrients needed for rapid growth, but these newly created vessels are more permeable than normal blood vessels because their morphology is immature. Small nano-sized particles can move through the walls of these immature, highly permeable blood vessels and are retained in the tumor due to its reduced lymphatic drainage. However, bare gold nanoparticles cause an immune response and are removed by the reticuloendothelial system as soon as they enter the body. To keep the nanoparticles in the bloodstream long enough for them to reach the tumor through the EPR effect, it is necessary to place a protective layer around them using a polymer such as polyethylene glycol (PEG); this method is called PEGylation.3 Since gold binds readily to sulfur, gold nanoparticles can easily be conjugated to a wide variety of substances through thiol groups. Taking advantage of these properties, the toxic compound tumor necrosis factor (TNF) can be selectively delivered to the target site.3 In particular, when TNF is bound to PEGylated gold nanoparticles, it exhibits selective toxicity in tumors without damage to other normal organs, and tumor specificity is further increased by TNF ligands.24,25 Currently available targeting ligands include epidermal growth factor,26 folate,27 transferrin,28 and single-chain variable fragments.29-31

PHOTOTHERMAL THERAPY

Tumors are selectively destroyed by hyperthermia at about 41-47°C because their heat tolerance is reduced by an inadequate blood supply compared with normal cells. The high temperature loosens the cell membrane and elicits irreversible cell destruction through protein denaturation. However, the hyperthermia methods used in the past showed many limitations in selectively destroying only the tumor while preserving the surrounding normal tissues, so their application has been limited.32 With the discovery of the laser, thermal treatment using laser light was attempted clinically. A strong, narrow laser beam penetrates deeply into tissue, but the approach was limited by the major disadvantage of non-selectivity. Photodynamic therapy, also known as photochemotherapy, was developed as a method to increase target specificity.33-39 A photosensitizer that reacts to light of a specific wavelength in the visible or near-infrared region converts oxygen in the tissue into toxic, activated oxygen species, which cause direct destruction of tumor cells and closure of the surrounding blood vessels. However, the main limitation of photodynamic therapy is that the photosensitizer remains in the body for a long time; during this period the patient is very sensitive to light and must be shielded from it. Photothermal therapy is a variation of photodynamic therapy, and its basic concept is similar. When a photothermal material is exposed to light of a specific wavelength, electrons on the surface of the material become excited, and heat is released into the surrounding tissue as the excited electrons relax. This heat can be used to destroy tumor cells. Currently available photoabsorbers include indocyanine green,40,41 naphthalocyanines,32 and porphyrins coordinated with transition metals.42 However, such dye-based materials lose their fluorescence after light irradiation.
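For orientation, the steady-state temperature rise produced by any photoabsorbing particle can be estimated with a standard heat-conduction result (this estimate is ours, not from the review; it assumes a spherical absorber in a uniform aqueous medium):

\[ \Delta T_{\max} \approx \frac{\sigma_{\mathrm{abs}}\, I}{4\pi\, k_0\, R} , \]

where σ_abs is the absorption cross-section, I the laser intensity, k_0 the thermal conductivity of the surrounding medium, and R the particle radius. Since ΔT scales with σ_abs at fixed laser intensity, absorbers with larger cross-sections reach therapeutic temperatures at lower laser power, which motivates the metallic nanoparticles discussed next.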
Recently, with the development of nanotechnology, various nanoparticles have been used for photothermal treatment. The light absorption of metallic nanoparticles is 4-5 times greater than that of conventional light-absorbing dyes. Such strong light absorption can reduce the destruction of surrounding normal tissues because it enables effective treatment with lower-energy lasers. In addition, metal nanoparticles show high photostability and, unlike dyes, do not lose their fluorescence. The metal nanoparticles currently in use include gold nanospheres, 16,43-45 gold nanorods, 11,46 gold nanoshells, 9,10,47,48 gold nanocages, 49 and carbon nanotubes. 50 These nanoparticles have shown strong light absorption in the visible and near-infrared regions, which is suitable for photothermal treatment. As described above, by adjusting the size, shape, and composition of gold nanoparticles, the most energy can be absorbed in the visible and near-infrared regions. In particular, nanospheres, nanorods, and nanoshells are very useful because of their ease of manufacture, broad applicability, and tunable optical properties. Since colloidal gold was first made, it has become possible to produce particles of controlled size, and there have been many studies on the interaction between these particles and light. Gold nanospheres create a strong surface plasmon resonance in response to light in the visible region. As the particle size increases, the wavelength of the resonant light shifts to longer values, and when the nanoparticles fuse or clump together, they react to light in the near-infrared region. This tunability of the resonance frequency with nanoparticle size and shape has spurred great interest in research on gold nanoparticles. As a result, near-infrared light, which has higher transmittance in tissue, can be used for photothermal treatment. The gold nanoshell, which is a modification of the gold nanosphere, also allows frequency adjustment. The gold nanoshell structure consists of a 100-200 nm silica core surrounded by a thin gold shell 5-20 nm thick. These nanoshells show strong light absorption and light scattering in the near-infrared. 51 This optical property can be adjusted through the ratio of the gold shell thickness to the silica core diameter: as the ratio decreases, the resonance shifts to longer wavelengths. 52 The discovery that gold nanoparticles can effectively generate heat from low-energy light in response to specific wavelengths will be a major milestone in the use of photothermal treatment for cancer. Considering that most cancers are deep-seated in the body, photothermal treatment using near-infrared light is a promising option owing to its good tissue permeability and minimal damage to normal tissues. Near-infrared light can penetrate approximately 10 cm into breast tissue and about 4 cm into deep muscle tissue. 53 Notably, the efficacy of nanoparticles in phototherapy also depends on the type of nanoparticle and the light source. The method of delivery may vary depending on the tumor origin and site; the particles can be delivered through a blood vessel or infused directly into the tumor to destroy the tumor cells or to irradiate cancer cells that remain after surgery. Although there are no controversies surrounding the method of light transmission, research is ongoing into methods of selectively moving gold nanoparticles into tumors.
Among the promising methods is the use of pre-engineered macrophages: macrophages are engineered in vitro to ingest nanoparticles and then act as vectors, later moving them through the blood vessels into the tumor cells (Fig. 2). 10,54-61 It is thought that the specificity of nanoparticle delivery can be further enhanced by studies identifying which of the chemokines secreted by tumor cells promote the chemotaxis of macrophages.

APPLICATION IN THE FIELD OF RADIATION ONCOLOGY
Gold has a high radiation absorption capability, providing an excellent platform for enhancing ionizing radiation. It has been reported that tumor radiation exposure increases by 200% or more with gold nanoparticles. 62 Furthermore, another animal study demonstrated that gold nanoparticle-enhanced radiation therapy had excellent therapeutic effects against tumors compared with radiation therapy alone. 16 Thus, radiation therapy with gold nanoparticles may enhance the effect of radiotherapy as well as achieve effective photothermal treatment. 63

POTENTIAL HAZARDS OF NANOPARTICLES
A limitation of nanoparticles, which are expected to be applicable to the diagnosis or treatment of various diseases, is that sufficient studies have not been conducted on their potential dangers in vivo. These particles pose a risk of causing various lesions in the respiratory, cardiovascular, and gastrointestinal systems. 64 Because nanoparticles have a large surface area relative to their volume, they are very reactive and can induce various catalytic reactions. In addition, since they can easily move through the cell membrane, sufficient research should be conducted on the physiological actions of the particles. In particular, although such studies are possible in animal experiments, studies of the dangers in humans in vivo are limited, so they should be conducted thoroughly in the future.

CONCLUSION
The clinical application of nanoparticles with stable and adjustable optical features is expected to be very encouraging in the future. Highly sensitive diagnostic methods that go beyond several techniques currently in use, as well as more tumor-specific chemotherapy, radiation therapy, and photothermal therapy, may be applied as primary or auxiliary treatment. The possibility of nanoparticles acting as toxic substances in the body is still unresolved, but if, when the limitations of current treatments are weighed against the potential toxic effects, the use of nanoparticles proves superior, significant steps can be made in the diagnosis and treatment of many diseases, including head and neck cancer.
2021-09-01T15:04:38.910Z
2021-06-30T00:00:00.000
{ "year": 2021, "sha1": "b67ce2f048a3e354479a142bbb65f92b29fe20e1", "oa_license": "CCBYNC", "oa_url": "http://www.jkslms.or.kr/journal/download_pdf.php?doi=10.25289/ML.2021.10.2.61", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "db9df4d0d9190ab24c9c642950a797e5f3995248", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Materials Science" ] }
58941491
pes2o/s2orc
v3-fos-license
Dynamic Modeling of Multimode Resonance Measuring Mode in Atomic-Force Microscopy with Piezoresistive, Self-Actuating Cantilevers The development of fast, qualitative, and quantitative material characterization methods is one of the most important current issues in the field of nanosystems metrology. On this evidence, it seems important to conduct research on the capabilities of the multimode resonance imaging mode in atomic-force microscopy (AFM), which broadens AFM capabilities in the metrology of nanoscale structures and the quantitative analysis of nano-object images. The subject of this paper is the modeling of the physical phenomena that arise in such systems, describing the coherent mechanical and electrical phenomena in self-sensing and self-actuating cantilevers operating in multi-frequency resonance mode. The outcome of the research is a virtual dynamic AFM model that allows understanding of the signal generation process in AFM control and measuring circuits during sample scanning in multi-frequency mode.

Introduction
Since the development of the AFM thirty years ago [1-3], it has become widely known as a high-performance tool for investigating the surface topography of a wide range of samples. Nevertheless, the search for optimal methods that describe sample properties and structure as completely as possible drives the further development of AFMs: methods for better surface visualization (sensitivity and resolution), higher scanning speed, the ability to provide quantitative analysis of nano-mechanical properties, and the expansion of AFM application fields.

The most widely used method at present is the amplitude modulation method, in which resonant cantilever oscillations are generated at one of the cantilever's natural frequencies (e.g., at the first resonant frequency), while surface visualization is based on the attenuation of the oscillation amplitude resulting from the interaction between the sample and the cantilever tip [4]. Nowadays the multimode mode is one of the most promising ways to broaden the range of operating modes, which opens new opportunities for AFM applications [5-8].

The widely used method for measuring cantilever deflection uses a laser beam reflected from the surface of the cantilever onto a position-sensitive photodetector. When the cantilever is bent, the reflected laser light leaves the cantilever at a different angle, producing a measurable vertical shift of the laser spot on the detector. This method is called beam detection.

The paper by Viani demonstrated the use of small cantilevers with high sensitivity, which were used to unfold single molecules at imaging speeds an order of magnitude faster than previously achieved with conventional cantilevers [9]. However, the use of small cantilevers remains a challenging task with classical optical read-out techniques.

At the same time, during surface scanning the tip deflection signal is detected only at the excitation frequency. The drawback of this approach is the loss of additional useful information about the sample that is contained in the deflection signal at frequencies that differ from the excitation frequencies.

This problem is overcome in the AFM multimode mode, which implies cantilever oscillation generation and/or response measurement at one or more frequencies. The received additional information, i.e.
amplitude, phase, and/or frequency response, may be used both for surface visualization with high resolution [10-12] and for the simultaneous extraction of additional information about the sample's nanomechanical properties [13,14].

At the same time, the implementation of AFM multimode methods requires further theoretical discussion, since the force interaction between the cantilever tip and the sample surface in multimode mode is complex. As a result, the experimental capabilities of the AFM multimode mode now require theoretical understanding. In particular, the phase images obtained with this mode show clearly the detailed structure of the sample, but their physical interpretation is still unclear, which obstructs quantitative analysis of sample properties. Therefore, there is a real need to study the interpretation of measured responses in the AFM multimode mode, taking its features into account, and to develop model-algorithmic support for measuring the near-surface structural properties of materials and thin films in the micro- and nano-range.

The features of AFM multimode mode hardware implementation
The main measuring element of an atomic-force microscope is the cantilever, a tiny force sensor in the form of a cantilevered beam whose free end carries a tip of nanoscale sharpness (Fig. 1). A self-sensing and self-actuating cantilever allows much easier system integration and a significant reduction in weight. Hence, the microscope provides better controllability for full metrological automation and significantly higher scan speeds.

The fabrication and basic characteristics of thermo-mechanically driven cantilevers with integrated resistive readout have been described in detail previously [15-17]. In brief: we use Si cantilevers 300 µm long, 110 µm wide, and 3-4 µm thick. The cantilevers have a piezoresistive Wheatstone bridge positioned at the base of the cantilever and a thermomechanical actuator located near the tip.

The vertical displacement of the probe tip is driven by the thermomechanical actuator, implemented as a resistive heater on the cantilever surface. The bridge measuring circuit, located on the cantilever surface as well, is used for measuring the displacement.

The capability to record additional responses in amplitude, phase, and/or oscillation frequency is obtained by complementing a widely used single-frequency AFM circuit with additional generators and synchronous detectors. The resulting structural scheme is shown in Fig. 2.
In the presented scheme, oscillation generators are used to excite several oscillation modes of the cantilever (in this case, the first three bending modes) through the embedded thermomechanical actuator. A piezoscanner displaces the cantilever relative to the sample along the X, Y, and Z axes. As the cantilever tip approaches the sample surface, a force interaction arises between them, which leads to attenuation of the cantilever oscillation amplitude. When the surface profile of the sample changes, the force acting on the probe changes as well. Thus, the cantilever oscillations are amplitude-modulated by the force acting on the probe from the surface. The mechanical response of the cantilever arising at this point is recorded. After that, the acquired signal is amplified and sent to the inputs of synchronous detectors, whose reference signals are the generator output voltages. In this way the envelopes of the acquired signals are extracted. The synchronous detector output voltages are then sent to the controller, after which they may be used for feedback while scanning the surface and may be displayed and recorded. Thus, independently of the main information (topographic) channel, cantilever oscillations may be monitored at higher natural frequencies, which allows a wider range of tip-sample interactions to be studied. Owing to this, while scanning there is an opportunity to create distribution maps of local sample surface properties other than topographic ones. The feedback is realized by complementing the scheme with a proportional-integrating link that forms a control signal, which is further amplified and sent to the Z-electrode of the scanner. The output signal of the proportional-integrating link is proportional to changes in the height of the sample surface topography. Based on this, the image of the sample surface characteristics is formed.

The opportunity to retrieve useful information from the additional data requires studying the mechanisms by which informative signals are formed during sample surface scanning, as well as developing reliable and exact interpretations of the amplitude, phase, and frequency responses. One of the most efficient solutions for this type of problem is simulation modeling. Unlike other methods, e.g. analytic ones, simulation modeling is able to describe the functioning of the system almost without limitations in the level of detail. Matlab Simulink has been used for the implementation of the AFM multi-frequency resonance mode simulation model, as it was the most appropriate solution in this case.

The following questions of creating a mathematical description for the multi-frequency AFM converters and electronic components, within the framework of building its simulation model, are treated using the example of an AFM with an active cantilever produced by the Nanoanalytik GmbH Company [18].
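To make the synchronous-detection step described above concrete, the following sketch demodulates a simulated bridge signal at three excitation frequencies to recover the amplitude and phase envelopes. The paper's own implementation is a Matlab Simulink scheme; the sampling rate, frequencies, amplitudes, and noise level below are illustrative assumptions, not instrument values.

```python
# Minimal lock-in (synchronous detection) sketch: recover amplitude and phase
# of the cantilever response at each excitation frequency from one signal.
# Sampling rate, frequencies, amplitudes, and noise are assumed for illustration.
import numpy as np

fs = 2.0e6                                   # sample rate, Hz (assumed)
t = np.arange(0, 0.01, 1.0 / fs)             # 10 ms record
f = [55e3, 350e3, 980e3]                     # excitation frequencies (assumed)
A = [1.0, 0.4, 0.15]                         # true response amplitudes (assumed)
phi = [0.3, -0.7, 1.1]                       # true response phases, rad (assumed)

signal = sum(a * np.cos(2 * np.pi * fi * t + p)
             for a, fi, p in zip(A, f, phi)) + 0.05 * np.random.randn(t.size)

for fi in f:
    # Multiply by quadrature references and low-pass by averaging:
    i_comp = np.mean(signal * np.cos(2 * np.pi * fi * t))   # in-phase
    q_comp = np.mean(signal * np.sin(2 * np.pi * fi * t))   # quadrature
    amp = 2.0 * np.hypot(i_comp, q_comp)                    # recovered A_i
    phase = np.arctan2(-q_comp, i_comp)                     # recovered phi_i
    print(f"f = {fi/1e3:6.0f} kHz  A = {amp:.3f}  phi = {phase:+.2f} rad")
```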
Cantilever dynamic model
The simplest model describing cantilever displacement during scanning is a one-dimensional model that considers the cantilever as a resonator with lumped parameters. In the case of multi-frequency cantilever excitation this model results in a system of n differential equations, as follows:

$$m_i \ddot{z}_i + c_i \dot{z}_i + k_i z_i = F_i \cos(\omega_i t + \varphi_i) + F_i^{ts}, \qquad i = 1, \dots, n, \tag{1}$$

where z_i is the vertical cantilever tip displacement at the i-th oscillation mode; m_i is the cantilever effective mass at the i-th oscillation mode; k_i is the cantilever stiffness at the i-th oscillation mode; ω_i is the natural frequency at the i-th oscillation mode (ω_i = √(k_i/m_i)); F_i is the amplitude of the i-th excitatory force; φ_i is the initial phase of the i-th oscillation; F_i^{ts} is the interaction force between the cantilever tip and the sample surface at the i-th oscillation mode; and c_i is the damping coefficient of the i-th oscillation mode:

$$c_i = 2\zeta_i\sqrt{k_i m_i} = \frac{\sqrt{k_i m_i}}{Q_i}, \tag{2}$$

where ζ_i is the relative damping coefficient and Q_i is the quality coefficient of the i-th oscillation mode.

With the aim of transforming model (1) into a form more convenient for structural modeling, it is considered rational to describe it in the state space. A dynamic object model in the state space is presented as an aggregate of physical variables q_1(t), …, q_n(t) that determine the object's behavior at subsequent moments of time, on condition that the object's state at the first moment of time and all the applied impacts are known. The connection between the input variables u_1(t), …, u_n(t), the output variables p_1(t), …, p_n(t), and the state variables q_1(t), …, q_n(t) is represented by first-order differential equations written in matrix form.

The following variables are introduced as the mentioned state parameters:

$$q_{2i-1} = z_i, \qquad q_{2i} = \dot{z}_i, \qquad i = 1, \dots, n.$$

Substituting (2) into (1) results in the following system of equations in state variables:

$$\dot{q} = Aq + Bu,$$

where q is the state vector, u is the vector of input effects, A (n×n) is the system's state matrix, and B (n×r) is the control (input) matrix.

For a complete description of the dynamic model, the state equation has to be complemented with equations making up a connection between the state variables q_1, …, q_n and the output variables p_1, …, p_n:

$$p = Cq + Du,$$

where p is the output vector, C (m×n) is the output matrix, and D (m×r) is the output control matrix.

The cantilever frequency characteristics based on the resulting model, in accordance with the Matlab Simulink parameters given in Table 1, are presented in Fig. 3.
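As a numerical sketch of how the state-space form above can be assembled and integrated for n = 3 independent modes, the following Python code builds A and B and simulates the driven response. The modal parameters are placeholders rather than the values of Table 1, and the tip-sample force F_i^{ts} is omitted for brevity.

```python
# State-space sketch of the multimode cantilever model: for each mode i,
# states are q_{2i-1} = z_i and q_{2i} = dz_i/dt, so that dq/dt = A q + B u
# with u_i = F_i cos(w_i t). Parameters below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

m = np.array([1e-10, 1e-10, 1e-10])          # effective masses, kg (assumed)
k = np.array([5.0, 40.0, 300.0])             # stiffnesses, N/m (assumed)
Q = np.array([300.0, 400.0, 500.0])          # quality factors (assumed)
c = np.sqrt(k * m) / Q                       # damping, c_i = sqrt(k_i m_i)/Q_i
w = np.sqrt(k / m)                           # natural frequencies, rad/s
F = np.array([1e-9, 5e-10, 2e-10])           # drive amplitudes, N (assumed)

n = len(m)
A = np.zeros((2 * n, 2 * n))
B = np.zeros((2 * n, n))
for i in range(n):
    A[2 * i, 2 * i + 1] = 1.0                # dz_i/dt = velocity state
    A[2 * i + 1, 2 * i] = -k[i] / m[i]       # restoring term
    A[2 * i + 1, 2 * i + 1] = -c[i] / m[i]   # damping term
    B[2 * i + 1, i] = 1.0 / m[i]             # force enters the velocity equation

def rhs(t, q):
    u = F * np.cos(w * t)                    # resonant drive, phi_i = 0
    return A @ q + B @ u

sol = solve_ivp(rhs, (0.0, 0.005), np.zeros(2 * n), max_step=1e-7)
z_tip = sol.y[0::2].sum(axis=0)              # total tip displacement
print("peak |z| =", np.abs(z_tip).max(), "m")
```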
Thermomechanical actuator model
The thermomechanical actuator is by design a resistive heater on the cantilever surface, of mass m_h and specific heat c_h, that initially has resistance R_h0 at ambient temperature T_0. When the electric current i flows through the conductor with resistance R_h, the power is P and the temperature of the conductor rises by ∆T. The power dissipated in the conductor's resistance, as a function of the overheat (relative to the initial ambient temperature), is determined by the following expression:

$$P = i^2 R_h = i^2 R_{h0}\left(1 + \beta\,\Delta T\right),$$

where ∆T = T − T_0 is the overheat relative to the initial temperature and β is the temperature coefficient of resistance. Then the amount of heat accumulated in the conductor is:

$$Q_h = c_h m_h\,\Delta T.$$

The defining equation for the thermomechanical actuator is the differential equation of heat balance:

$$c_h m_h \frac{dT}{dt} = P - \alpha S\,(T - T_c),$$

where T is the heater temperature, t is the time, α is the reduced heat transfer coefficient, S is the surface area of the heater, and T_c is the ambient temperature.
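A minimal sketch of integrating the heat-balance equation above for a constant drive current is given below; all parameter values are illustrative assumptions, not the cantilever's actual thermal constants.

```python
# Forward-Euler integration of the actuator heat balance
#   c_h m_h dT/dt = i^2 R_h0 (1 + beta (T - T0)) - alpha S (T - Tc).
# All parameter values below are illustrative assumptions.
c_h, m_h = 700.0, 1e-11          # specific heat J/(kg K), heater mass kg
R_h0, beta = 100.0, 4e-3         # resistance at T0 (ohm), temp. coefficient 1/K
alpha, S = 5e3, 1e-8             # heat transfer coeff. W/(m^2 K), area m^2
T0 = Tc = 293.0                  # initial and ambient temperature, K
i_drive = 2e-3                   # heater current, A (assumed)

dt, T = 1e-7, T0
for _ in range(200000):          # 20 ms, well past the ~0.1 ms thermal settling
    P = i_drive**2 * R_h0 * (1.0 + beta * (T - T0))   # dissipated power
    T += dt * (P - alpha * S * (T - Tc)) / (c_h * m_h)

print(f"steady-state overheat dT = {T - Tc:.2f} K")
```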
Due to the difference between the thermal expansion coefficients of the cantilever materials (Si as the base, Al as metallization), the heat produced by the heater causes mechanical stresses in the cantilever and, as a result, bending. The displacement d of the tip along the Z axis may be calculated by [19]:

$$d = \frac{L^2}{2\rho},$$

where L is the cantilever length and ρ is the radius of curvature of the cantilever's bent axis, which is determined by the widths b_1 and b_2 of the silicon and aluminum layers, their thermal expansion coefficients α_1 and α_2, and their elasticity moduli E_1 and E_2, respectively [19].

Then the equivalent force developed by the actuator is:

$$F_a = \frac{3EI}{L^3}\,d,$$

where I is the moment of inertia of the section and EI is the equivalent rigidity of the two-layer beam, defined in [20].

Model of power interaction between cantilever tip and sample
It is possible to study a sample only due to the variety of forces appearing between the cantilever tip and the sample during surface scanning. Depending on the probe-sample distance, different forces may prevail.

For instance, in the attraction mode (tip moving away from the sample) the prevailing type of interaction is the Van der Waals force of intermolecular interaction. In the repulsion mode (tip approaching the sample) elastic and inelastic interactions with the sample prevail. The interactions are calculated from the Derjaguin-Muller-Toporov (DMT) model [21]:

$$F^{ts}(h) =
\begin{cases}
-\dfrac{H R_{tip}}{6h^2}, & h \ge a_0,\\[2mm]
-\dfrac{H R_{tip}}{6a_0^2} + \dfrac{4}{3}\,E^{*}\sqrt{R_{tip}}\,(a_0 - h)^{3/2}, & h < a_0,
\end{cases}$$

where H is the Hamaker constant, R_tip is the radius of the cantilever tip curvature, h = z_s + ∆z is the distance between the cantilever tip and the sample surface (∆z is the cantilever deflection value, z_s is the distance between the un-bent cantilever and the sample), a_0 is the intermolecular (interatomic) distance, and E* is the effective modulus of elasticity of the probe-sample system:

$$\frac{1}{E^{*}} = \frac{1-\nu_t^2}{E_t} + \frac{1-\nu_s^2}{E_s},$$

where E_t and E_s are the elasticity moduli of the tip and sample materials, respectively, and ν_t and ν_s are the Poisson ratios of the tip and sample materials, respectively.
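The piecewise DMT force law above can be sketched directly; the Hamaker constant, tip radius, and material constants below are typical illustrative values, not those used in the paper.

```python
# Piecewise DMT tip-sample force: van der Waals attraction for h >= a0,
# attraction plus a Hertzian-type repulsion for h < a0.
# Material and geometry values below are illustrative assumptions.
import numpy as np

H = 1e-19            # Hamaker constant, J (assumed)
R_tip = 10e-9        # tip radius, m (assumed)
a0 = 0.165e-9        # intermolecular distance, m (assumed)
E_t, nu_t = 130e9, 0.28      # silicon tip modulus/Poisson ratio (assumed)
E_s, nu_s = 70e9, 0.17       # sample modulus/Poisson ratio (assumed)
E_eff = 1.0 / ((1 - nu_t**2) / E_t + (1 - nu_s**2) / E_s)

def f_ts(h):
    """DMT tip-sample force at distance h (negative = attractive)."""
    if h >= a0:
        return -H * R_tip / (6.0 * h**2)
    return (-H * R_tip / (6.0 * a0**2)
            + (4.0 / 3.0) * E_eff * np.sqrt(R_tip) * (a0 - h)**1.5)

for h in (2e-9, a0, 0.1e-9):
    print(f"h = {h*1e9:5.3f} nm  F_ts = {f_ts(h):+.3e} N")
```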
Measuring circuit model
Responses that arise during scanning of the sample are recorded by a measuring circuit embedded into the cantilever. The cantilever measuring circuit in the Nanoanalytik GmbH company's atomic-force microscope is formed by a system of four piezoresistors (R_1, R_3 on one side and R_2, R_4 on the other) with a resistance of 1098 ohms each. All of the piezoresistors are located so that cantilever deformation causes resistance changes, equal in absolute value and opposite in sign, in the adjacent arms of the bridge. The typical reference voltage is V_0 = 2.5 V.

At the output of the measuring circuit the measured voltage V_out is proportional to the difference of the relative resistances:

$$V_{out} = \frac{V_0}{4}\left(\frac{\Delta R_1}{R_1} - \frac{\Delta R_2}{R_2} + \frac{\Delta R_3}{R_3} - \frac{\Delta R_4}{R_4}\right),$$

where V_0 is the reference voltage applied to the measuring circuit.

While scanning, the cantilever perceives the external force action F from the surface, causing its deflection ∆z:

$$\Delta z = \frac{F}{k}, \qquad k = \frac{3EI}{L^3},$$

where k is the cantilever stiffness. It is obvious that the maximum cantilever deflection is observed at the free end (coordinate Y = L). The maximum bending moment for the cantilever under study, loaded at the end by the concentrated force F, appears at the attachment point (Y = 0) and is expressed by:

$$M_{max} = F L.$$

Mechanical stresses from the given load reach their greatest value on the surface Z = ±t_1/2 in the section where M_max acts, i.e. in the place where the cantilever is clamped [22]:

$$\sigma_{max} = \frac{M_{max}\,t_1}{2I}.$$

Finally, the impact of mechanical stresses on each of the piezoresistors included in the measuring circuit with resistance R_i causes the resistance increment ∆R_i:

$$\frac{\Delta R_i}{R_i} = \pi\,\sigma_i,$$

where π is the piezoresistive coefficient, whose value and sign depend on the resistor's location on the cantilever (the longitudinal piezoresistive coefficient is π_l = 70·10⁻¹¹ Pa⁻¹, the transverse piezoresistive coefficient is π_t = −π_l).

The cantilever measuring circuit model created in conformity with the relations described above is presented in Fig. 5.
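To tie the measuring-circuit relations together, this sketch propagates a tip force through deflection, clamped-end stress, piezoresistance change, and bridge output. The geometry and Young's modulus are assumed values, and the full-bridge factor follows from the Wheatstone relation given above with equal and opposite changes in adjacent arms.

```python
# Force -> deflection -> clamped-end stress -> piezoresistance -> bridge output.
# Geometry, modulus, and tip force are illustrative assumptions.
L, w, t1 = 350e-6, 140e-6, 3.5e-6   # cantilever length, width, thickness (assumed)
E = 169e9                            # Si Young's modulus, Pa (assumed)
I = w * t1**3 / 12.0                 # second moment of area
k = 3.0 * E * I / L**3               # cantilever stiffness, k = 3EI/L^3
pi_l = 70e-11                        # longitudinal piezoresistive coeff., 1/Pa
V0 = 2.5                             # bridge reference voltage, V

F = 10e-9                            # tip force, N (assumed)
dz = F / k                           # tip deflection
M_max = F * L                        # bending moment at the clamp (Y = 0)
sigma = M_max * (t1 / 2.0) / I       # surface stress at Z = +/- t1/2
dR_rel = pi_l * sigma                # relative resistance change, dR/R

# Full bridge with changes of opposite sign in adjacent arms:
# V0/4 * (x - (-x) + x - (-x)) = V0 * x
V_out = V0 * dR_rel
print(f"k = {k:.3f} N/m, dz = {dz*1e9:.2f} nm, V_out = {V_out*1e6:.2f} uV")
```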
Simulation model of atomic-force microscope
The developed mathematical models of the AFM converters and electronic assemblies, implemented as algorithmic blocks, allow switching to a multi-frequency AFM simulation model. The microscope model, described by a system of differential equations, has been implemented by means of the Matlab Simulink package as a structural scheme (Fig. 6). As the basis for the model, parameters matching the Nanoanalytik GmbH company's atomic-force microscope cantilever were taken: cantilever length L = 350 μm, cantilever width w = 140 μm, aluminum layer thickness t_Al = 0.7 μm, silicon oxide layer thickness t_SiO2 = 0.5 μm.

Conclusion
In this paper we have presented a model describing the operation of self-actuating and self-sensing cantilevers and their mechanical and electrical characteristics. The developed model allows researching and interpreting the measured responses in the multimode AFM mode, taking its natural features into account. It is also of interest as a basis for the development of model-algorithmic support for measurements of the surface properties of materials and thin-film structures at the micro- and nanoscale using self-sensing and self-actuating cantilevers.
Fig. 2. The structural scheme of a multimode AFM. Here f_i (i = 1, 2, 3) are the excitation frequencies; A_i, φ_i are the cantilever oscillation amplitudes and phases measured at the excitation frequencies f_i.
Fig. 4. Structural model of the block modeling the tip-sample interaction.
Fig. 5. The cantilever measuring circuit model.
Fig. 6. The simulation model of the multi-frequency atomic-force microscope.
Table 1. Cantilever dynamic model parameters.
2018-12-18T15:05:45.105Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "4a7053986cef01c9b671272ba272f88ca34654a6", "oa_license": "CCBYNC", "oa_url": "http://elib.sfu-kras.ru/bitstream/handle/2311/72115/08_Marinushkin.pdf;jsessionid=53EC6CE198A85A53A7B77A8DFD1D6640?sequence=1", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4a7053986cef01c9b671272ba272f88ca34654a6", "s2fieldsofstudy": [ "Physics", "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
4518631
pes2o/s2orc
v3-fos-license
SB‐216763, a GSK‐3β inhibitor, protects against aldosterone‐induced cardiac, and renal injury by activating autophagy Abstract Cardiovascular and renal inflammation induced by aldosterone (Aldo) plays a pivotal role in the pathogenesis of hypertension and renal fibrosis. GSK-3β contributes to inflammatory cardiovascular and renal diseases, but its role in Aldo-induced hypertension and renal damage is not clear. In the present study, rats were treated with Aldo combined with SB-216763 (a GSK-3β inhibitor) for 4 weeks. Hemodynamic, cardiac, and renal parameters were assayed at the indicated times. Here we found that rats treated with Aldo presented cardiac and renal hypertrophy and dysfunction. Cardiac and renal expression levels of molecular markers attesting to inflammation and fibrosis were increased by Aldo infusion, whereas treatment with SB-216763 reversed these alterations. SB-216763 suppressed cardiac and renal inflammatory cytokine levels (TNF-α, IL-1β, and MCP-1). Meanwhile, SB-216763 increased the protein levels of LC3-II in the cardiorenal tissues as well as p62 degradation, indicating that SB-216763 induced autophagy activation in cardiac and renal tissues. Importantly, inhibition of autophagy by 3-MA attenuated the role of SB-216763 in inhibiting perivascular fibrosis and tubulointerstitial injury. These data suggest that SB-216763 protected against Aldo-induced cardiac and renal injury by activating autophagy, and might be a therapeutic option for salt-sensitive hypertension and renal fibrosis.

Aldosterone acts via the mineralocorticoid receptor (MR) in the kidney. 1,2 Clinical studies demonstrated that inhibition of MR could decrease the risk of both morbidity and mortality in patients with heart failure, and inhibit albumin excretion in hypertensive and diabetic patients. 3-5 In addition, MR antagonists also present a renoprotective effect in several experimental models of kidney disease. 6,7 Aldo is implicated in cardiovascular and renal remodeling by inducing inflammation, oxidative stress, fibrosis, and hypertrophy. 2,8,9 Previous studies showed that chronic inflammation has a critical role in the pathogenesis of hypertension, 10,11 and renal inflammation is correlated with the development and progression of renal damage. 12,13 These findings suggest that Aldo-induced inflammation might be used as a potential therapeutic target for treating salt-sensitive hypertension and renal fibrosis. 14

Glycogen synthase kinase 3β (GSK3β) is a multifunctional serine/threonine kinase. GSK3β is involved in the growth of the heart during development and in response to stress. 15 However, its role in regulating cardiac and renal injury remains unclear. GSK3β has a broad range of substrates and regulates the inflammatory response, cell differentiation, and survival. 16,17 GSK3β is an important positive regulator of the inflammatory process. 18-21 GSK3β-deficient cells become more sensitive to tumor necrosis factor α (TNF-α)-induced apoptosis. 22 Recent studies demonstrated that the therapeutic effect of GSK3β inhibitors is associated with suppression of the inflammatory response. Inhibition of GSK3β results in decreased activation of the pro-inflammatory transcription factor NF-κB. Additionally, GSK3β inhibition contributes to the production of the anti-inflammatory cytokine IL-10. 23 GSK3β inhibition triggers a profound autophagic response in cells under serum-free conditions. 24 This phenomenon was also observed in vivo in ischemic mouse models. 25,26
However, the mechanism underlying GSK3β inhibition-triggered autophagy is not fully clear. Autophagy is a lysosome-mediated intracellular catabolic process by which cells remove their damaged organelles for the maintenance of cellular homeostasis. 27 Autophagy is induced in response to intracellular or extracellular signals, such as starvation, pathogen infection, and endoplasmic reticulum stress. 28,29 Emerging evidence has indicated that autophagy may have an essential role for the host during bacterial clearance and may also interact with inflammatory processes, which consequently may impact the outcomes of disease progression. 30,31

Based on the above findings, we investigated whether GSK3β inhibition protects against Aldo-induced cardiac and renal injury by activating autophagy. The current data suggest that rats treated with Aldo present cardiac and renal injury, and treatment with SB-216763 reverses these alterations. SB-216763 suppressed cardiovascular and renal inflammation by activating autophagy in cardiac and renal tissues.

| Animal models
The study was approved by the Ethics Committee of Nantong University. Adult male Wistar rats were obtained from the Chinese Academy of Sciences (Shanghai, China) and maintained in a pathogen-free facility. The animals were divided into four groups (n = 9/group): (1) vehicle infusion group treated with vehicle alone; (2) Aldo-salt group treated with an infusion of Aldo-salt (1 mg/kg/day diluted in sunflower oil and administered by subcutaneous injection); (3) Aldo-salt plus SB-216763 group treated with an infusion of Aldo-salt plus SB-216763 at 1.5 mg/kg/day (MCE, Princeton, NJ); and (4) SB-216763 group. This dose was chosen on the basis of previous studies reporting its anti-inflammatory role. 32,33 After 4 weeks of treatment, urine was collected in metabolic cages, and hemodynamic parameters were assayed. 34 For example, blood pressure (BP) was measured in conscious but restrained animals, prewarmed to 34°C for 20 min. For each group, BP was measured three times on 3 separate days, and the mean value of all readings was taken as the average for the rat. Then blood samples and heart and kidney tissues were collected under sodium pentobarbital anesthesia.

| Western blot and antibodies
Western blot analysis to assess rat LC3-I, LC3-II, p62, and β-actin protein expression was performed as previously described. 35 The anti-LC3-I/LC3-II/p62 primary antibodies were purchased from Santa Cruz Biotechnology (Santa Cruz, CA). β-actin primary antibodies were purchased from Sigma (St. Louis, MO). Briefly, equivalent amounts of total protein (150 μg) were electrophoresed through a 12% SDS-polyacrylamide gel and then wet electro-transferred to 0.2 μm PVDF membranes (Bio-Rad, Richmond, CA). The blots were incubated at 37°C for 2 h with the indicated antibodies and then incubated with a goat anti-rabbit HRP-conjugated secondary antibody (1:5000, Jackson, Bar Harbor, ME). Protein signals were visualized by enhanced chemiluminescence detection (Pierce Biotechnology, Rockford, IL). The intensity of the selected bands was quantified using ImageJ.

| Enzyme-linked immunosorbent assay (ELISA)
The hearts and kidneys were removed and homogenized at the indicated times. Homogenates were sonicated for 30 s and then centrifuged at 2500g and 4°C. The supernatants were used for measurement of TNF-α, MCP-1, and IL-1β.
ELISAs were performed using a TNF-α kit (R&D Systems, Minneapolis, MN), an MCP-1 kit (R&D Systems), and an IL-1β kit (R&D Systems) according to the manufacturers' protocols.

| Histological analysis
Rats were treated with Aldo or SB-216763 for 4 weeks, and kidney tissues or left ventricles were quickly fixed with buffered 4% paraformaldehyde, embedded in paraffin, and cut into 4 µm-thick sections. Periodic acid-Schiff and Masson's trichrome staining were performed on serial sections. Tubulointerstitial fibrosis areas were semiquantified using ImageJ software and expressed as a percentage of the total area. Perivascular fibrosis was assessed by calculating the percentage of trichrome-stained collagen deposits surrounding the vessel relative to the total perivascular area using the software's color cube function.

| Fluorescence microscopy analysis
Transverse sections 3 µm thick were fixed with 3% paraformaldehyde and subjected to immunocytochemistry as previously described. 32,33 The sections were briefly rinsed in PBS and blocked in a solution containing 5% BSA (Sigma) and 0.1% Triton X-100 (Sigma) for 1 h at room temperature. The slides were then immunostained with primary antibodies against LC3β (3868#, Cell Signaling Technology, Danvers, MA). To visualize the primary antibodies, slides were stained with FITC-conjugated secondary antibodies. The slides were also stained with 4′,6-diamidino-2-phenylindole (DAPI) to visualize the nuclei. After being washed three times in PBS, samples were examined under a fluorescence laser scanning confocal FV1000 microscope (Olympus).

| Statistical analysis
All data are expressed as mean ± SEM, computed from the average measurements obtained from each group of animals. Results were analyzed using the unpaired Student's t-test or the Mann-Whitney U-test. Analyses were conducted using GraphPad Prism 4.0 (GraphPad Software, Inc., San Diego, CA). Differences were deemed statistically significant at P < 0.05.
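As a hedged illustration of the comparison procedure described in the Methods (the study itself used GraphPad Prism), the sketch below applies the unpaired t-test or, when normality fails, the Mann-Whitney U-test to two groups; the numbers are placeholders, not measurements from this study.

```python
# Hypothetical illustration of the statistical comparison in the Methods
# (unpaired Student's t-test or Mann-Whitney U-test, alpha = 0.05).
# The numbers below are placeholders, not data from this study.
import numpy as np
from scipy import stats

aldo = np.array([182.0, 175.0, 190.0, 178.0, 185.0])       # e.g., SBP, mmHg
aldo_sb = np.array([152.0, 149.0, 158.0, 155.0, 150.0])    # Aldo + SB-216763

# Use the t-test when both samples look normally distributed,
# otherwise fall back to the nonparametric Mann-Whitney U-test.
if all(stats.shapiro(g).pvalue > 0.05 for g in (aldo, aldo_sb)):
    stat, p = stats.ttest_ind(aldo, aldo_sb)               # unpaired t-test
else:
    stat, p = stats.mannwhitneyu(aldo, aldo_sb)

print(f"p = {p:.4f}; significant at P < 0.05: {p < 0.05}")
```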
| SB-216763 suppresses Aldo-induced cardiorenal inflammation
Aldo-treated rats present a significant increase in systolic blood pressure (SBP) and diastolic BP (DBP) (Table 1). Meanwhile, Aldo treatment results in an increase in the ratio of heart weight to body weight and a decrease in heart rate (Table 1). Both cardiac dysfunction and hypertrophy are reversed by SB-216763, a GSK-3β inhibitor (Table 1). In addition, the body weight, urine volume, serum creatinine, creatinine clearance, kidney weight/body weight ratio, and urinary protein excretion of each group at the end of the 4-week experiment are presented in Table 2. Aldo treatment induces renal hypertrophy, increases the glomerular filtration rate, and results in a significant increase in serum creatinine compared with the other groups. Both renal dysfunction and hypertrophy are prevented by SB-216763 treatment.

We assayed the effect of SB-216763 on GSK3β expression and activation. As shown in Figures 1A and 1B, SB-216763 increases phospho-GSK-3β Ser9 (pSer9) levels, but not total GSK-3β levels, in the heart and kidney. Previous studies demonstrated that inflammation plays an important role in the pathogenesis of hypertension and in the development and progression of renal fibrosis. 13,36,37 We then assayed the expression of various proinflammatory cytokines by qPCR and ELISA. Figure 1C-F shows that the cardiac mRNA and protein levels of TNF-α, MCP-1, and IL-1β are markedly increased by Aldo infusion, whereas SB-216763 treatment inhibits cardiac inflammatory cytokine levels (Figure 1C-F). Similarly, the renal inflammatory cytokine levels (TNF-α, MCP-1, and IL-1β) are enhanced after Aldo treatment, whereas the expression of these genes is inhibited by SB-216763 (Figure 2A-D).

| SB-216763 suppresses Aldo-induced cardiorenal fibrosis
To evaluate cardiorenal fibrosis, we assayed the expression of collagen type I (Col I) and transforming growth factor-β (TGF-β), which are an extracellular matrix protein and a profibrotic marker, respectively. Although the Aldo-treated group shows an increase of Col I and TGF-β in the rat heart, the expression of Col I and TGF-β is inhibited by SB-216763 treatment (Figure 3A). As in the case of the heart, Aldo treatment upregulates the expression of renal Col I and TGF-β, and SB-216763 suppresses this increase (Figure 3B). Perivascular fibrosis in the left ventricle was assessed by deposition of collagen around the vasculature. Figure 3C presents representative images of collagen deposition and quantitation of fibrosis. SB-216763 treatment markedly suppresses Aldo-induced perivascular fibrosis. Meanwhile, periodic acid-Schiff-stained sections revealed that SB-216763 inhibits Aldo-induced tubulointerstitial damage (Figure 3D). These results suggest that SB-216763 treatment inhibits the cardiorenal inflammation and fibrosis induced by Aldo.

| SB-216763 increases autophagy activation in cardiorenal tissues
Recent studies demonstrated that GSK3β functions as a key regulator coordinating cellular homeostasis by suppressing autophagy in physiological and pathological processes such as cancer, 38 axonal degeneration, 39 and diabetes. 40 Moreover, autophagy plays a crucial role in inflammation and fibrosis. Saitoh et al 41 demonstrated that loss of the autophagy protein Atg16L1 enhances endotoxin-induced IL-1β production. Knockdown of autophagy increases the immune response in hepatitis C virus-infected hepatocytes. 42 The p62 protein, also called sequestosome 1 (SQSTM1), binds directly to LC3 and GABARAP family proteins via a specific sequence motif; the protein is itself degraded by autophagy and may be used as a marker to study autophagic flux. 43 Here we found that Aldo treatment slightly increases the protein levels of LC3-II (the marker of autophagy activation) in the cardiorenal tissues as well as p62 degradation, whereas SB-216763 treatment results in marked activation of autophagy (Figures 4A, 4B, 5A, and 5B). Fluorescence analysis was carried out to further verify the activation of autophagy after Aldo or Aldo plus SB-216763 treatment. Figures 4C and 5C show that following SB-216763 treatment there is a significant increase of green LC3 puncta, representing autophagic vacuoles, and an accumulation of LC3-II in cardiorenal tissues, indicating that autophagy is activated.

| SB-216763 suppresses Aldo-induced cardiorenal injury by regulating autophagy
SB-216763 suppresses Aldo-induced cardiorenal injury and activates autophagy, and autophagy plays an important role in inhibiting the proinflammatory response. Therefore, we next investigated whether SB-216763 suppresses Aldo-induced cardiorenal injury by activating autophagy. 3-Methyladenine (3-MA), which inhibits the formation of autophagosomes, attenuates LC3-II upregulation; 3-MA is usually used to inhibit and study the mechanism of autophagy. 44 As shown in Figures 6A and 6B, the level of perivascular fibrosis and the tubulointerstitial injury score increase after Aldo treatment, whereas SB-216763 can reverse these alterations.
More importantly, pharmacological inhibition of autophagy by 3-MA markedly inhibits the role of SB-216763 in suppressing perivascular fibrosis and tubulointerstitial injury. These results demonstrate that SB-216763, a GSK-3β inhibitor, protects against Aldo-induced cardiac and renal injury by activating autophagy.

DISCUSSION

In this study, we investigated the potential role of SB-216763, a GSK3β inhibitor, in treating salt-sensitive hypertension and renal fibrosis. The current data demonstrate that: (i) SB-216763 inhibits Aldo-induced cardiorenal inflammation; (ii) SB-216763 inhibits Aldo-induced cardiorenal fibrosis; (iii) SB-216763 activates cardiorenal autophagy induced by Aldo; and (iv) SB-216763 inhibits cardiorenal injury by activating autophagy. These results reveal the important role of SB-216763 and autophagy in regulating salt-sensitive hypertension.

Renal and cardiovascular fibrosis is found to be linked to inflammation in Aldo-treated models.45 Inhibition of inflammatory cytokines ameliorates renal and cardiac injury in several experimental models.46 Here we demonstrate that Aldo treatment induces the production of pro-inflammatory cytokines (such as TNF-α, IL-1β, and MCP-1), whereas SB-216763 markedly reverses these alterations in the kidney and heart. Autophagy is generally considered to be a cell survival mechanism that functions in response to various stress conditions and plays a critical role in human physiology and diseases, especially in inflammation and immunity.47,48 Autophagy or autophagy-related proteins can control inflammatory signaling by regulating inflammatory transcriptional responses. For example, upregulated p62 in autophagy-deficient cells activates the pro-inflammatory transcription factor NF-κB.49 Autophagy also prevents tissue inflammation due to its role in apoptotic corpse clearance.30 Recent studies reported that impaired or deficient autophagy is believed to contribute to renal and cardiovascular disease, as described in previous studies that focused on the role of autophagy in cardiorenal disease,50,51 but the mechanism is not clearly understood. GSK3β can regulate autophagy, but its role appears context-dependent: one study52 reported that lithium (an inhibitor of GSK3β activity) can induce autophagy by inhibiting inositol monophosphatase, but another study showed that lithium can reduce autophagy and apoptosis after neonatal hypoxia-ischemia.53 Inhibition of GSK3β activity using SB-216763, or knockdown of GSK3β, promotes autophagy to reduce cadmium-induced apoptosis.54 In this study, we found that SB-216763 suppresses cardiovascular and renal inflammation and activates autophagy in cardiac and renal tissues. More importantly, inhibition of autophagy increases perivascular fibrosis and tubulointerstitial injury in the SB-216763-treated group. These results demonstrate that SB-216763 protects against Aldo-induced cardiac and renal injury by activating autophagy. As shown in Figures 4 and 5, Aldo treatment can slightly increase the protein levels of LC3-II in the cardiorenal tissues, as well as p62 degradation, indicating that Aldo slightly activates autophagy. The results suggest that autophagy might be a protective mechanism following the occurrence of cardiorenal injury. Unfortunately, this level of autophagy activation is not sufficient to resist the cardiorenal injury induced by Aldo. Therefore, the additional autophagy activation induced by SB-216763 plays an important role in inhibiting Aldo-induced cardiorenal injury.
CONCLUSION

The current study demonstrated that a GSK-3β inhibitor, SB-216763, suppresses aldosterone-induced cardiac and renal injury by increasing autophagy activation, thus offering a new target for the prevention of cardiac and renal injury.
“The psychological skeleton in the closet”: mortality after a sibling’s suicide

To study the association between loss of an adult sibling due to suicide and mortality from various causes up to 18 years after bereavement, we conducted a follow-up study between 1981 and 2002, based on register data representing the total population of Swedes aged 25-64 years (n = 1,748,069). An elevated mortality rate from all causes was found among men (RR 1.26; 95% CI: 1.14-1.40) and women (1.27; 1.11-1.45) who had experienced a sibling's suicide. The standardized rate ratio of suicide of bereaved to non-bereaved persons was 2.46 (1.86-3.24) among men and 3.25 (2.28-4.65) among women. We also found some indications of an interrelation between sibling suicide and subsequent deaths from external causes other than suicide in men (1.77; 1.34-2.34) and deaths from cardiovascular disease in women (1.37; 0.99-1.91). An elevated all-cause mortality rate was found after the first year of bereavement in men, while bereaved women experienced higher mortality rates during the first 2 years and after 5 years of bereavement. Our study provides support for adverse health effects among survivors associated with sibling loss due to suicide. Sibling suicides were primarily associated with suicide in bereaved survivors, although there was an increased mortality rate from discordant causes, which strengthens the possibility that the observed associations might not be entirely due to shared genetic causes.

Introduction

'The person who commits suicide puts his psychological skeleton in the survivor's emotional closet' [1] expresses the fact that every case of suicide leaves surviving kin to deal with grief that may have lingering adverse effects for several years following the loss [2]. Several scholars suggest that the bereavement process after suicide is different and more difficult to cope with as compared to other types of losses [3][4][5]. Suicide survivors often experience more severe and prolonged grief reactions as compared to other survivors [6]. Studies have also documented a higher prevalence of mental disorders, such as post-traumatic stress disorder, complicated grief disorder, and depression among family members who have experienced suicide when compared to natural deaths [7][8][9][10]. This indicates that more severe health consequences could follow family deaths due to suicide when compared to deaths from other causes. However, the empirical evidence documenting the adverse health impacts among suicide survivors remains sparse, due to the comparatively rare occurrence in the general population, as well as the lack of linked data [11,12]. Previous studies have examined the influence of spousal suicide on health and mortality among surviving spouses [13], while bereavement following suicide among siblings has been largely overlooked. It could be that the suicide of an adult sibling has less of an impact than the death of other family members (spouse, children), since adult siblings normally do not live together [14]. The adult sibling relationship is characterized by a lower frequency of contact when compared with other familial relationships [15]. Nevertheless, to the extent that siblings are also beloved and provide companionship, one would still expect that the suicide of an adult sibling, just as the suicide of a spouse, would be associated with adverse health effects. In fact, the death of a sibling often represents the loss of one of the most intimate and durable relationships of a person's lifetime [16].
There are specific qualitative aspects of the mourning process that are intensified and frequently more problematic for survivors of suicide than for other types of bereaved [17,18], which could contribute to adverse health effects. Most of these relate to the fact that suicide survivors are viewed more negatively by others as well as by themselves ("survivor guilt") [18]. These negative consequences become additional stressors over and above the 'normal' grief process for the suicide survivor and may lead to an unsatisfactory bereavement outcome, which may be manifested as deterioration in physical or mental health, as well as put the survivor him/herself at risk of suicide. During the acute stage of grief following the loss of a loved one, survivors are at risk of intense psychogenic shock, also known as the "broken heart syndrome" [19][20][21]. Chronic stress following suicide bereavement could additionally lead to pathophysiological changes in the sympathetic nervous system, the hypothalamic-pituitary-adrenal (HPA) axis, and the immune system [22,23]. Deleterious coping responses such as smoking, increased alcohol consumption, and poor diet and exercise habits may also follow the complicated bereavement process after suicide [24]. Such behaviors are likely to contribute to an increased risk of both suicide and poor physical health over the longer term. The association between sibling suicide and mortality among bereaved siblings could additionally depend on the time since the death. One study found that it took much longer for suicide survivors' symptoms to abate, and that they remained higher on some dimensions such as anxiety when compared to natural death survivors [25], while other studies have found that any initial differences disappeared 2-4 years after the death [6,12]. Whether the mortality risk persists over longer periods after the loss of a sibling remains unclear. Siblings share a biological predisposition to death and disease, which makes confounding by genetic inheritance likely. For instance, there is evidence that genetic factors can predispose people towards the development of psychiatric disorders that are associated with suicide, particularly depression and bipolar disorders [26]. Siblings also share many environmental exposures during childhood and adolescence, such as family disorganization and breakup, parental loss, substance abuse, intrafamily violence, and sexual abuse, that could be considered confounders [18]. An increased risk of suicide after a sibling's suicide might, therefore, be a partial marker of shared genetic predisposition or shared environmental determinants of suicide. One method of getting closer to causal inference is to examine deaths due to specific causes [27]. Studying whether pairs of siblings died of the same specific cause (both died of suicide) or of discordant causes may assist in teasing out causation from confounding. We sought to conduct a large-scale longitudinal study of mortality following the loss of an adult sibling due to suicide, using intergenerationally linked longitudinal data from nationwide Swedish registers. We postulated that the association between a sibling's death due to suicide and mortality would vary according to time since the sibling's suicide, gender of the bereaved sibling, and specific cause of the bereaved sibling's death.

Methods

The data come from the Swedish Work and Mortality Data (HSIA). HSIA is a multiply linked dataset of national Swedish routine registers, maintained at the Centre for Health Equity Studies (CHESS) in Stockholm.
The data material was approved by the Regional Ethical Review Board of Karolinska Institutet on 2002-11-11 (decision no. 02-481) and by the Central Ethical Review Board on 2012-09-13 (application no. 2012/1260-31). These decisions approve that the data can be used for several purposes, including this study. Written consent was not needed since all information is anonymous and researchers did not have access to any personal information that could identify study participants (e.g., personal identity number, home address, etc.). Consequently, it was not possible to trace specific individuals included in the data material. We have also followed all other ethical principles and guidelines in handling the data. In the study, all persons born in Sweden during the period 1932-1962 and alive at the end of 1980 were linked to their mother, provided that she was born in Sweden and alive at the same time. Hence, sibling groups were identified through the mother; siblings could not be linked unless the mother was alive at the end of 1980. Singletons (persons from one-child families) were excluded from the analysis. To get a reasonable age balance and in order to use adequate control variables, we focused on people aged 25-64 years. The study persons were observed from the beginning of 1981 until 2002 (the last observation point in the dataset used) unless they died before that. We included individual-level information about basic socio-demographic variables (age, socioeconomic status, marital status, number of children, number of siblings, region of residence, and calendar year) to proxy social and regional mortality differences, and the month and specific cause of death for all persons who died during the study period. Socioeconomic status distinguished blue-collar workers, white-collar workers, the self-employed, and individuals who were not active in the labor market. Marital status consisted of the categories married, previously married, and never married. Number of children and number of siblings were treated as categorical variables. Region of residence refers to each person's county of residence and consisted of 26 different categories. All covariates except age and calendar year (which are time-varying) were measured at the end of 1980, which antedated any death. Deaths due to suicide were distinguished by the ICD8 and ICD9 codes E950-E959, and ICD10 codes X60-X84. In persons who experienced a sibling's suicide we additionally separated deaths from external causes other than suicide (ICD8 codes E807-E949 and E960-E999, ICD9 codes E800-E949 and E960-E999, and ICD10 codes V01-X59 and X85-Y98), cardiovascular diseases (ICD8 codes 410-438 and 795, ICD9 codes 410-438 and 798, and ICD10 codes I21-I52 and I60-I69), cancer (ICD8 and ICD9 codes 140-239, and ICD10 codes C00-D48), and all other causes (all other codes). All people who experienced a sibling's death (from any cause) during the study period were included in the dataset used, whereas those who did not experience a sibling's death (from any cause) comprised a 10% random sample. In the statistical analyses, people from each group were weighted according to their sampling proportion. Normalized weights were used to correct for inflated t-statistics. The suicide of a sibling was treated as a time-varying exposure, which means that when a sibling died due to suicide, the surviving sibling changed status from being a non-bereaved to being a bereaved person. If no sibling died, or if a sibling died from any other cause than suicide, the index person was categorized as 'non-bereaved'.
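To make the ICD-10 cause-of-death grouping above concrete, here is a small, hypothetical Python helper that classifies 3-character ICD-10 category codes into the study's groups. The simplified range handling is an illustrative assumption, not the registers' actual coding routine.

```python
# Hypothetical helper illustrating the ICD-10 cause-of-death grouping used in
# the study; handles 3-character category codes only, and relies on the fact
# that the quoted ranges are contiguous in (letter, number) order.
def icd10_group(code: str) -> str:
    """Classify a 3-character ICD-10 category code, e.g. 'X70' or 'I25'."""
    letter, num = code[0], int(code[1:3])

    def in_range(lo: str, hi: str) -> bool:
        key = (letter, num)
        return (lo[0], int(lo[1:3])) <= key <= (hi[0], int(hi[1:3]))

    if in_range("X60", "X84"):
        return "suicide"
    if in_range("V01", "X59") or in_range("X85", "Y98"):
        return "other external"
    if in_range("I21", "I52") or in_range("I60", "I69"):
        return "cardiovascular"
    if in_range("C00", "D48"):
        return "cancer"
    return "other"

assert icd10_group("X70") == "suicide"
assert icd10_group("W19") == "other external"   # accidental fall: V01-X59
assert icd10_group("I25") == "cardiovascular"
assert icd10_group("C34") == "cancer"
```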
We estimated standardized mortality rates in the index persons using Poisson regressions, and focused on the rate ratio of bereaved to non-bereaved persons. Separate analyses were conducted for men and women. Covariates included in the regressions were age, calendar year, region of residence, socioeconomic status, marital status, number of children, and number of siblings. Each control variable provided good statistical fit. Throughout the paper, the level of statistical significance referred to is 0.05.
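A minimal sketch of this person-period Poisson setup, using statsmodels on simulated data, is shown below; all variable names and values are illustrative rather than the study's register data.

```python
# Minimal sketch of a Poisson rate-ratio model with a log person-time offset,
# on a simulated dataset (an assumption for illustration only).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000
df = pd.DataFrame({
    "bereaved": rng.integers(0, 2, n),          # time-varying exposure status
    "age_grp": rng.choice(["25-34", "35-44", "45-54", "55-64"], n),
    "person_years": rng.uniform(0.5, 22.0, n),  # follow-up time at risk
})
# Simulate deaths with a 1.3-fold rate among the bereaved (illustrative)
rate = 0.003 * np.where(df["bereaved"] == 1, 1.3, 1.0)
df["deaths"] = rng.poisson(rate * df["person_years"])

model = smf.glm(
    "deaths ~ bereaved + C(age_grp)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),          # log person-time offset
).fit()

rr = np.exp(model.params["bereaved"])           # standardized rate ratio
lo, hi = np.exp(model.conf_int().loc["bereaved"])
print(f"rate ratio = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```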
Results

In total, 6,833 men and 6,810 women experienced a sibling's suicide, and 357 and 217 of them, respectively, subsequently died (Table 1). The corresponding numbers in non-bereaved persons (persons who did not experience a sibling's suicide) were 46,248 deaths among 884,370 men, and 27,988 deaths among 850,056 women. Almost 15% of all deaths in bereaved men and bereaved women, respectively, were due to their own suicide, as compared to about 10% in non-bereaved men and less than 7% in non-bereaved women. The unstandardized mortality rate of bereaved persons was roughly twice that of non-bereaved persons (4.8 vs. 2.5 for men and 2.9 vs. 1.6 for women in Table 2). Bereaved persons were slightly older than non-bereaved persons, somewhat more of them had a lower socioeconomic position and were not married, and they had more siblings, which is expected considering that the likelihood of observing a sibling's death must be higher in larger sibling groups (Table 2). We accounted for distributional differences between bereaved and non-bereaved persons using the control variables. Hence, throughout the analyses we estimated standardized mortality rate ratios, i.e., the ratio of the death rate of persons who experienced the suicide of a sibling to the death rate of persons who did not. The standardized mortality rate was notably higher in bereaved persons than in non-bereaved persons (Table 3). Among men, the mortality rate ratio of bereaved to non-bereaved persons was 1.26 (95% CI: 1.14-1.40), whereas in women it was 1.27 (1.11-1.45). In most subcategories of the control variables, there was an association between having experienced a sibling's suicide and own mortality, but the statistical power was generally too small to facilitate any detailed conclusions on this point ("Appendix"). The strongest association was for deaths due to suicide (Table 3). Bereaved men had a standardized suicide rate that was 2.38 times higher (1.81-3.14) than that of non-bereaved men, while the association was even stronger in women (3.25; 2.28-4.65). We also found associations of sibling suicide with other external causes of death (primarily accidents and other violent deaths) in men (1.77; 1.34-2.34). The broad category of deaths from any other cause (than external, cardiovascular disease, or cancer) was also associated with sibling suicide among men (1.25; 1.02-1.53). In women, the relatively small number of deaths implied broader confidence intervals. Men did not experience an immediate elevation in the all-cause mortality rate after a sibling's suicide (Fig. 1). The standardized mortality rate was highest during the second year, when it was over twice that of non-bereaved men. Thereafter it was fairly stable at around 1.5 times that of non-bereaved men. Women displayed a somewhat different pattern. During the first 2 years after a sibling's suicide, they had a mortality rate that was approximately 1.5 times that of non-bereaved women. During the third to fifth year, the two groups were more or less in parity, whereas thereafter bereaved women generally experienced an elevated mortality rate.

Discussion

This large-scale follow-up study based on the Swedish population registers examined bereavement following a sibling's suicide. We found elevated risks of all-cause mortality among both bereaved women and bereaved men. Sibling suicide primarily increased the risk of own suicide, and stronger associations were found in women than in men. However, there were also some indications of associations with deaths from cardiovascular disease in women and from external causes other than suicide in men. An elevated risk starting in the second year after bereavement was found among men, while some support was found for both a short-term and a longer-term elevation in mortality following sibling suicide among women. There are reasons to believe that suicide among siblings has particularly severe consequences for the health of surviving siblings. Since the death of a sibling has been considered to have less impact than the death of other family members, the social support system may be unprepared to respond appropriately to the grieving sibling's needs after suicide [15,16]. Grief processes occurring within the family may also leave the remaining sibling(s) more vulnerable. Parents who lose a child often become preoccupied and absorbed with their own grief and post-traumatic stress. Under such circumstances, they may be unprepared to respond to the needs of the remaining children [16]. The fact that the social support system primarily focuses on the bereaved parents may leave remaining siblings unsupported in their grief process. Given that bereavement after suicide is more difficult than grief following other types of losses, it could have particularly severe consequences for the health of surviving siblings in circumstances where siblings lack support from their immediate social networks. The strongest association was found with respect to concordant causes of death, i.e., when both persons in a sibling pair died of suicide. This might be an indication of a more difficult bereavement process as compared to other types of losses [17,18]. The prevalence of mental disorders, such as post-traumatic stress disorder, complicated grief, and depression is comparatively higher in individuals bereaved by suicide and violent deaths [7][8][9][10] and could contribute to their higher risk of completed suicide. Qualitative aspects of the mourning process could all contribute to a higher suicide risk in bereaved siblings. Survivors of suicide seem to struggle more with making sense of the sudden and unexpected loss [28]. Since suicide violates fundamental notions of self-preservation, survivors often struggle to make sense of the motives and frame of mind of the deceased. They often exhibit higher levels of guilt, blame, and responsibility for the death than other mourners [29] and experience feelings of rejection and abandonment by the loved one, along with anger toward the deceased [6]. Death by suicide is also stigmatizing to surviving family members and triggers a chain reaction of negative consequences [18].
Stigma is in turn linked to a lack of social support, and suicide survivors seem to receive less emotional support than natural death survivors [30]. We also found some evidence suggesting that sibling suicide might be associated with elevated cardiovascular mortality among women, albeit the association was only close to statistical significance due to the relatively few deaths. Extreme stress levels following the suicide of a sibling could lead to cardiovascular disease through psycho-physiological stress mechanisms [19][20][21]. Women have been suggested to be especially vulnerable to acute stress following grief (i.e., 'the broken heart syndrome') [20]. Chronic stress following suicide bereavement could also lead to pathophysiological changes in the sympathetic nervous system, the HPA axis, and the immune system [22,23]. In addition, deleterious coping responses such as smoking, increased alcohol consumption, and poor diet and exercise habits could also follow the complicated bereavement process after suicide [24]. Such behaviors could contribute both to increased suicide rates among the bereaved and to the excess risk of cardiovascular disease among women. Men's higher mortality rate from external causes (other than suicide) might reflect adverse coping behaviors such as violent and high-risk behavior that could lead to an increased risk of accidents and crime-related deaths. As compared to men, women were found to be more vulnerable to a sibling's suicide in terms of their own risk of suicide, and they also seemed to exhibit an excess rate of cardiovascular mortality following a sibling's suicide. These findings may reflect that women place more emphasis on social relationships than men do, particularly when it comes to family members [31]. The loss of a sibling could hence have stronger emotional consequences for women, and this could, in turn, account for poorer mental health and a higher risk of suicide. The longer-term association between sibling suicide and mortality from all causes among bereaved women may also suggest that longer-term mechanisms, such as an extended and complicated grief process and severe depression, underlie the association (Fig. 1: standardized all-cause mortality rate ratio, with 95% confidence interval, by years since the suicide of a sibling, relative to non-bereaved persons). We have previously reported that women's health is more influenced by bereavement than men's [27]. We found that the associations between concordant causes of death (both siblings died of suicide) were stronger than the associations between discordant causes. This could, to some extent, indicate confounding by genetic resemblance or shared environmental risk factors. Genetic factors can predispose people towards the development of psychiatric disorders that are associated with suicide [26]. Siblings also share many environmental exposures during childhood and adolescence such as family disorganization and breakup, parental loss, substance abuse, intrafamily violence, and sexual abuse [18]. On the other hand, we also found associations between a sibling's suicide and mortality from discordant causes such as cardiovascular disease and external causes other than suicide, which strengthens the possibility that the association may be causal. Confounding by genetic similarities or shared environmental conditions would seem more likely if we had found associations only when both siblings died of suicide.
It could also be that many deaths from the same cause still reflect effects of bereavement. Suicide and poor mental health are strongly linked to the bereavement process. Even though siblings died of the same cause, we cannot exclude the possibility that the association partially reflects bereavement rather than genetic confounding, i.e., one sibling dies of suicide and the remaining sibling takes his/her own life due to bereavement, rather than because of genetic vulnerability or shared environmental exposures. Despite the obvious strengths of this study, such as the use of total population register data, a large sample size, longitudinal follow-up, and reliable information on mortality and the other included variables, some limitations should be noted. Data in Swedish registers are collected systematically without the purpose of being used for specific research. Use of such data may reduce the risk of differential misclassification bias [32]. Nevertheless, suicides might be prone to misclassification during death ascertainment procedures [33]. It is possible that Swedish suicide rates may be influenced by death certification and registration procedures as well as substantive factors. For instance, it has been found that autopsy rates may spatially and temporally affect the validity of suicide statistics. One study found that Swedish suicide data are of inferior quality relative to the suicide data of some other countries [33]. Accordingly, it could be that many sibling suicides go underreported and that our analyses are biased to some extent, while there is far less reason for concern about the validity of the other causes of death included in this study. Underreporting of suicide could also vary by sociodemographic variables such as gender, age, and socioeconomic position. However, non-reported suicides would then be classified as deaths from other causes in our study, which would primarily lead to an underestimation of the "true" association between sibling suicide and suicide risk among remaining siblings. Furthermore, more detailed individual information is required to uncover the actual causal mechanisms that link sibling suicide and mortality. Such information could also minimize the possibility of omitted variable bias. Ideally, one would like to have access to biological and genetic data, detailed information on diseases from medical records (including diagnoses of post-bereavement depression), more information on shared childhood social environment and family characteristics, and detailed data on personal and relational characteristics, which are unfortunately not included in the registers. On the other hand, our results likely underestimate the true bereavement effect, since we could study only deaths, and not all suicide attempts lead to death [34]. Examining attempted suicide, depression, and variation in health and risk-taking behaviors would presumably provide more precision and even greater statistical power in the analyses. Our way of treating deaths from concordant causes as an indication of confounding might further underestimate the true effect of bereavement, since many deaths from the same cause in a sibling pair could be related to bereavement processes. Our findings illustrate that a person's suicide can have adverse health consequences for their adult siblings. The health care system should consider broader collateral health effects when dealing with individuals and families exposed to suicide [35].
Considering that their loss and pain are often insufficiently acknowledged by the parents and the informal social support system [15,16], it is important that physicians and health care professionals recognize the needs of siblings bereaved due to suicide. Some possible clinical interventions have previously been suggested for suicide survivors [18], and these could also be relevant for individuals exposed to sibling suicide. For instance, the bereaved should be offered the opportunity to interact with other suicide survivors in support groups, not just with other mourners. Given the elevated risk of suicidality associated with survivorship, management of survivors in the health care system should include not only support for their grief but also proactive monitoring of their risk of psychiatric disorders and suicidality. Furthermore, support services should target the interface between the survivors and their social network. Since many survivors feel stigmatized, they need help in dealing with the social aftermath of suicide. Moreover, bereavement services, such as support groups and support services provided by the health care system, should be directed toward family systems, given the risk of additional suicides within the family. This may be the most important multigenerational prevention available to mental health professionals. A unique Swedish example of such a service is "Barntraumateamet" at Vrinnevi Hospital in Norrköping, which supports families after the loss of a family member both immediately after the death and in a longer-term perspective. Similar support services for bereaved children and adults should be provided in other parts of Sweden as well. Finally, our findings also conform to the view that it is important for mental health professionals to support siblings bereaved by suicide over time and in a longer-term perspective. In summary, our study provided the first large-scale evidence on mortality associated with sibling suicide at adult age. Bereavement-related deaths may be prevented by targeted support for people who have lost a sibling due to suicide.
Fluid dynamic assessment of positive end-expiratory pressure in a tracheostomy tube connector during respiration

High-flow oxygen therapy using a tracheostomy tube is a promising clinical approach to reduce the work of breathing in tracheostomized patients. Positive end-expiratory pressure (PEEP) is usually applied during oxygen inflow to improve oxygenation by preventing end-expiratory lung collapse. However, much is still unknown about the geometrical effects of PEEP, especially regarding tracheostomy tube connectors (or adapters). Quantifying the degree of end-expiratory pressure (EEP) that takes patient-specific spirometry into account would be useful in this regard, but no such framework has been established yet. Thus, a platform to assess PEEP under respiration was developed, wherein three-dimensional simulation of airflow in a tracheostomy tube connector is coupled with a lumped lung model. The numerical model successfully reflected the magnitude of EEP measured experimentally using a lung phantom. Numerical simulations were further performed to quantify the effects of geometrical parameters on PEEP, such as inlet angles and the rate of stenosis in the connector. Although sharp inlet angles increased the magnitude of EEP, they cannot be expected to achieve clinically reasonable PEEP. On the other hand, geometrical constriction in the connector can potentially result in PEEP as obtained with conventional nasal cannulae.

Introduction

High-flow oxygen therapy, including that administered via high-flow nasal cannula therapy (HFNC), has been applied as a promising treatment for patients with lung injury [8]. One approach to this therapy is to perform tracheostomy by surgically creating an opening through the neck into the trachea to allow direct access to a tracheostomy tube and a connector attached to the tube [10]. Tracheostomized patients can then breathe through the tube rather than through the nose and mouth. Administration of an air-oxygen mixture is required to achieve positive end-expiratory pressure (PEEP) to assist in breathing and avoid pulmonary collapse [21]. HFNC has conventionally been used with an inflow rate between 20-60 L/min [25,35], because it produces PEEP in the range of 2-8 cmH2O [6,26], and can also wash out CO2 from the upper airways [22,23]. Furthermore, HFNC can decrease the work of breathing and also enhance neuroventilatory drive [35]. More recently, benchtop experiments were performed with a high-flow tracheostomy circuit, and the so-called potential PEEP, defined as the blow-off pressure of the open gas delivery system, was approximately 0.3-0.9 cmH2O (≈ 29.4-88.3 Pa) for an inflow rate of 40-60 L/min [38]. The ability to successfully achieve PEEP with tracheostomy cannulae is especially important in ill patients who need long-term (2-3 weeks) ventilation, because such cannulae are used for almost 90% of these patients and also because there is a correlation between high survival rates and short ventilation duration [7]. Hence, providing adequate gas exchange is necessary for early ventilator removal in tracheostomized patients. Considering the fast-increasing worldwide incidence of COVID-19, it is currently of paramount importance to identify the mechanical conditions required for PEEP generation in lung therapy.
The mechanical conditions necessary to produce PEEP are therefore fundamentally important not only for tracheostomized patients but also for individuals with acute lung injury or acute respiratory distress syndrome (ARDS, the most severe form of acute lung injury) [20] and for patients assisted by extracorporeal membrane oxygenation (ECMO) [28]. Since it is expected that PEEP generation results from the hydrodynamical interplay between pulmonary dynamics (e.g., stress and deformation) and the geometrical characteristics of the tracheostomy tube connector, which can be characterized as a bifurcated tube, understanding the outlet pressure of the connector during respiration is of fundamental importance in high-flow oxygen therapy. However, much is still unknown about the geometrical effects of tracheostomy tube connectors (or adapters) on PEEP. Along with the aforementioned clinical studies, recent theoretical and computational approaches have successfully been used to investigate aspects of pulmonary dynamics such as stresses and deformation [29,31], as well as the fully turbulent nature of tracheal flow during inhalation [5,13,14,30,37,42]. For instance, Brouns et al. (2007) systematically investigated how the pressure drop was affected by tracheal stenosis in the range between 50 and 90% and showed that the pressure drop over the normal glottis (~40% constriction) was negligible with respect to that induced by constrictions greater than 70%, which impaired breathing [5]. Their numerical results suggest that PEEP can be caused by luminal stenosis in the connector. In other numerical studies using a reduced-dimensional (or lumped) model of pulmonary networks that included alveoli, the mechanical effects of downstream regions on airflow in upstream regions were quantified [12,15,17]. Such coupled analysis of three-dimensional (3D) fluid flow and reduced-dimensional models of the mechanical pulmonary response will be useful to understand the mechanical conditions of PEEP while considering both airflow in the connector and patient-specific spirometry. However, no such framework has been established yet. Therefore, the first objective of this study was to develop a computational platform to evaluate PEEP, taking into consideration the 3D nature of the airflow in the tracheostomy tube connector. The second objective was to quantify how luminal stenosis in the connector affected the magnitude of end-expiratory pressure (EEP).

Methods

EEP was calculated as the area-averaged tracheal pressure, which corresponded to the outlet pressure of the connector (P_tr or P_out3) as described below. Calculated EEPs for different inflow rates Q_in were compared with those obtained experimentally. The effects of connector inlet angles θ and luminal stenosis on EEP were further investigated using the newly developed model.

Lumped lung model

In the lumped lung model, the lung tissue is modeled as an isotropic material, and only the diagonal components of the elastic stress P_e are considered, to effectively achieve lung volume change. The alveolar pressure P_al is balanced by the pleural pressure P_pl, which is driven by respiratory muscle contraction, and by the pressure (or isotropic elastic stress) P_e due to lung elasticity acting on the lung tissue, i.e.,

P_al = P_pl + P_e.    (1)

The pleural pressure P_pl is given as the sinusoidal function

P_pl(t) = P_pl^0 + P_pl^amp sin(2πt/T),

where T is the respiratory period (5 s), P_pl^amp is the amplitude of the pleural pressure (250 Pa [40]), and P_pl^0 is the baseline pleural pressure (750 Pa [40]). Both the inspiration and expiration phases last for T/2 (2.5 s).
The isotropic elastic stress P_e is given as an exponential function of strain [9],

P_e = k(e^{aE} − 1),

where k is the coefficient of lung elastic stress (10.1 Pa), a is a model coefficient (3.0), and E (= (λ² − 1)/2) is Green's strain defined by the stretch ratio λ (= (V(t)/V_0)^{1/3}) between the lung volume V(t) at time t and the reference lung volume V_0. The total gas volume in the lung is about 3 × 10⁻³ m³, and the volume inspired per breath during quiet breathing is about 0.45 × 10⁻³ m³ in a typical man about 40 years old and about 1.7 m tall [27]. Thus, in this study the reference lung volume was defined as V_0 = 1.5 × 10⁻³ m³.

Flow model and geometry of tracheostomy tube connectors

The flow was assumed to be an incompressible, Newtonian viscous fluid flow, and hence the governing equations for the airflow velocity v in the connector are

ρ(∂v/∂t + v·∇v) = −∇p + μ∇²v,  ∇·v = 0,

where ρ is the air density (1.18 kg/m³), μ is the viscosity (1.86 × 10⁻⁵ Pa·s), and p is the pressure. The computational domain for 3D computational fluid dynamics (CFD) is shown in Fig. 1a, where the lumped lung model is attached to outlet 3, assuming an open area in the trachea. There are two other outlets (outlet 1 and outlet 2) in the connector, both of which are exposed to the open air (Fig. 1a). The geometry of a connector with 50% stenosis and its internal meshing are also shown in Fig. 1b and 1c. Here, the rate of stenosis was defined as (1 − D_min/D_in), where D_in is the inlet diameter (11 mm) and D_min is the minimum connector diameter. The length of the constricted portion of the connectors was set at 1 mm (Fig. 1b and 1c). Unless otherwise specified, we show the results obtained with an inlet angle of θ = 60°.

Numerical simulation

The clinically relevant range of the inlet velocity U_in in the connector could be determined by the inlet flow rates Q_in (= U_in πD_in²/4) = 10, 30, and 50 L/min [24]. Hence, the inflow was characterized by a Reynolds number (Re) from 1.2 × 10³ to 6.1 × 10³, with Re defined as ρD_in U_in/μ. Taking into account the connector stenosis and bifurcation, the local Re in the stenotic region was over 10⁴, and it was also expected that laminar, transitional, and turbulent flows would coexist in the flow field. In this study, a realizable k-ε turbulence model [32] was implemented to simulate the turbulent mean flow field. This model was successfully applied to steady inhalation in a simulation of tracheal flow in a human airway [13,16,37]. The CFD software Simcenter STAR-CCM+ 2020.2 (Siemens Digital Industries Software Inc., Plano, TX) was used for mesh generation and to solve the Navier-Stokes equations. The flow was driven by Dirichlet boundary conditions, where the air velocity at the inlet (U_in), that at the outlet connected to the trachea (U_out3 = U_tr(t, P_e, P_al, P_pl)), and the constant pressure at the other outlets (P_out1 = P_out2 = 0 Pa) were defined. A polyhedral mesh was used for the fluid, and adaptive meshing, including prismatic layers, was also applied in the stenotic region and along the walls; in total, approximately 40,000 cells were used in each airway model. The dependence of EEP on the mesh was also confirmed at double resolution (approximately 80,000 cells in total) (see the result in Sect. 3.1).
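As a quick check of the Reynolds-number range quoted earlier in this section, the short Python script below recomputes Re from the stated fluid properties, inlet diameter, and flow rates.

```python
# Quick check of the quoted inflow Reynolds numbers (Re = rho*D_in*U_in/mu),
# using the fluid properties and inlet diameter given in the text.
import math

rho = 1.18          # air density [kg/m^3]
mu = 1.86e-5        # air viscosity [Pa*s]
D_in = 0.011        # inlet diameter [m]
A_in = math.pi * D_in**2 / 4

for Q_lpm in (10, 30, 50):
    Q = Q_lpm / 1000 / 60        # L/min -> m^3/s
    U_in = Q / A_in              # mean inlet velocity [m/s]
    Re = rho * D_in * U_in / mu
    print(f"Q_in = {Q_lpm:2d} L/min -> U_in = {U_in:4.1f} m/s, Re = {Re:5.0f}")
# Prints Re of roughly 1.2e3, 3.7e3, and 6.1e3, matching the stated range.
```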
Although several lumped models of airways consisting of different types of electrical components (lumped parameters) have been proposed [3,12], taking into account the structural hierarchy of the human trachea [39,41], the tracheal velocity U_tr was simply defined as a Dirichlet boundary condition in the 3D CFD model, using the following linear equation:

U_tr = (P_tr − P_al)/(Γ A_tr),

where P_tr is the tracheal pressure, Γ is the airway resistance (200 kPa·s/m³), and A_tr (= πD_out²/4) is the opening area of the trachea (or tracheostomy tube), given by the outlet diameter of the connector D_out (15.4 mm).

(Fig. 1: (a) computational domain for the 3D CFD involving a modeled connector and a schematic of the lumped lung model; (b) 3D CFD model with 50% stenosis; (c) generated meshes, where adaptive mesh refinement and prismatic layers lining the walls are considered in addition to a polyhedral mesh. The boundary conditions were set as the inlet velocity U_in, the outlet pressures P_out1 and P_out2, and the outlet velocity U_out3 (= U_tr(t, P_e, P_al, P_pl)). The standard inlet angle was θ = 60°. The inlet and outlet diameters of the connectors were D_in = 11 mm and D_out = 15.4 mm. The rate of stenosis was defined using the minimum connector diameter D_min as (1 − D_min/D_in). The length of the constricted portion of each connector was 1 mm.)

In general, the end-expiratory phase is defined by the expiratory flow rate (≥ 0 L/min) reaching zero, as shown in expiratory and inspiratory flow-volume curves [40]. Therefore, in this study, the end of expiration was defined by U_tr = 0. The present lung volume V(t) was calculated as

V(t + Δt) = V(t) + U_tr(t) A_tr Δt,

where Δt is set as 0.05 s. The 3D CFD was started from a temporal tracheal velocity U_tr = 0 and continued while updating U_tr until the tracheal pressure P_tr became almost constant, such that |P_tr^{n+1}/P_tr^n − 1| ≤ ε = 0.01, where the superscript n (or n + 1) is the number of trials at time t. The simulation was started with P_tr^0 = P_tr^1. The boundary velocity U_out3, which changed over time, was determined using a coefficient α (0 ≤ α ≤ 1):

U_out3 = α U_tr^{n+1} + (1 − α) U_tr^n.

In this study, to achieve numerical stability, α was set as 0.3 in a normal connector and 0.8 in a constricted connector. This computational algorithm is summarized in Fig. 2. Simulations lasted for three periods (3T), during which the calculated variables reached a stable periodicity. As described below, the time history of P_tr was preliminarily checked against experimental measurements, as shown in Fig. 3c, and it was found that the time history did not affect the EEP; i.e., the effect of airflow dynamics on EEP was negligible, at least for a physiologically relevant respiratory rate (0.2 Hz). Hence, in the model algorithm to update the flow fields, the steady state under the calculated U_tr from the lumped lung model was considered at each time step.
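The following Python sketch illustrates the coupling algorithm just described. The 3D CFD solver is replaced by a crude linear stub, and the sign convention of the pleural drive and the exponential form of the elastic stress are assumptions made so that the demo is self-contained and bounded; the output is therefore not expected to reproduce the paper's EEP values. Only the relaxation loop and the volume update follow the description above.

```python
# Schematic of the CFD/0D coupling loop summarized above (Fig. 2); the 3D
# solver is stubbed out, so this shows the algorithm, not the paper's results.
import math

T, dt = 5.0, 0.05                 # respiratory period [s], time step [s]
alpha, eps = 0.3, 0.01            # under-relaxation factor, convergence tolerance
V0 = 1.5e-3                       # reference lung volume [m^3]
Gamma = 200e3                     # airway resistance [Pa*s/m^3] (units assumed)
A_tr = math.pi * 0.0154**2 / 4    # tracheal opening area from D_out = 15.4 mm
k, a = 10.1, 3.0                  # lung elasticity coefficients

def elastic_stress(V):
    """Elastic recoil P_e; a Fung-type exponential form is assumed here."""
    E = ((V / V0) ** (2.0 / 3.0) - 1.0) / 2.0   # Green's strain
    return k * (math.exp(a * E) - 1.0)

def pleural_drive(t):
    """Oscillatory part of the pleural pressure; the constant baseline is
    omitted (an assumption) so this standalone demo stays near equilibrium."""
    return -250.0 * math.sin(2.0 * math.pi * t / T)

def run_cfd(U_tr):
    """Stub standing in for the 3D connector simulation: returns a tracheal
    pressure P_tr for a given outlet-velocity boundary condition."""
    return 2.0 - 200.0 * U_tr     # placeholder linear response, not the solver

V, U_tr = V0, 0.0
for n in range(int(3 * T / dt)):              # run three respiratory periods
    t = n * dt
    P_al = pleural_drive(t) + elastic_stress(V)
    P_prev = run_cfd(U_tr)
    for _ in range(200):                      # inner fixed-point iteration
        U_new = (P_prev - P_al) / (Gamma * A_tr)
        U_tr = alpha * U_new + (1.0 - alpha) * U_tr   # relaxed update of U_out3
        P_tr = run_cfd(U_tr)
        # Paper's criterion |P^{n+1}/P^n - 1| <= eps, in a division-safe form
        if abs(P_tr - P_prev) <= eps * abs(P_prev) + 1e-9:
            break
        P_prev = P_tr
    V += U_tr * A_tr * dt                     # lung volume update
print(f"final lung volume: {V * 1e3:.2f} L")
```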
The resistance of the model lung was imposed with a parabolic airway resistor (5 cmH 2 O/L/s, Pneuflo resistor Rp5, Michigan Instruments). The PB840 ventilator parameters were set as follows: volume control mode, 500 ml of tidal volume; respiratory rate, 15 breaths/ min (0.25 Hz); PEEP, 0 cmH 2 O; and inspiratory time, 0.7 s. To easily detect EEP from periodic tracheal pressure profiles, the expiratory time was set to be relatively longer (3.3 s) than the inspiratory time (0.7 s). Figure 3a shows the experimental setup. A piezometer was attached to the connector to measure the pressure at the chamber inlet. Figure 3b shows a schematic of the experimental setup. The time history of tracheal pressure P tr in a normal connector at Q in = 30 L/min is shown in Fig. 3c, where the moving-average was obtained for the data of P tr with a window size of 76 ms. The parameters for the lumped lung model (a, k, Γ, and α) were determined so that the order of magnitude of the calculated EEP was the same as that obtained with experimental measurements (see Fig. 3d), while preserving the physiologically relevant lung deformation ΔV = 500 cm 3 [27,40] and pleural pressure difference ΔP pl = 500 Pa [19] during respiration under a baseline pleural pressure of 750 Pa [40] with an amplitude of 250 Pa [40]. The model parameters are summarized in Table 1. Figure 3d shows a comparison of the magnitude of EEP obtained with numerical simulation versus experimental measurements as a function of inlet flow rate Q in in a normal connector (i.e., inlet angle θ = 60° and without stenosis). It is expected that clinically reasonable PEEP is over 2 cmH 2 O (196.2 Pa [6,26]). Calibrations were performed (7), including the coefficient α for the temporal updating of U tr . The EEP values obtained via experimental measurements (EEP exp ) and numerical simulations (EEP sim ) are summarized in Table 2. For the smallest Q in = 10 L/min, the EEP value was very small and sometimes became negative. Thus, the EEP value for such small Q in (≤ 10 L/min) was defined as 0. When these values were evaluated in terms of the difference in the magnitude of EEP per 1 cmH 2 O between the numerical and experimental results |EEP exp -EEP sim |/cmH 2 O, the differences did not exceed than 0.061 for all Q in , and the calculated magnitudes of EEP sim were within the range of error of the experimental data (Fig. 3d). The results indicate that the developed model makes it possible to investigate the magnitude of EEP within an accuracy of 1 cmH 2 O, and thus, the same model parameters are used hereafter (see Table 1). The dependence of the meshes on the magnitude of EEP was tested, and the calculated EEP was 20.58 Pa with the present resolution (i.e., 40 000 meshes) and 20.84 Pa with double resolution (approximately 80,000 meshes in total). Because the relative difference in the magnitude of EEP between the present and higher resolutions was less than 1%, it was considered appropriate to examine the numerical results obtained with the present resolution. Model validation and EEP in a normal connector A different meshing style, involving increasing the number of prismatic layer and the adaptive meshes in the constricted area, was tested for severe geometry, characterized by 60% stenosis and the sharpest inlet angle (30°). The present model, with a total of ~ 60,000 meshes for this type of constricted connector, is called model C1, while the reconstructed model with a total of over 100,000 meshes is called model C2. 
The calculated magnitude of EEP obtained with model C1 for Q in = 30 L/min was 156.5 Pa, and that obtained with model C2 was 172.1 Pa. The relative difference in the The flow field in a normal connector and the tracheal pressure U tr during the respiratory period were investigated. Figure 4a shows the time history of the given pleural pressure P pl , the calculated alveoli pressure P al obtained with the lumped lung model, and the calculated tracheal pressure P tr obtained with CFD. Data are shown after P al and P tr have reached to the stable periodic phase (t ≥ 2.0 s). All data used hereafter were obtained after these calculated values reached a stable periodicity, in order to avoid the influence of the initial condition ( U 0 tr = U 0 out3 = 0). Figure 4b and 4c show snapshots of pressure and velocity fields, respectively, in a normal connector for each respiratory state. During inspiration, the pressure in the upper bifurcated area was relatively high because the inlet flow directly reached that location with large momentum and diverged to the tracheal and outlet regions (left in Fig. 4b and 4c. This high-pressure field shifted and expanded toward the tracheal regions during expiration. The direction of the inlet flow was sharply changed by the expiratory flow from outlet 3 (middle in Fig. 4b and 4c. At the end of expiration, defined by U tr = 0, a high-pressure field again emerged in the upper bifurcated area, and some amount of the inflow moved to the tracheal region, resulting in recirculation there (right in Fig. 4b and 4c. Figure 5a shows the time history of U tr and lung volume V during period T (= 5 s) at Q in = 30 L/min in a normal connector, where the data were obtained only after the stable periodic behavior was achieved. When U tr reaches zero (i.e., start and end of expiration), the lung volume approaches its maximum and minimum (Fig. 5a). Thus, there is a finite phase difference between the two waves. This phase difference remains the same even for different Q in (data not shown). Figure 5b shows the tidal volume ΔV tidal as a function of Q in . ΔV tidal was calculated as the volume change from minimum V min to maximum V max , i.e., ΔV tidal = V max -V min . The pressure difference between the tracheal pressure and alveolar pressure (P tr -P al ) in Eq. (7) decreased as Q in increased, resulting in a decrease in tidal volume; i.e., the magnitude of U tr decreased. Such passive regulation during Inlet exhalation qualitatively agrees with experimental measurements using high-flow nasal ventilation, especially in healthy subjects [4]. Figure 6 shows the calculated magnitude of EEP for different inlet angles θ (= 30° and 45°) as a function of inflow rate Q in . Although the magnitude of EEP increased as the inflow rate Q in increased and as the inlet angle decreased (Fig. 6), the relative difference in EEP between the normal (θ = 60°) and sharpest angle (θ = 30°) decreased as the inflow rate increased; (EEP| θ = 30°-EEP| θ = 60°) /EEP| θ = 60° = 0.47, 0.41, and 0.38 for Q in = 10, 30, and 50 L/min, respectively. Effect of connector stenosis on EEP The effect of connector stenosis on EEP was investigated. Figure 7a shows the time history of the pleural, alveolar, and tracheal pressures (P pl , P al , and P tr ) in a connector with 50% stenosis. The baseline of P al and P tr values were higher than those in a normal connector, even for the same amplitude of P pl (Fig. 7a). The mechanism of generating such large EEP involves the pressure field in the connector, as shown in Fig. 
The pressure field was constantly high during inspiration. This was especially true in the inlet region (i.e., the upstream region before the stenosis) and the upper bifurcated area; indeed, in the bifurcated area the pressure reached 250 Pa, which was approximately 7 times higher than that in a normal connector (Fig. 7b). Since a reduced area generates fast flow, the flow administered at the inlet can reach the tracheal region even during and at the end of expiration, as shown in Fig. 7c. These results suggest that connector constriction can potentially generate PEEP. Figure 8a shows the effect of the stenosis rate on EEP and ΔV_tidal at Q_in = 30 L/min. The calculated EEP, normalized by the EEP value obtained with a normal connector (0% stenosis), drastically increased for stenosis over 50% (Fig. 8a, left axis). Similar results were also obtained in previous numerical analyses of tracheal flow using the Yang-Shih k-ε turbulence model [5], where the simulated pressure drop in the stenotic region dramatically increased only when far more than 70% of the tracheal lumen was obliterated, both for Q_in = 15 and 30 L/min. The calculated EEP at 70% stenosis was almost 8 times higher than that in the normal connector (Fig. 8a, left axis). ΔV_tidal, normalized by that obtained with a normal connector, sharply decreased for stenosis over 50% (Fig. 8a, right axis). ΔV_tidal decreased in both the normal and constricted connectors when Q_in was increased, while the rate of decrease with Q_in was almost unchanged in the constricted connector (Fig. 8b). Figure 9 shows the calculated EEPs for different degrees of stenosis as a function of Q_in. The EEP obtained with a normal connector, as shown in Fig. 6, is also displayed. Although PEEP at the smallest Q_in (= 10 L/min) was negligible, clinically relevant EEP values were obtained at larger inflow rates and higher degrees of stenosis (Fig. 9).

Discussion

PEEP attained by high-flow oxygen therapy using a tracheostomy tube in tracheostomized patients has been shown to have various clinical benefits [8,35]. Although the relationship between the magnitude of EEP and inflow rates was previously investigated experimentally using high-flow tracheostomy [24,38], it was still unknown whether simple geometrical changes in tracheostomy tube connectors, including the stenosis rate and inlet angles, could potentially generate PEEP. Since PEEP is thought to be a consequence of the balance between the connector fluid flow and the mechanical response of the lung, 3D CFD of the airflow in the connector during respiration under appropriate boundary conditions is useful for understanding the mechanical conditions necessary for PEEP generation. This is also true when considering geometrical effects on EEP, especially those related to tracheostomy tube connectors. However, such computational frameworks had not yet been established. Thus, a numerical platform was developed in this study to investigate the connector airflow and the magnitude of EEP under respiration, as represented by a lumped (0D) lung model. This numerical model made it possible to investigate the flow field in the connector (Figs. 4 and 7) and to quantify the magnitude of EEP (Figs. 3d and 6). Furthermore, the developed model demonstrated passive regulation of tidal volume (Figs. 5 and 8), which was impeded by large inflow rates, as reported by previous studies involving experimental measurements using high-flow nasal ventilation [4]. The effect of connector stenosis on EEP was also quantified, and the results showed that PEEP can be expected by simply creating a stenosis, at least for stenosis over 50% and for Q_in ≥ 30 L/min (Fig. 9).
The calculated EEP obtained with the largest degree of stenosis (70%) was eightfold greater than that in the normal connector at Q_in = 30 L/min (Fig. 8a). This was consistent with the results obtained with the largest inflow rate (Q_in = 50 L/min) (Fig. 9), specifically 55.96 Pa in the normal connector and 465.8 Pa (≈ 4.7 cmH2O) with 70% stenosis. Since it is expected that clinically reasonable PEEP is over 2 cmH2O [6,26], the numerical results suggest that geometrical constriction in a connector can potentially produce PEEP as conventionally obtained with nasal cannulae [6,26]. Although sharp inlet angles also increased the magnitude of EEP, they cannot be expected to achieve clinically reasonable PEEP, since the PEEP value was less than 1 cmH2O even for the sharpest inlet angle θ = 30° and the largest inflow rate Q_in = 50 L/min (Fig. 6). In the experimental measurements using a lung phantom, the expiratory time (3.3 s) was set to be relatively longer than the inspiratory time (0.7 s) so that the EEP could be easily detected from the periodic tracheal pressure profiles (Fig. 3c). Thus, the comparison between the numerical and experimental measurements was focused on the magnitude of EEP, and they exhibited discrepancies in temporal values such as P_tr and U_tr except in the end-expiration phases. Despite these differences, the developed model makes it possible to investigate the magnitude of EEP within 1-cmH2O accuracy, since the differences in this magnitude per 1 cmH2O between the numerical and experimental results, |EEP_exp − EEP_sim|/cmH2O, were less than 0.061 for all Q_in, and the numerical results were in the range of errors of the experimental results (Fig. 3d). Since EEP is thought to depend on the length of the constricted portion of the tracheostomy tube connector, future studies should investigate the effect of this length (10⁰-10¹ mm) on the degree of EEP and also compare the calculated EEP with experimental measurements. The combination of a large degree of stenosis, a sharp inlet angle, and a large inflow rate may pose a risk of ventilator-associated (or -induced) lung injury [33], caused by a pressure of ≥ 30 cmH2O (≈ 2.94 kPa) [2]. Furthermore, a previous theoretical analysis by Mead et al. (1970) showed that 30 cmH2O of alveolar pressure produced 140 cmH2O of shear stress, which can potentially lead to ARDS [19]. Lesions in such cases are caused by overdistension, collapse and reopening, and oxygen toxicity [33]. Since the change of lung volume was simply modeled here by isotropic deformation of isotropic lung tissue [9], more precise modeling that takes into account the viscoelasticity of extra- and intra-parenchymal lung bronchi [29,31] will be required to predict the aforementioned mechanical damage in the lung. Although the material deformability of the connector walls was neglected in this study, it may play important roles, especially for reducing sounds in clinical applications. (Fig. 9: EEP for different degrees of stenosis — normal, 50%, 60%, and 70% — as a function of the inlet flow rate Q_in; the EEP obtained with a normal connector, as shown in Fig. 6, is also displayed.) Expiratory crackles were numerically investigated in terms of the relationship between airway closure dynamics and acoustic fluctuations in a study that considered the elastic deformation of the airway wall [11]. It would be interesting to study how much wall deformability reduces sound, even at high flow rates, while preserving PEEP.
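For quick reference, the pressure values quoted in this section convert between Pa and cmH2O as follows (1 cmH2O ≈ 98.07 Pa):

```python
# Unit sanity checks for the pressures quoted in the Discussion.
CMH2O = 98.0665  # Pa per cmH2O

print(30 * CMH2O)       # ventilator-injury threshold: ~2.94e3 Pa (2.94 kPa)
print(465.8 / CMH2O)    # EEP at 70% stenosis, 50 L/min: ~4.75 cmH2O
print(196.2 / CMH2O)    # "clinically reasonable PEEP": 196.2 Pa ~ 2.0 cmH2O
```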
Numerical treatment of luminal surfaces may also be important, especially for wet surfaces. The device lumen is assumed to become wet due to patient respiration, especially during long-term application; therefore, the effect on PEEP of two-phase flow, such as that present at the liquid-air interface, is among the next challenges for future research. Although the tested respiratory rate (0.25 Hz) did not affect the time history of tracheal pressure, as shown in Fig. 3c, more frequent respirations could potentially generate harmonic flow behavior in the trachea (i.e., collision of the airflow during inspiration and expiration). Furthermore, frequent respirations may also cause the pendelluft phenomenon, which decreases gas exchange and is defined as the movement of air within the lung from nondependent to dependent regions without changes in tidal volume during mechanical ventilation [1,44]. It is known that a humidified and warmed gas mixture favors mucociliary function and reduces upper airway resistance [34,36]. Thus, it would also be interesting to study whether the synergistic combination of PEEP and pulsatile airflow in the trachea enhances gas exchange or increases the level of oxygen in the blood.

The developed numerical model made it possible to assess both PEEP and tidal volume based on the fluid dynamics of the airflow in the connector. Numerical analysis that considers mechanical lung parameters representing patient-specific lung states will be helpful in the clinical care of tracheostomized patients, specifically in decision-making for achieving precise inflow rates while preserving PEEP and in determining when to remove the ventilator. Numerical results based on mechanics may also facilitate therapeutic decision-making not only for tracheostomized patients but also for those with lung diseases such as ARDS and those assisted by ECMO.

Conclusion

A computational platform to evaluate PEEP in tracheostomized patients was developed. The airflow in the tracheostomy tube connector was simulated, and the tracheal pressure, which is the outlet pressure of the connector, was calculated by 3D CFD analysis coupled with a lumped lung model. The numerical results for the magnitude of EEP agreed well with experimental measurements and made it possible to investigate the detailed dynamics of airflow in the connector. This suggests that the model can be used to estimate the magnitude of PEEP while taking into account the 3D airflow field in the connector. Although sharp inlet angles increased the magnitude of EEP, they cannot be expected to result in clinically reasonable PEEP. On the other hand, geometrical constriction in a connector can potentially produce PEEP, which is conventionally obtained with nasal cannulae. The numerical results in this study may assist in decision-making regarding the treatment of tracheostomized patients as well as those with other lung diseases such as ARDS and those receiving ECMO.
Antifungals susceptibility pattern of Candida spp. isolated from female genital tract at the Yaoundé Bethesda Hospital in Cameroon

Introduction: Vaginal candidiasis is considered an important public health problem worldwide, and its incidence has increased in recent years. Inappropriate and disproportionate use of antifungal drugs, self-medication and non-compliance have caused drug resistance.

Methods: This study aimed at determining the in vitro antifungal susceptibility patterns of Candida species isolated from the female genital tract at the Yaoundé Bethesda Hospital in Cameroon. Two hundred and forty-five women (age range: 15 to 49 years) attending the hospital were recruited between January and June 2014 in this cross-sectional study. Vaginal smears were collected using sterile swabs from each participant and cultured on Sabouraud dextrose agar supplemented with 0.5% chloramphenicol; identification of Candida spp. was performed following standard methods. The disk diffusion method was used for antifungal susceptibility testing.

Results: Out of the 245 vaginal smears collected, 94 (38.4%) yeast strains were isolated, among which 43 (45.7%) were Candida albicans and 51 (54.3%) were non-albicans. The highest susceptibility of the isolates was seen for nystatin (62; 83.78%), ketoconazole (61; 82.43%) and fluconazole (60; 81.08%).

Conclusion: Despite the noticeable resistance of Candida spp. isolates to miconazole and itraconazole, the results indicate that nystatin, ketoconazole and fluconazole are the drugs of choice for the therapy of vaginal candidiasis in this region.

Introduction

Vaginal candidiasis (VC) is the most common opportunistic fungal infection, affecting millions of women worldwide every year. It is caused by Candida species, with Candida albicans responsible for the majority (80%-90%) of cases [1]. VC is presently the first cause of vulvovaginitis in Europe and the second greatest cause in the United States and Brazil, where it is exceeded only by bacterial vaginosis [2]. Approximately 75% of adult women have at least one episode of vulvovaginal candidiasis in their lifetimes and about half of these women experience more than one recurrence; 5%-8% have multiple episodes each year [2]. VC usually occurs when there is an overgrowth of Candida, which is present in the vagina as a normal commensal [3]. Pregnancy, contraceptive drugs with high estrogen, antibiotic consumption, uncontrolled diabetes, immunosuppressive drugs, unsafe or excessive sexual intercourse, chronic anemia and seasonal allergy are some predisposing factors for VC. The most common signs of VC are vaginal itching, dysuria and malodorous white vaginal secretions [4]. Treatment of VC varies substantially, and the most commonly used drugs are azole agents [5]. However, the widespread use of these drugs as prophylactic and therapeutic agents has been associated with the selection of less susceptible Candida strains, thereby causing serious problems in the successful treatment of vaginitis [6]. Thus, early identification of Candida species and attention to their antifungal susceptibility patterns are very important for establishing strategies to control and/or prevent candidiasis through novel therapeutic management. This study aimed at determining the in vitro antifungal susceptibility patterns of Candida species isolated from the female genital tract at the Yaoundé Bethesda Hospital in Cameroon.

Sampling and culture

This was a prospective, descriptive, cross-sectional study carried out at the Bethesda Hospital.
The sampling technique used was a non-probabilistic convenience method. Participants were recruited between January and June 2014 among women attending the hospital for medical follow-up and complaining of various genital symptoms. Women who had douched on the day of specimen collection, who were under any form of antifungal therapy, or who were not at least 2 days away from menstruation prior to specimen collection were not enrolled in the study. The sample size was calculated using the standard formula for sample size calculation (Lorentz's formula), n = z²pq/d², where z = the standard normal deviate at 1.96 (which corresponds to a 95% confidence interval), p = the prevalence of candidiasis in Cameroon, estimated at 35.4% [7], q = 1 − p, and d = the expected degree of precision = 0.05. Based on these values, our minimum sample size was 343 patients.

Vaginal discharges were collected using sterile swabs from each participant and transported to the laboratory of the Yaoundé Central Hospital, where they were cultured on Sabouraud dextrose agar supplemented with 0.5% chloramphenicol. Plates were incubated at 35°C for 48 h. Differentiation between C. albicans and non-albicans isolates was done using the germ tube test and growth on chromogenic agar (gélose chromID™ Candida, bioMérieux, France). Permission to conduct the study was obtained from the Yaoundé Bethesda and Central Hospitals. Informed consent was obtained from all study participants before they were enrolled.

Antifungal susceptibility test: The agar disk diffusion method was performed on the basis of the Clinical and Laboratory Standards Institute guidelines M44-A2 protocol for the evaluation of Candida species susceptibility to common antifungals [8]. All Candida species were subcultured at 35°C onto Sabouraud dextrose agar to ensure purity and viability; then, the surface of plates containing Mueller-Hinton agar supplemented with 2% glucose and 0.5 µg/ml methylene blue was inoculated using a sterile cotton swab dipped in a cell suspension adjusted to the turbidity of a 0.5 McFarland standard. Itraconazole (10 µg), nystatin (100 units), fluconazole (100 µg), miconazole (50 µg) and ketoconazole (50 µg) discs (Becton Dickinson, Sparks, MD, USA) were placed onto the surfaces of the plates. Media were incubated at 35°C for 48 h for C. glabrata and 24 h for the other Candida species. The anti-Candida activity was evaluated by measuring the diameter of the inhibition zone (mm) around the discs, and the results were recorded as susceptible (S), susceptible dose-dependent (SDD) or resistant (R). A reference strain of C. albicans (ATCC 2091) was used as control.

Data analysis: Data were recorded and analyzed with SPSS version 11.0 (SPSS, Inc., Chicago, IL). Discrete variables were expressed as frequencies and percentages.

Ethics: Permission to conduct this study was obtained from the School of Health Sciences ethics review committee and the Yaoundé Bethesda Hospital. Informed consent was obtained from all participants before their enrollment in the study.

Limitation of the study: The sample size was 245 participants instead of the 343 calculated, a shortfall of 98 participants, because few eligible women presented during our study period.

Discussion

Researchers are interested in antifungal resistance because it is associated with elevated minimal inhibitory concentrations (MICs), which are in turn associated with poorer clinical outcomes, breakthrough infections during treatment and increased healthcare costs.
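As a quick numerical check of the Lorentz sample-size formula given in the Methods above (all input values as stated there; the small difference from the reported minimum of 343 presumably reflects rounding of the prevalence used):

```python
import math

z = 1.96    # standard normal deviate for a 95% confidence interval
p = 0.354   # estimated prevalence of candidiasis in Cameroon [7]
q = 1 - p
d = 0.05    # expected degree of precision

n = (z**2 * p * q) / d**2
print(math.ceil(n))  # ~352 with these inputs; the paper reports a minimum of 343
```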
Our results further revealed that nystatin, fluconazole and ketoconazole were the most effective antifungal drugs, while itraconazole had the poorest activity. Concerning the susceptibility of Candida isolates to nystatin (a polyene), our results are in accordance with those of Jasem et al. [4]. Nevertheless, Ane-Anyangwe et al. [2] reported higher resistance (80%) of Candida isolates to nystatin, which may be due to the excessive use of this drug as a topical ointment or suppository as a result of its availability and low cost. As for the susceptibility of Candida isolates to fluconazole, the results of our study are supported by those of Pfaller et al. [10], who showed a high susceptibility to fluconazole of 90.2% among 190,000 isolates from 41 countries, although evidence of resistance has been reported by some researchers [2][3][4][5]. VC is commonly treated with azole-based antifungal drugs [11]; this widespread azole use may explain the resistance observed by these authors and the itraconazole resistance observed in our study.

Conclusion

Our results show variation in the susceptibility of the isolated Candida species to the different antifungals tested, with nystatin, ketoconazole and fluconazole being the drugs of choice for the therapy of vaginal candidiasis in this region. As a consequence, laboratory tests including species identification and antifungal susceptibility testing should be requested for women with vaginal candidiasis prior to drug administration.
The H3K9 Methylation Writer SETDB1 and Its Reader MPP8 Cooperate to Silence Satellite DNA Repeats in Mouse Embryonic Stem Cells

SETDB1 (SET Domain Bifurcated histone lysine methyltransferase 1) is a key lysine methyltransferase (KMT) required in embryonic stem cells (ESCs), where it silences transposable elements and DNA repeats via histone H3 lysine 9 tri-methylation (H3K9me3), independently of DNA methylation. The H3K9 methylation reader M-Phase Phosphoprotein 8 (MPP8) is highly expressed in ESCs and germline cells. Although evidence of a cooperation between H3K9 KMTs and MPP8 in committed cells has emerged, the interplay between H3K9 methylation writers and MPP8 in ESCs remains elusive. Here, we show that MPP8 interacts physically and functionally with SETDB1 in ESCs. Indeed, combining biochemical, transcriptomic and genomic analyses, we found that MPP8 and SETDB1 co-regulate a significant number of common genomic targets, especially the DNA satellite repeats. Together, our data point to a model in which the silencing of a class of repeated sequences in ESCs involves the cooperation between the H3K9 methylation writer SETDB1 and its reader MPP8.

Introduction

Post-translational modifications of histones play key roles in DNA functions in the chromatin context. Among these, methylation of histone H3 lysine 9 (H3K9) generally correlates with transcriptional repression and heterochromatin formation [1]. The main location of H3K9 methylation is heterochromatin and, more generally, repetitive elements such as the major and minor satellite repeats of the mouse genome [1]. Highly condensed heterochromatin regions are enriched in trimethylated H3K9 (H3K9me3), whereas euchromatin regions are preferentially enriched in mono- and di-methylated H3K9 (H3K9me1 and H3K9me2, respectively) [2]. SET Domain Bifurcated histone lysine methyltransferase 1 (SETDB1, or ESET in mouse) is a key H3K9 lysine methyltransferase (KMT). SETDB1 is able to establish H3K9 mono-, di- and tri-methylation, the last in cooperation with its co-factor ATF7IP (Activating Transcription Factor 7-Interacting Protein 1, also called MCAF1), which is necessary for the conversion of H3K9me2 to H3K9me3 [3]. This ability of SETDB1 to establish all three H3K9 methylation levels makes this KMT important in both euchromatin and heterochromatin. SETDB1 is essential for mouse embryonic stem cell (mESC) pluripotency and self-renewal [4][5][6], and its knockout (KO) is lethal at the peri-implantation stage at 3.5 dpc [7]. Setdb1-null mouse blastocysts fail to give rise to ESCs in vitro [7], and Setdb1 knockdown (KD) in mESCs results in loss of Oct4 expression, abnormal expression of various differentiation markers and de-repression of many repeated elements [6,8]. Consistently, SETDB1 has been found to occupy and silence trophoblastic and developmental genes, as well as retroviruses, in mESCs [4,5,[8][9][10]. Expression of MPP8 is especially high in stem and germ cells, and MPP8 may thus play key roles in the chromatin features of these cells. However, its role in ESCs has never been studied, contrary to the main H3K9 KMTs SETDB1, G9A/GLP and SUV39H1/2, which are extensively studied in mESCs (reviewed by Mozzetta et al. in [1]). Here, we have combined biochemical and genomic strategies to gain insight into the functions of the major H3K9 reader MPP8 in mESCs, where, compared to committed cells, H3K9 methylation plays key roles in silencing the non-coding genome and transposable elements.
Our TAP-tag assay showed that MPP8 co-purifies with many H3K9 KMTs, with the highest score for SETDB1. The interaction of endogenous MPP8 and SETDB1 was confirmed in mESCs. Combined ChIP-seq and RNA-seq revealed that MPP8 cooperates with SETDB1 to co-regulate repeated elements, including the minor and major satellite DNA repeats. Our results suggest a new regulatory mechanism for repeated sequences in mESCs which involves the cooperation between the major H3K9 methylation writer SETDB1 and its reader MPP8.

Setdb1 cKO mESCs were established by the group of Prof. Yoichi Shinkai via standard gene targeting procedures [16]. To generate the Setdb1 cKO mESC line, a Cre recombinase-estrogen receptor (Cre-ER) fusion gene was introduced into a clone containing targeted Setdb1 cKO and KO alleles. To induce deletion of the Setdb1 cKO allele, TT2 mESCs were cultured in 800 nM 4-hydroxytamoxifen (Sigma; St. Louis, Missouri, USA) for 96 hours.

siRNA Transfection

HM1 ESCs were seeded in ESC medium to achieve 70-80% confluence and, 24 h later, were transfected with siRNAs (SETDB1, MPP8 and a non-targeting scrambled control) at a final siRNA concentration of 25 pM using the Lipofectamine RNAiMax reagent and OPTIMEM (Life Technologies; Waltham, Massachusetts, USA). A second round of transfection was performed 24 h later using the same siRNA concentration. Cells were harvested 72 h after the first transfection. siRNA sequences are listed in Table 1.

Protein Complex Immuno-Purification

We used HeLa-S3 cell lines stably expressing Flag-HA-tagged MPP8 (chromobox only or full-length), established by retroviral transduction of human transgenes, as described in [17]. A cell line transduced with the empty pREV vector was used as a control. We carried out double-affinity purification of Flag-HA-MPP8 from HeLa cells using Flag (Cat# F7425; Sigma; St. Louis, Missouri, USA) and HA (Cat# 3F10; Roche; Basel, Switzerland) antibodies and either nuclear soluble or chromatin fractions, as described in [17]. Double-immunopurified complexes were resolved on 4-12% SDS-PAGE bis-Tris acrylamide gradient gels in MOPS buffer (Invitrogen; Waltham, Massachusetts, USA) and analyzed by mass spectrometry and Western blot.

Chromatin Immunoprecipitation (ChIP)

Cells were cross-linked directly in the culture plate with PBS supplemented with 1 mM MgCl2 and 2 mM disuccinimidyl glutarate (Thermo-Fisher Scientific; Waltham, Massachusetts, USA) diluted in DMSO (Sigma; St. Louis, Missouri, USA) for 45 min at RT, as described in [18]. Then, a second crosslinking was carried out in culture medium supplemented with 1% formaldehyde.

ChIP-Sequencing (ChIP-Seq)

Five to fifteen nanograms of ChIPed DNA or un-enriched whole-cell extract (Input) were prepared for sequencing on an Illumina HiSeq 2000. We used the library kit (TruSeq DNA sample prep kit V2, Illumina; San Diego, California, USA) with the following modifications: DNA fragments were repaired to blunt ends, purified with magnetic beads (Agencourt AMPure XP, Beckman Coulter), and a step of A-tailing was performed before ligation of the Illumina adapters. Two steps of DNA purification with magnetic beads were carried out to eliminate un-ligated adaptors, and the libraries were then amplified with 15 PCR cycles. To remove un-ligated adapters and un-sequenceable large DNA fragments, DNA libraries were selected on E-Gel (2% SizeSelect, Invitrogen; Waltham, Massachusetts, USA) to obtain 280-330 bp DNA fragments (including 130 bp of adapters).
Final libraries were quality-checked on a DNA high-sensitivity chip (Agilent; Santa Clara, California, USA) and for positive target enrichment by qPCR. Quality-controlled samples were then quantified by qPCR and PicoGreen (Qubit 2.0 Fluorometer, Invitrogen; Waltham, Massachusetts, USA). Libraries were pooled on the basis of their distinct adapters and relative qPCR measurements. Cluster amplification and the subsequent sequencing steps strictly followed the standard Illumina protocol. Sequenced reads were de-multiplexed to attribute each read to a DNA sample and then aligned to the reference mouse genome mm10 with bowtie (-t -q -p 8 -S -n 2 -e 70 -l 50 -maxbts 125 -k 1 -m 1 -phred33-quals). After removal of PCR duplicates, enriched regions were detected with the MACS 1.4 software package [19], using Input DNA as a control; MACS 1.4 output was also used to visualize read enrichments. The BEDTools [20] "intersect" and "mergeBED" utilities were used to filter and merge overlapping peaks between replicates. Merged peaks were assigned to the closest gene within a 10 kb distance using the Ensembl annotation, and also to overlapping repeats using the RepeatMasker annotation. Peak visualization was carried out with the Integrative Genomics Viewer (Broad Institute) [21].

RNA and Quantitative Reverse Transcription-PCR (RT-qPCR)

Total RNA was extracted using the RNeasy mini kit (Qiagen; Venlo, Netherlands) following the manufacturer's procedures. DNase (Qiagen; Venlo, Netherlands) treatment was performed to remove residual DNA. One microgram of total RNA was reverse transcribed with the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems; Waltham, Massachusetts, USA). Real-time quantitative PCR was performed to analyze relative gene expression levels using SYBR Green Master Mix (Applied Biosystems; Waltham, Massachusetts, USA) following the manufacturer's instructions. Relative expression values were normalized to the housekeeping gene mRNAs Cyclophilin A or GAPDH. Primers are listed in Table 1.

Using the Galaxy platform, high-quality single-end reads for the three replicates (siRNA SETDB1, MPP8 and control) were mapped onto the mouse genome (mm10) using Bowtie2 v2.3.4.1 [22] with default alignment parameters (-I 0 -X 500). The Mus musculus GRCm38.9 GTF file was downloaded from Ensembl (https://www.ensembl.org) and the exon annotation lines were extracted from the file for use as the GTF annotation. FeatureCounts v1.6.0.2 [23] was used with this GTF file to process the BAM alignment files and calculate gene expression values in transcripts per million (TPM). Differentially expressed genes were identified using the DESeq2 Galaxy wrapper v2.11.40.6 [24]. mm10 repeat elements were downloaded from the RepeatMasker site (www.repeatmasker.org), and repeats annotated as "Low complexity" or "Simple repeat" were filtered out. RepEnrich [25] was used to annotate and count the reads aligned to repeats. The ARTbio Galaxy wrapper (edgeR-repenrich v1.5.3) of EdgeR [26] was used to identify differentially expressed repeats and repeat classes. The Database for Annotation, Visualization, and Integrated Discovery (DAVID) version 6.8 (https://david.ncifcrf.gov/) [27,28] was employed for Gene Ontology (GO) analysis.

Additional Statistical Analyses

The collected data were analyzed using the Statistical Package for Social Sciences (SPSS Version 25, Chicago, IL). Normality of the data was assessed using the Shapiro-Wilk test. Results were presented as mean and standard deviation (SD) or median and interquartile range (IQR).
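As an illustration of the peak annotation step described in the ChIP-Seq analysis above (assigning each merged peak to the closest gene within 10 kb), a minimal sketch follows. The gene coordinates used here are placeholders for illustration only, not the Ensembl annotation used in the study:

```python
def distance(peak, gene):
    """Genomic distance between a peak and a gene interval; 0 if they overlap."""
    (p_start, p_end), (g_start, g_end) = peak, gene
    if p_end >= g_start and g_end >= p_start:
        return 0
    return g_start - p_end if g_start > p_end else p_start - g_end

def assign_peaks(peaks, genes, max_dist=10_000):
    """Assign each peak to the closest gene on the same chromosome within max_dist."""
    assignments = {}
    for peak_id, (chrom, start, end) in peaks.items():
        candidates = [
            (distance((start, end), (g_start, g_end)), name)
            for name, (g_chrom, g_start, g_end) in genes.items()
            if g_chrom == chrom
        ]
        close = [c for c in candidates if c[0] <= max_dist]
        if close:
            assignments[peak_id] = min(close)[1]  # smallest distance wins
    return assignments

# Placeholder annotation and peak, for illustration only
genes = {"Pou5f1": ("chr17", 35_506_018, 35_510_772)}
peaks = {"peak_1": ("chr17", 35_500_000, 35_501_000)}
print(assign_peaks(peaks, genes))  # {'peak_1': 'Pou5f1'} (within 10 kb upstream)
```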
Differences between groups were compared using a t-test or a Mann-Whitney unpaired test, depending on the result of the normality test. Statistical significance was assessed at the α < 0.05 level for RT-qPCR.

Data Availability

All NGS sequencing data that support the findings of this study are available in the BioProject database of NCBI (https://www.ncbi.nlm.nih.gov/bioproject) under the accession number PRJNA565573.

MPP8 Protein Complex Characterization

To investigate MPP8 functions, we first sought to globally identify its protein partners using a Tandem Affinity Purification (TAP)-tag strategy coupled to mass spectrometry (MS), as previously described [17]. To this end, we carried out HA-Flag double-affinity immuno-purification from either nuclear soluble or nucleosome-enriched protein fractions (Figure 1A and Supplementary Figure S1A) of HeLa cells stably expressing the MPP8 chromobox (MPP8cbx). MS and Western blot (WB) analyses showed that MPP8 is associated with proteins linked to H3K9 methylation, such as many H3K9 KMTs and effector proteins, but also with scaffold proteins of the nuclear architecture and proteins involved in RNA processing, chromatin remodeling and DNA repair (Table S1). To gain more insight into MPP8 functions within chromatin, we next sought to identify post-translational modifications (PTMs) on the histones co-purified with MPP8 by in-gel propionylation and trypsin digestion followed by nano-LC coupled to an ion trap mass spectrometer, as described in [29]. Regarding H3K9 methylation, our results show that MPP8 co-purified mainly with H3K9me2 (56%) and H3K9me3 (44%) (Supplementary Figure S1B), in accordance with previous results [29]. SETDB1 complex purification from HeLa cells did not co-precipitate enough histones to perform such an analysis [29]. G9A, in contrast, co-purified mostly with H3K9me1 and H3K9me2, as expected [29]. Concerning H3K36 methylation, which is associated with transcription elongation, we found in association with MPP8 43% unmethylated H3K36, 17% H3K36me1, 39% H3K36me2 and only 1% H3K36me3. For the KMT G9A, we found 48% unmethylated H3K36, 30% H3K36me1, 20% H3K36me2 and 2% H3K36me3 (Supplementary Figure S1B). Thus, there is less H3K36me1 and more H3K36me2 in association with MPP8 compared to G9A, suggesting that MPP8 is recruited to active genes.

Next, we decided to focus on the significance of the MPP8/SETDB1 interaction. To this end, we performed experiments in a more physiological cellular model, namely mouse embryonic stem cells (mESCs). Indeed, MPP8 is known to be highly abundant in ESCs, where SETDB1 and H3K9 methylation play key roles in transcriptional silencing, in the absence of a complete establishment of DNA methylation before cell differentiation. Thus, to further support our findings, we sought to confirm the interactions revealed by TAP-tag/MS at the endogenous level by performing co-immunoprecipitation (co-IP) experiments from nuclear extracts of mESCs, followed by WB analyses. Our results showed that immunoprecipitation of MPP8 co-precipitated SETDB1 (Figure 1C); conversely, SETDB1 co-precipitated with MPP8 (Figure 1C). We next studied the functional significance of such interactions.

(Figure 1 legend, in part: (A) Note that ten times more eMPP8 soluble complex (N) than chromatin-associated complex (C) was loaded; MW, molecular weight marker, in kDa. (B) Scores of the most abundant peptides identified by MS. (C) Endogenous MPP8 and SETDB1 interact in mESCs.
Nuclear extracts from HM1 mESCs were used for immunoprecipitation (IP) with the indicated antibodies; IgG was used as a negative control. The resulting precipitates were then subjected to western blot (WB) with the indicated antibodies.)

Our data showed that the distribution of MPP8 binding sites on the genome is almost 60% on intergenic regions (including repeated elements), 37% on introns and 4% on exons (Figure 2A). Since MPP8 co-purified with the H3K9 KMT SETDB1, we next studied the extent of MPP8 co-localization with SETDB1. We found co-localization of MPP8 with SETDB1 at 6.5% of its binding sites (Figure 2B). We found that 1916 genes are bound by MPP8. Interestingly, we observed genome-wide co-localization of MPP8 and SETDB1 on 1329 genes, which are mainly involved in nucleosome assembly (Supplementary Figure S2 and Supplementary Table S2). Concerning transposable element analyses, MPP8 peaks were proportionally located in LTR, LINE and SINE repeats, which is similar to the binding sites reported for SETDB1 (Figure 2C). A total of 9499 transposable elements enriched in the binding sites of MPP8 were found, among which around 20% (1843/9499) are also bound by SETDB1 (Figure 2D and Supplementary Table S3). Interestingly, MPP8 and SETDB1 co-bind mostly LINE and ERVK elements, but also (peri-)centromeric satellite DNA repeats including GSAT_MM and SYNREP (Figure 2D). Altogether, the ChIP-seq data suggest that MPP8 cooperates with SETDB1 in silencing transposable elements and satellite DNA repeats in mESCs.

Regulation of Gene Expression by MPP8 and SETDB1 in mESCs

In light of the co-immunoprecipitation and ChIP-seq results described above, showing a physical interaction between MPP8 and SETDB1 and genomic co-binding in mESCs, we further investigated the possible functional interplay between MPP8 and SETDB1. We asked whether MPP8 and SETDB1 co-regulate gene expression in mESCs. To further confirm the functional overlap between MPP8 and SETDB1, we performed RNA-seq analyses in HM1 ESCs after MPP8 or SETDB1 knockdown (KD). RNA-seq data were further analyzed for differential expression patterns of genes and transposable elements, using the scrambled siRNA condition as a control across the whole dataset. siRNA-mediated acute KD of MPP8 or SETDB1 in HM1 ESCs showed a 70% decrease in the mRNA level of the specific targets (not shown) and about the same decrease at the protein level (Figure 3A). RNA-seq analyses showed that 733 genes were dysregulated upon MPP8 KD and 1605 upon SETDB1 KD (Log2FC > 1 and p-value < 0.05) (Figure 3B and Supplementary Figure S3). Interestingly, a total of 190 genes were commonly upregulated upon MPP8 or SETDB1 KD (Figure 3C and Supplementary Table S4). Gene ontology analysis indicates that the commonly upregulated genes are mainly involved in the regulation of cell differentiation, cell proliferation and, interestingly, telomere maintenance (Figure 3C and Supplementary Figure S3). We observed that 112 genes bound by MPP8 were also upregulated upon MPP8 KD (Figure 3D). Interestingly, important biological processes such as regulation of transcription and DNA replication seem to be affected upon MPP8 KD (Figure 3D). SETDB1 or MPP8 KD also induced downregulation of gene expression (Figure 3B). While gene upregulation upon SETDB1 or MPP8 KD is expected, the downregulation is less so; in the majority of cases, downregulation is due to secondary events.
Indeed, crossing the list of genes downregulated upon MPP8 KD with the list of MPP8-bound genes identified only 20 genes that are both bound by MPP8 and downregulated upon MPP8 KD, compared with 42 upregulated genes. Altogether, our results suggest that MPP8 and SETDB1 not only are enriched in similar genome regions, but also co-regulate a set of genes.

SETDB1 and MPP8 Cooperate to Silence Satellite DNA Repeats

Concerning transposable elements, the differential expression analysis compared to scrambled siRNA-transfected HM1 cells showed that the elements most de-repressed upon MPP8 KD are the GSAT and SYNREP satellites and the L1, X21 and Lx3 LINE repeats (Figure 4A). Thus, our data showed for the first time that MPP8 is required for the silencing of satellite repeats in mESCs. Interestingly, the majority of these transposable elements were also upregulated upon SETDB1 KD (Figure 4A and Supplementary Table S5). Of note, in accordance with previous reports [16,30], our results also show many families of ERV and LINE sequences upregulated after SETDB1 KD (Supplementary Figure S4). Next, we checked the overlap with the SETDB1 KD results. Interestingly, we found that the satellite repeats GSAT and SYNREP from the (peri-)centromeres are also de-repressed upon SETDB1 KD (Figure 4B).

(Figure 4 legend, in part: (C) RT-qPCR analysis of major and minor satellite sequences, as well as LINE repeats, in mESCs upon MPP8 KD and/or Setdb1 KO (cKO). mRNA levels were normalized to GAPDH or Cyclophilin A mRNA. For statistical significance, Student t-tests were applied to data following a normal distribution; otherwise, Mann-Whitney unpaired tests were applied (n = 3 biological replicates); ** p < 0.05 or *** p < 0.01. (D) Genome browser representation including tracks for MPP8 ChIP-seq and SETDB1 ChIP-seq in mESCs at GSAT and SYNREP satellites; repeat data were retrieved from RepeatMasker via the UCSC genome browser.)

As shown above, MPP8 and SETDB1 independently regulate the expression of satellite DNA and LINE repeats. We thus tested whether the effect of MPP8 changes in the presence or absence of SETDB1. To address this, we used conditional Setdb1 KO TT2 mESCs (cKO) [16], in which depletion of endogenous SETDB1 is fully achieved after 96 h of treatment with 4-hydroxytamoxifen (OHT). We induced siRNA-mediated acute KD of MPP8 and, at the same time, endogenous depletion of SETDB1 in these ESCs, in parallel with the ad hoc controls. MPP8 KD showed a 40% decrease in MPP8 mRNA level when depletion of SETDB1 was induced, and a 67% decrease without OHT treatment. However, even though the efficiency of the MPP8 KD was not very high, a trend toward significance showing upregulation of minor and major satellites and LINE repeats upon MPP8 KD was observed, in accordance with our RNA-seq data. Of note, the consensus sequences of the primers we used in RT-qPCR could amplify many families of satellite and LINE repeats, not only the specific ones upregulated in the RNA-seq. Interestingly, the upregulation of major and minor satellites as well as LINE elements is significantly higher upon concomitant MPP8 and SETDB1 loss-of-function (Figure 4C), suggesting their synergistic roles. Finally, analysis of the MPP8 and SETDB1 ChIP-seq-associated reads confirmed robust recruitment to the GSAT and SYNREP repeats (Figure 4D). Altogether, these data suggest that SETDB1 and MPP8 cooperate in silencing satellite DNA repeats in mESCs.

Discussion

We have combined biochemical and genomic strategies to gain insight into the functions of the major H3K9 methylation reader MPP8 in mESCs.
We first characterized the MPP8 complex partners and confirmed that MPP8 forms complexes with many members of the H3K9 methylation machinery, including the G9A and GLP H3K9 KMTs, as already known. Interestingly, we found that MPP8 interacts most robustly with the H3K9 KMT SETDB1 and its co-factor ATF7IP. We showed a physical and functional interaction between MPP8 and the major H3K9 KMT SETDB1 in mESCs. SETDB1 loss-of-function induces early embryonic lethality between 3.5 and 5.5 dpc [7]. Indeed, H3K9 methylation established by SETDB1 is essential in pluripotent mESCs where, in addition to coding-gene repression, it is required for the silencing of repetitive elements, ensuring the stability of genetic information. In general, H3K9 methylation and its effectors play key roles in silencing the non-coding genome and transposable elements in pluripotent ESCs, compared to more committed cells in which an additional layer of DNA methylation is established [10,31,32]. Our data showed that, in addition to co-regulating coding genes, the association of the H3K9 KMT SETDB1 with the H3K9 methylation reader MPP8 extends to further genomic elements, including the major and minor satellite DNA as well as some LINE elements.

The MPP8 gene is present only in vertebrates, reminiscent of some other proteins linked to the H3K9 methylation mark, such as TRIM28/KAP1 and ATF7IP (the SETDB1 cofactor), whereas others, such as the Heterochromatin Proteins HP1, are conserved beyond vertebrates. MPP8, as part of the HUSH complex, has been proposed to be involved in "position-effect variegation" (PEV)-like silencing in the human and mouse genomes [33]. In an siRNA screen, MPP8 was shown to repress the expression of a retrovirus integrated into heterochromatin [33]. In vertebrates, none of the HP1 proteins seemed to induce PEV-like phenomena on integrated reporters in human cells [34], whereas two proteins of the HUSH complex, SETDB1 and MPP8, do. This evolution of H3K9 methylation by SETDB1 and its readers from fly to vertebrates is concomitant with the appearance of the new factors MPP8, MCAF and KAP1 and of a whole new family of zinc finger proteins. Expression of MPP8 is especially high in stem and germ cells and may thus play key roles in the chromatin features of these cells.

As H3K9 methylation is mainly located in repeated elements, so are the main H3K9 KMTs and the effector proteins that bind these motifs. Among these repetitive elements, the major and minor satellite DNA, the LTR-containing retroelements (ERVs) and the non-LTR-containing retrotransposons (LINEs) are epigenetically repressed by H3K9 methylation in ESCs [2,35]; methylated H3K9 then constitutes a docking site for effector proteins that bind it via specific domains, such as MPP8 via its chromodomain. Lysine methylation is not known to directly regulate chromatin structure, since the addition of a methyl group to a lysine does not affect its charge, in contrast to acetylation, for example [36]. Instead, methylated lysines are recognized by effector proteins called readers of lysine methylation, such as MPP8, which binds methylated H3K9 via its chromodomain and regulates chromatin structure and the subsequent biological response. Thus, the combined recruitment of a writer and a reader of lysine methylation, for instance SETDB1 and MPP8, would provide a means for not only the establishment but also the spreading and maintenance of H3K9me3 at the targeted genomic regions, such as repeated elements.
MPP8, through its interaction with SETDB1, would participate in the spreading of SETDB1-mediated H3K9 trimethylation to silence the transcription of repeated elements. In addition, MPP8 has been shown to bind the G9A-methylated form of the SETDB1 cofactor ATF7IP [37], thus providing another means to further stabilize the writer-reader interaction. In the mouse, the centromeric and pericentromeric regions are enriched in two conserved tandem repeats, the minor and major satellite DNA (such as SYNREP_MM and GSAT_MM, respectively) [38,39]. These satellite sequences are important for sister chromatid cohesion, kinetochore formation and spindle microtubule attachment during M-phase. Major and minor satellite repeats are transcribed during the cell cycle and during early development [39][40][41][42]. Interestingly, MPP8 is known to be phosphorylated at the M-phase of the cell cycle, and its phosphorylation lowers its affinity towards H3K9me3 [14,43,44]. This is concomitant with the massive phosphorylation of H3S10 during M-phase [45,46]. Thus, these two mechanisms could contribute to satellite DNA transcription during the cell cycle, which is important for heterochromatin re-establishment during cell division. In summary, our data suggest that SETDB1 and MPP8 cooperate in the cell cycle-dependent repression of the satellite DNA.

Supplementary Materials: Figure S4: Fold changes of individual transposons upregulated upon SETDB1 KD (Log2FC > 1; p-value < 0.05), shown as a barplot. Table S1: MPP8 complex components as revealed by mass spectrometry in HeLa cells. Table S2: Genes enriched in the binding sites of MPP8 and SETDB1 in mESCs. Table S3: Transposable elements enriched in the binding sites of MPP8 and SETDB1 in mESCs. Table S4: Upregulated genes identified by comparing RNA-seq data from biological triplicates upon MPP8 and SETDB1 KD in mESCs (Log2FC > 1; p-value < 0.05). Table S5: Upregulated transposable elements identified by comparing RNA-seq data from biological triplicates upon MPP8 and SETDB1 KD in mESCs (Log2FC > 0.5; p-value < 0.05).
Older mothers and increased impact of prenatal screening: stable livebirth prevalence of trisomy 21 in the Netherlands for the period 2000–2013

In the Netherlands, there is no registry system regarding the livebirth prevalence of trisomy 21 (T21). In 2007, a national screening programme was introduced for all pregnant women, which may have changed the livebirth prevalence of T21. The aim of this study is to analyse trends in factors that influence the livebirth prevalence of T21 and to estimate the livebirth prevalence of T21 for the period 2000–2013. National data sets were used on the following: (1) livebirths according to maternal age, and (2) prenatal testing and termination of pregnancy (ToP) following diagnosis of T21. These data are combined in a model that uses the maternal age-specific risk of T21 and correction factors for natural foetal loss to assess the livebirth prevalence of T21. The proportion of mothers aged ≥ 36 years increased from 12.2% in 2000 to 16.6% in 2009, and gradually decreased afterwards to 15.2% in 2013. The number of invasive tests performed, adjusted for total livebirths, decreased (5.9% in 2000 vs. 3.2% in 2013) by 0.18% a year (95% CI: −0.21 to −0.15; p < 0.001). Following invasive testing, a higher proportion of foetuses was diagnosed with T21 (1.6% in 2000 vs. 4.8% in 2013), a significant increase of 0.22% a year (95% CI: 0.18–0.26; p < 0.001). The proportion of ToP subsequent to T21 diagnosis was on average 85.7%, with no clear time trend. This resulted in a stable T21 livebirth prevalence of 13.6 per 10,000 livebirths (regression coefficient −0.025; 95% CI: −0.126 to 0.077; p = 0.60).

Introduction

Trisomy 21 (T21), also called Down syndrome (DS), is the most common chromosomal disorder among liveborn infants and is associated with intellectual disability and other serious morbidity [1]. Studies have shown that T21 livebirth prevalence has been influenced mainly by two trends with counteracting effects [2][3][4][5][6][7]. Firstly, in developed countries, a rise in maternal age increases the chance of giving birth to a child with T21 [8]. In the Northern Netherlands, the percentage of mothers aged > 35 years also increased from 12 to 17 percent in the period 1993-2004 [5]. Secondly, more precise and advanced technologies for T21 screening during pregnancy have become available and are being offered to pregnant women, which may have changed T21 livebirth prevalence. In the Netherlands, before 2007, prenatal screening with the First-trimester Combined screening Test (FCT) was only recommended for women older than 35 years; however, women over 35 years could also opt for direct prenatal diagnosis through amniocentesis (AC) or chorionic villus sampling (CVS) [9]. A national program for prenatal screening started in 2007, consisting of all pregnant women receiving information about the FCT in the first trimester and, furthermore, the offer of a structural anomaly scan in the second trimester [10]. Risk assessment for trisomies 21, 13 and 18 with the FCT is based on maternal age, foetal nuchal translucency thickness and the concentrations of free β-human chorionic gonadotrophin and pregnancy-associated plasma protein-A in maternal serum. If a woman receives a "high-risk" FCT result (≥ 1:200) or abnormal ultrasound findings are present, prenatal diagnostic testing by CVS or AC is offered as follow-up.
The purpose of prenatal DS screening is to enable autonomous, informed decision making with regard to carrying a pregnancy to term or termination of pregnancy (ToP) [11,12]. Studies have shown that many women decline follow-up invasive testing, sometimes due to the miscarriage risk of 0.11-0.22% [13][14][15]. Studies have also shown that the vast majority (85-95%) of women who receive a foetal T21 diagnosis terminate their pregnancy [16]. It is unclear whether actively informing all pregnant women about prenatal screening tests, starting around 2004 and resulting in a national screening program in 2007, led to a change in the livebirth prevalence of T21. The aims of this study were to analyse trends in factors that influence T21 livebirth prevalence and to estimate the livebirth prevalence of T21 in the Netherlands for the period 2000-2013.

Model

In this study, we estimate the number of T21 livebirths in the Netherlands on the basis of a model with the following variables: (1) the number of total livebirths specified for maternal age; (2) the maternal age-specific T21 risk; and (3) the number of ToPs in the case of T21. Livebirth prevalence is defined as the number of liveborn children with T21 per 10,000 livebirths (Fig. 1).

Data sets

In order to assess the numbers for the above-mentioned variables and the relevant trends of T21 in the Netherlands, the following two data sets were combined: the Central Bureau of Statistics (CBS) and the Working Party on Prenatal Diagnosis and Therapy (WPDT). The CBS collects and processes national data on a mandatory, anonymous basis. The published data cover multiple societal aspects of the Dutch population, including numbers and specifics with regard to childbirth. In this study, the total number of annual livebirths and the maternal age distribution (each year by age of mother in completed years at the time of birth) were used. In the CBS tables, some children are included in the year in which they were reported to the municipal administration while they were born in the previous or next year; therefore, our numbers differ slightly from the CBS tables online [17]. The WPDT is part of the Dutch Society for Clinical Genetics (VKGN) and the Dutch Society for Obstetricians and Gynaecologists (NVOG). Since 1991, the WPDT has collected the data in the Netherlands concerning prenatal T21 screening. The WPDT annual report contains statistics on performed prenatal diagnoses by AC or CVS, diagnosed T21 cases and the number of T21 pregnancies terminated after diagnosis. Different WPDT reports are circulating; we used the final versions [18]. This data set does not contain personal information and is therefore not privacy-sensitive. The Medical Ethical Committee of VU University Medical Center stated that no permission needed to be granted for this study, in accordance with Dutch research legislation (WMO).

The model (Fig. 1) is:

Livebirth prevalence of T21 = (actual number of T21 livebirths / number of total livebirths) × 10,000
Actual number of T21 livebirths = expected number of T21 livebirths − corrected number of ToP
Expected number of T21 livebirths = sum over all age categories* of (number of women in each age category × maternal age-specific T21 risk)
Corrected number of ToP = (AC_ToP × 0.75) + (CVS_ToP × 0.68)

where AC_ToP = terminations of pregnancy following a positive amniocentesis and CVS_ToP = terminations of pregnancy following positive chorionic villus sampling.
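In code, the Fig. 1 model amounts to the following sketch. The age bands, livebirth counts and T21 risks below are placeholder values for illustration only; the study itself uses the full maternal age-specific risk curve of Morris et al. [19] and national registry counts:

```python
def t21_livebirth_prevalence(births_by_age, risk_by_age, ac_top, cvs_top):
    """Estimate natural and actual T21 livebirth prevalence per 10,000 (Fig. 1 model)."""
    total_births = sum(births_by_age.values())
    # Expected T21 livebirths in the absence of screening ("natural" number)
    expected = sum(n * risk_by_age[age] for age, n in births_by_age.items())
    # ToPs corrected for natural foetal loss (survival to term: AC 0.75, CVS 0.68)
    corrected_top = ac_top * 0.75 + cvs_top * 0.68
    actual = expected - corrected_top
    natural_prev = expected / total_births * 10_000
    actual_prev = actual / total_births * 10_000
    impact = (natural_prev - actual_prev) / natural_prev  # impact of screening
    return natural_prev, actual_prev, impact

# Toy inputs: two age bands with placeholder livebirth counts and T21 risks
births = {30: 150_000, 38: 30_000}
risks = {30: 1 / 900, 38: 1 / 175}   # illustrative values, not Morris et al.'s
print(t21_livebirth_prevalence(births, risks, ac_top=120, cvs_top=80))
```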
Fig. 1 Model for estimating the livebirth prevalence of T21. The correction rates for natural foetal loss were 25% for women screened by amniocentesis (hence survival to term 1 − 0.25 = 0.75) and 32% for women screened by CVS (hence survival to term 1 − 0.32 = 0.68). *Each year by age of mother at the time of birth, from 15 to 60 years old.

Expected number of T21 livebirths

By multiplying the number of women who delivered a liveborn child in each age category by the age-specific T21 risks, and summing over all maternal age categories, the annual number of expected T21 children could be estimated. The maternal age-specific risks of T21 proposed by Morris were used [19]. Compared to the other models, the analysis of Morris et al. is the most recent and is based on the largest data set. Furthermore, the data from Morris et al. provide some evidence that the risk does not continue to increase exponentially for women over age 45, as previous estimates assumed [20]. With the expected number of T21 livebirths, it was possible to estimate the "natural prevalence": the number of children with T21 that would have been born in the absence of prenatal screening and selective abortion, per 10,000 livebirths.

Actual number of T21 livebirths

Next, the expected number of T21 livebirths was corrected for the effect of prenatal testing and subsequent terminations of pregnancy. In 2011 and 2013, the WPDT annual reports contained no information on the number of ToPs from the centre in Nijmegen. We estimated this number by assuming that, in Nijmegen, the proportion of ToP (separately for AC and CVS) was similar to that in the other centres [21]. Still, the number of induced abortions does not result in an equivalent reduction in the number of T21 livebirths, due to the risk of natural foetal loss: many of these pregnancies would not have survived to term and would not have been diagnosed, as miscarriages are generally not karyotyped. The risk of natural foetal loss for T21 varies between 19 and 44%, and depends on maternal age and gestational age at prenatal testing [22]. Since maternal age information was partially missing in our data set on ToPs, overall estimates of foetal loss for women of any age were used. The correction factors for natural foetal loss were 25% for women screened by AC and 32% for women screened by CVS. These factors are based on a study by Savva et al., in which foetal loss at different maternal ages was estimated by survival analysis using follow-up of 5177 prenatally diagnosed cases [23]. The impact of prenatal screening is defined as the difference between the natural livebirth prevalence and the actual livebirth prevalence, divided by the natural livebirth prevalence.

Statistical analyses

Linear regression analyses were used to analyse time trends in mean maternal age, the impact of prenatal screening and T21 livebirth prevalence.
χ² tests were used to investigate whether the proportions of invasive tests, positive T21 diagnoses and ToP subsequent to T21 diagnosis differed between the period before (2000-2006) and after (2007-2013) the implementation of the national screening program. A p-value < 0.05 (two-sided) was considered statistically significant. All statistical analyses were performed using IBM SPSS 20.0.

Maternal age

The mean maternal age increased slightly, from 30.2 years in 2000 to 30.5 years in 2013 (Fig. 2), a significant increasing trend of 0.014 years per calendar year (95% CI: 0.001-0.027; p = 0.04). Because maternal age is reported as a discrete variable (age at last birthday) in the CBS tables, one could add 6 months to the mean maternal age, giving 30.7 years in 2000 and 31.0 years in 2013. The gradual increase of mean maternal age would seem to have stopped around 2006. The proportion of mothers aged ≥ 36 years increased from 12.2% in 2000 to 16.6% in 2009, and gradually decreased afterwards to 15.2% in 2013 (Fig. 3).

Invasive tests

During the period 2000-2013, a total of 127,077 invasive tests was performed. Expressed as a proportion of total livebirths, the number of invasive tests steadily decreased by 0.18% a year (95% CI: −0.21 to −0.15), from 5.9% in 2000 to 3.2% in 2013 (p < 0.001). The proportion of invasive tests decreased from 5.4% before to 4.2% after implementation of the national screening program in 2007 (p < 0.001). In 2000, advanced maternal age was the main reason for invasive testing (73%); by 2013, this had been reduced to 28%. In contrast to the decreasing trend in invasive tests performed, the proportion of positive T21 diagnoses from these tests increased from 1.6% in 2000 to 4.8% in 2013, a significant increase of 0.22% a year (95% CI: 0.18 to 0.26; p < 0.001). The proportion of prenatal diagnoses after invasive tests increased from 2.1% before to 3.6% after 2007 (p < 0.001). The proportion of ToP subsequent to T21 diagnosis was on average 85.7%, fluctuating between 79.3% and 93.9%, with no clear time trend (p = 0.11). Also, no significant change in the proportion of ToP was found before and after 2007 (p = 0.08). After T21 diagnosis by AC, the pregnancy was terminated less often than after CVS, ranging from 73.6% to 93.6% and from 83.6% to 96.4%, respectively (Table 1). The impact of prenatal screening increased from 29.2% in 2000 to 40.8% in 2013, showing a significant time trend of 0.86% a year (95% CI: 0.32 to 1.40; p < 0.001).

Livebirth prevalence of T21

Integration of these data resulted in an estimated livebirth prevalence of T21 that remained quite stable, ranging from 12.4 to 14.7 per 10,000. The mean livebirth prevalence of T21 for the period 2000-2013 was 13.6 per 10,000 livebirths (Fig. 4). No significant trend in mean livebirth prevalence of T21 over the years 2000-2013 was found (regression coefficient −0.025; 95% CI: −0.13 to 0.077; p = 0.60). A decline in the total number of livebirths resulted in a decrease in the absolute number of livebirths with T21 from 284 to 227 (Table 2).

Discussion

In this study, we found a stable livebirth prevalence of T21 in the Netherlands of 13.6 per 10,000 livebirths from 2000 to 2013. The effect of actively informing all pregnant women about prenatal screening tests, starting around 2004 and resulting in a national screening program in 2007, has not led to a decrease in the livebirth prevalence of T21, as estimated by our model.
Model

The model described in this study was based on maternal age and maternal age-specific T21 risk factors [19,20]. Using maternal age-specific risk factors to estimate expected T21 pregnancies is a method used internationally, as no large differences have been found when compared to empirical data; especially for time trends, no systematic differences are to be expected [24][25][26]. De Graaf et al. (2011) used a similar model to estimate the livebirth prevalence of T21 in the Netherlands. They validated the results using empirical data of postnatal T21 diagnoses. Birth numbers estimated by the theoretical model were 4% lower, corresponding to a mean difference of 0.5 per 10,000 births, compared with empirical data [6]. Although there was a slight underestimation of birth prevalence, the differences between both methods were small and the time trends in birth prevalence were similar.

Trends in factors

First, advancing maternal age is a known risk factor for T21. In the Netherlands, maternal age slightly increased until around 2006; however, it did not rise any further. The proportion of mothers aged ≥ 36 years increased from 2000 to 2009, but gradually decreased until 2013. Furthermore, a decreasing trend in the number of performed prenatal invasive tests is present. Before 2007, the FCT was already being used in pilot studies, and women older than 35 years already had direct access to invasive testing. After introduction of the FCT and the structural anomaly scan, prenatal diagnostic testing was used more effectively, meaning that mainly women identified as having a high risk for T21 based on the FCT or on ultrasound anomalies made use of prenatal invasive tests [27,28]. Because screening tests are more precise than the age criterion, a lower proportion of women underwent invasive testing on the basis of the maternal age criterion alone [10]. The increase in the proportion of positive T21 diagnoses through diagnostic testing from 2000 to 2013 can be explained by a more accurate risk assessment for pregnant women, as only high-risk women are referred for confirmation by invasive testing [28,29]. Furthermore, there has been a slight improvement in the test performance of the FCT [30]. The proportion of induced abortions subsequent to T21 diagnosis was on average 85.7%, fluctuating between 79.3 and 93.9%, with no clear time trend. More prenatal diagnoses of T21, in combination with a stable ToP rate after a prenatal diagnosis, counterbalanced the increase in natural livebirth prevalence caused by increasing maternal age (Fig. 4). The decision for ToP is a difficult one to take and an emotional challenge influenced by many personal, social, cultural and psychological factors [16,31]. In general, the proportion of pregnancy terminations in the Netherlands is the lowest of all Western countries, being around 8.5-9 per 1000 [32,33]. This can be explained by the fact that societal norms and values with regard to childbirth and termination have considerable influence on the decision to terminate a pregnancy. In the Netherlands, contraception and communication concerning sexual activities and their potential consequences are readily available for young women. Furthermore, the natural character of pregnancy is highly valued; pregnancy and delivery are generally considered non-medical events that one should not 'unnecessarily' interfere with [34].
Livebirth prevalence of T21

(Table footnote: the correction factors for natural foetal loss were 25% for women screened by AC and 32% for women screened by CVS. T21, trisomy 21; AC_ToP, termination of pregnancy subsequent to positive amniocentesis; CVS_ToP, termination of pregnancy subsequent to positive chorionic villus sampling.)

…the family). Since 2004, this was allowed, which already caused an increase in the uptake of prenatal screening tests before 2007. However, no decline in livebirth prevalence was found in 2004. The reason for a stable livebirth prevalence may be the fact that FCT uptake remained relatively low (27%) compared to other European countries. It has been shown that the uptake of prenatal testing is strongly associated with the screening policy of a given country [4,35]. Screening uptake in the Netherlands may have been influenced by the additional costs of T21 screening (€160) compared to standard antenatal care. Furthermore, the implementation of the T21 screening program was preceded by a public debate about the 'right not to know' and respect for autonomous choice in the Netherlands [35]. A different aspect may be the way midwives counsel pregnant women, focusing on relationship building and health education instead of informed decision making [36]. Bakker et al. analysed 820 questionnaires from women in the Netherlands, and the main reasons for the low uptake of the FCT were a relatively positive attitude towards DS and a negative attitude towards ToP [37].

Reported data about T21 livebirth prevalence over the last decades vary [41]. The reason for this slightly lower livebirth prevalence may be the lower maternal age in the northern region of the Netherlands compared to other regions [42]. A very recent study by de Graaf et al. (2017) shows an increase of livebirth prevalence from 11.6 per 10,000 in 1991 to a peak of 15.9 per 10,000 in 2002, followed by a slight decrease to 13.4 per 10,000 in 2013 (and 11.1 per 10,000 in 2015). Those livebirth estimates were based on the numbers of postnatal T21 diagnoses by cytogenetic centers and the non-ToP numbers of prenatal T21 diagnoses in WPDT reports. A correction had to be made for livebirths and natural foetal loss in the non-ToP category; de Graaf et al. modelled this correction in two different ways, and in both scenarios there is a decreasing livebirth prevalence after 2002. Their estimates for 2014 and 2015, as regards the percentage of livebirths after a prenatal diagnosis, were based on trend data from preceding years [39]. It is surprising that de Graaf et al. found a decreasing trend for 2000-2013, whereas no such trend was found in the current study. How can this difference in trend be explained? First of all, the major limitation of these studies is that the estimates of the number of livebirths with T21 are not completely based on actual counts. The present study is based on a model using maternal age in the general population and the data on ToP and natural loss in the case of T21; the study of de Graaf et al. models the number of livebirths with T21 from the number of reported non-ToPs. However, the current model based on Morris et al. leads to an estimated livebirth prevalence of, on average, 13.6 per 10,000 for the period 2000-2013, which is the same as found in the recent study of de Graaf et al. for the same period. There was no specific maternal age information, so overall estimates of the natural foetal loss rate were used.
However, overestimation of the foetal loss rate in younger mothers will give prenatally diagnosed cases too low a weight, which will lead to an overestimation of the actual livebirth prevalence. Conversely, underestimation of the foetal loss rate in older mothers will give prenatally diagnosed cases too high a weight, which will lead to an underestimation of the expected livebirth prevalence. Finally, it is possible that we found a stable livebirth prevalence by chance, because the 95% confidence intervals of such a model approach, applied on an annual basis as in the current study, are fairly large. Because NIPT became available in neighbouring countries in 2013, overestimation of the T21 livebirth prevalence is possible. In the Netherlands, NIPT became available, in April 2014, only for women at increased risk for T21 after the FCT. This led to about 3000-3600 Dutch women crossing the border to undergo NIPT at their own cost [43]. The number of T21 cases in this low-risk population is probably too small to affect our estimates, since presumably almost all T21 diagnoses are confirmed by invasive diagnostic testing in the Netherlands and are thereby registered in the WPDT database used here [44].

In the Netherlands, there is no national registry system for T21. EUROCAT is a regional system covering the Northern Netherlands and does not cover the whole country. The Perinatal Registry Netherlands foundation (PRN) contains records of all infants born from 16 weeks of gestation onwards under the care of a midwife at home or in a hospital, as well as those born under the care of an obstetrician in a hospital, within the first 28 days of life. As with EUROCAT, registration is done on a voluntary basis. In addition, no confirmatory genetic test is needed for the registration of DS. As a result of this voluntary basis, underestimation is likely. A national registry system that collects complete data on childbirth and T21 diagnoses is needed to ensure baseline data.

Conclusion and future research

This study showed a stable livebirth prevalence of T21, with a mean of 13.6 per 10,000, in the Netherlands during the period 2000-2013. The national screening program introduced in 2007 for all pregnant women seems to have had limited impact on the livebirth prevalence of T21. Qualitative studies could provide more insight into whether and how parents make their reproductive decisions. Since April 2017, non-invasive prenatal testing (NIPT) has been offered in the Netherlands as a first-tier screening test and an alternative to the FCT. The media have generated intense debate about DS and the NIPT screening test. The question remains whether the livebirth prevalence of T21 will change. In our opinion, a national registry is needed to ensure baseline data. Currently, data on the number of postnatal diagnoses can be derived from the cytogenetic centers; however, individual centres have to be approached. The recommendation is to collect these data annually in a national registry. Data on prenatal diagnoses and reported ToPs can be derived from the WPDT reports, but accurate reporting is required to avoid multiple versions of WPDT reports circulating. Furthermore, within the non-ToP category, a distinction could be made between 'livebirths', 'natural loss/stillbirths', and 'unknown outcomes'. A national register could contain further maternal and neonatal information as well.
Accessory Elements, Flanking DNA Sequence, and Promoter Context Play Key Roles in Determining the Efficacy of Insulin and Phorbol Ester Signaling through the Malic Enzyme and Collagenase-1 AP-1 Motifs*

Insulin stimulates malic enzyme (ME)-chloramphenicol acetyltransferase (CAT) and collagenase-1-CAT fusion gene expression in H4IIE cells through identical activator protein-1 (AP-1) motifs. In contrast, insulin and phorbol esters only stimulate collagenase-1-CAT and not ME-CAT fusion gene expression in HeLa cells. The experiments in this article were designed to explore the molecular basis for this differential cell type- and gene-specific regulation. The results highlight the influence of three variables, namely promoter context, AP-1 flanking sequence, and accessory elements, that modulate insulin and phorbol ester signaling through the AP-1 motif. Thus, fusion gene transfection and proteolytic clipping gel retardation assays suggest that the AP-1 flanking sequence affects the conformation of AP-1 binding to the collagenase-1 and ME AP-1 motifs such that AP-1 selectively binds the latter in a fully activated state. However, this influence of the ME AP-1 flanking sequence is dependent on promoter context. Thus, the ME AP-1 motif will mediate both an insulin and a phorbol ester response in HeLa cells when introduced into either the collagenase-1 promoter or a specific heterologous promoter. But even in the context of the collagenase-1 promoter, the effects of both insulin and phorbol esters mediated through the ME AP-1 motif are dependent on accessory factors.

Insulin regulates the transcription of more than 100 genes, indicating that this represents a major action of this hormone (1,2). The stimulatory and inhibitory effects of insulin on gene transcription are mediated through various cis-acting elements collectively referred to as insulin response sequences or elements (IRSs/IREs). Unlike cAMP, which regulates gene transcription predominantly through one cis-acting element (3), a single consensus IRS does not exist. Instead, six distinct consensus IRSs have currently been well defined (2), in addition to several IRSs that appear to be unique to individual genes (1). Thus, this situation resembles that for phorbol esters, which are able to regulate gene transcription through at least eight distinct consensus sequences (4). One of these consensus IRSs has the sequence T(G/A)TTT(T/G)(G/T) and mediates the inhibitory effect of insulin on phosphoenolpyruvate carboxykinase, insulin-like growth factor binding protein-1, apolipoprotein CIII, and glucose-6-phosphatase catalytic subunit gene transcription (1,2). The transcription factor FKHR binds this IRS, but whether it mediates the action of insulin through this motif is unclear (5-7). The other five consensus IRSs all mediate stimulatory effects of insulin on gene transcription. They are the activator protein-1 (AP-1) motif, the serum response element (SRE), the Ets motif, the thyroid transcription factor-2 motif, and the sterol response element-binding protein (SREBP) motif (1,2). Multiple hormones other than insulin regulate gene expression through the AP-1 motif, the SRE, and the Ets motif (8-10). In contrast, the thyroid transcription factor-2 motif has currently only been shown to mediate effects of insulin, cAMP (11), and cytokines (12) on thyroid gene expression.
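As the degenerate notation suggests, such a consensus can be scanned for mechanically. A small illustrative sketch follows; the consensus is taken from the text above, but the promoter sequence is a made-up example.

```python
import re

# Scan a promoter sequence for the inhibitory IRS consensus T(G/A)TTT(T/G)(G/T).
# The consensus comes from the text; the sequence below is a made-up example.
IRS_CONSENSUS = re.compile(r"T[GA]TTT[TG][GT]")

promoter = "GGCTATTTTGCCATGAGTCAGGTGTTTGTCC"  # hypothetical sequence
for m in IRS_CONSENSUS.finditer(promoter):
    print(f"IRS-like motif {m.group()} at position {m.start()}")
```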
Insulin and thyrotropin, the latter acting through cAMP, both stimulate the expression of thyroid transcription factor-2 (11), which contributes to the induction of thyroglobulin and thyroperoxidase gene transcription by these hormones (1,2). Similarly, insulin and cAMP both regulate the expression of SREBP-1c, although in this case their effects are antagonistic (13). The AP-1 motif binds members of the Fos (c-Fos, FosB, Fra-1, and Fra-2) and Jun (c-Jun, JunB, and JunD) transcription factor families (8) and mediates the action of insulin on the expression of the hepatitis B virus X gene and the genes encoding collagenase-1 (henceforth referred to simply as collagenase) and malic enzyme (ME) (14-18). The mechanism of insulin signaling through the AP-1 motif is poorly understood but appears to involve effects of insulin on both the phosphorylation state and the mass of the AP-1 complex (1,2). Thus, the potential exists for insulin to have both short and long term effects on gene expression through the same element. However, the mechanism of insulin signaling appears to vary with the cell type studied (14,16,17,19). For example, the stimulation of Fra-1 gene expression by insulin is seen in some (19,20), although not in all, cell types (21), and the protein kinases JNK1 and JNK2, which phosphorylate and activate c-Jun (8), are only activated by insulin in some cell types (22) and not others (19). We have previously shown that, whereas insulin stimulates collagenase-CAT and ME-CAT fusion gene expression in H4IIE cells, insulin only stimulates collagenase-CAT and not ME-CAT fusion gene expression in HeLa cells (16,17). In addition, phorbol esters stimulate collagenase-CAT but not ME-CAT fusion gene expression in HeLa cells (16). The experiments in this article were designed to explore the molecular basis for: (i) the differential regulation of collagenase and ME gene expression by insulin and phorbol esters in HeLa cells and (ii) the differential regulation of ME gene expression by insulin in H4IIE and HeLa cells. The results highlight the influence of three additional variables, namely promoter context, AP-1 flanking sequence, and accessory elements, that modulate insulin signaling through the AP-1 motif.

EXPERIMENTAL PROCEDURES

Plasmid Construction-The construction of a collagenase-CAT plasmid, containing the wild-type human collagenase promoter sequence from -158 to +64, has been previously described (17). The TKCAT plasmid contains the herpes simplex virus-thymidine kinase (TK) promoter sequence from -105 to +51 ligated to the CAT reporter gene and has a unique BamHI site in the polylinker at -105 (23). The TKC-VI plasmid contains the herpes simplex virus-TK promoter sequence from -480 to +51 ligated to the CAT gene and has a unique BamHI linker between positions -40 and -35 (24). Various double-stranded complementary oligonucleotides, representing distinct regions of the ME or collagenase promoters (Table I), were synthesized with BamHI-compatible ends and were ligated in multiple copies into BamHI-cleaved TKCAT or, as single copies in either the same or inverted orientation relative to that in the endogenous gene, into BamHI-cleaved TKC-VI. Ligation of a single copy of the ME or collagenase AP-1 motifs into the BamHI site of TKC-VI fails to confer a phorbol ester response (data not shown). Plasmid XMB contains a minimal Xenopus 68-kDa albumin promoter ligated to the CAT reporter gene and has a unique HindIII site in the polylinker (17).
Double-stranded complementary oligonucleotides, representing various regions of the ME or collagenase promoters (Table I), were synthesized with HindIII-compatible ends and ligated into HindIII-cleaved XMB in multiple copies. The orientation and number of inserts in the TKCAT, TKC-VI, and XMB plasmid constructs were determined by restriction enzyme analysis and confirmed by DNA sequencing. A previously described three-step PCR strategy (25,26) was used to switch the collagenase AP-1 flanking sequence to that of the ME AP-1 motif in the context of the collagenase promoter. The resulting construct, designated Coll 158:ME (Fig. 8), was generated within the context of the -158 to +64 collagenase promoter fragment. Briefly, two complementary PCR primers were designed; the sequence of the sense strand oligonucleotide was 5'-CAAGAGGATGTTATCCGCCGTGAGTCAGCGAGCCTCTGGCTTTC-3', in which the nucleotides flanking the AP-1 core were mutated; the AP-1 core sequence itself (TGAGTCA) was unchanged. This sense strand oligonucleotide was used in conjunction with a 3' PCR primer to generate the 3'-half of the collagenase promoter, whereas the complementary antisense strand oligonucleotide was used in conjunction with a 5' PCR primer to generate the 5'-half of the collagenase promoter. These 5' and 3' primers were designed to maintain the 5' and 3' junctions of the collagenase promoter fragments to be the same as those in the wild-type -158 to +64 collagenase-CAT fusion gene construct. The PCR products from these two reactions were then combined and used themselves as both primer and template in a second PCR reaction to generate a small amount of the full-length, mutated collagenase promoter fragment. Finally, the 5' and 3' PCR primers were used to amplify this fragment. An identical strategy was used to generate a promoter fragment in which the orientation of the ME AP-1 motif in the context of the Coll 158:ME construct was switched by changing the core sequence from TGAGTCA to TGACTCA. Truncated constructs, designated Coll 79 and Coll 79:ME (Fig. 8), were then generated using the same 3' PCR primer described above and the following 5' PCR primers, respectively: 5'-CCGCTCGAGAAAGCATGAGTCAGACAG-3' and 5'-CCGCTCGAGCCGCCGTGACTCAGCGAGCCTCTGGCTTTCTGG-3' (the AP-1 core sequences, TGAGTCA and TGACTCA, respectively, are embedded in these primers, and the 5' CCGCTCGAG extensions contain the XhoI cloning sites). All promoter fragments were completely sequenced to ensure the absence of polymerase errors, and plasmids were purified by centrifugation through cesium chloride gradients (27).
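The primer bookkeeping in this overlap-extension strategy can be checked mechanically: the antisense mutagenic primer is simply the reverse complement of the sense oligonucleotide given above, and the unchanged AP-1 core must appear on both strands. The following sketch is illustrative only and is not part of the original protocol.

```python
# Illustrative helper for the overlap-extension mutagenesis described above:
# the antisense mutagenic primer is the reverse complement of the sense
# oligonucleotide, and the (unchanged) AP-1 core must still be present.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

# Sense-strand mutagenic oligonucleotide from the text (5'->3')
sense = "CAAGAGGATGTTATCCGCCGTGAGTCAGCGAGCCTCTGGCTTTC"
antisense = reverse_complement(sense)

assert "TGAGTCA" in sense, "AP-1 core lost from sense primer"
# On the antisense strand the core reads as its reverse complement
assert "TGACTCA" in antisense, "AP-1 core lost from antisense primer"
print("antisense primer 5'->3':", antisense)
```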
Cell Culture and Transient Transfection-Rat H4IIE hepatoma cells were grown to 40-70% confluence in T150 flasks in Dulbecco's modified Eagle's medium containing 2.5% (v/v) fetal calf serum and 2.5% (v/v) newborn calf serum and were transiently transfected using the calcium phosphate-DNA co-precipitation method as previously described (16). Transfected cells were incubated for 20 h in serum-free Dulbecco's modified Eagle's medium prior to harvesting. Human HeLa cervical carcinoma cells were grown to 90% confluence in T150 flasks in Dulbecco's modified Eagle's medium containing 10% (v/v) calf serum and were replated the day before use into 55-cm² culture dishes. Attached cells were then transiently transfected using the calcium phosphate-DNA co-precipitation method as previously described (16). In some experiments (Figs. 6 and 8), the reporter gene construct (15 μg) and an expression vector for β-galactosidase (2.5 μg) were co-transfected with an expression vector encoding the insulin receptor (5 μg), courtesy of Dr. Jonathan Whittaker. After an overnight incubation, the medium was replaced with serum-free Dulbecco's modified Eagle's medium supplemented, where indicated in the figure legends, with or without 100 nM PMA or 100 nM insulin. The cells were then incubated for a further 20 h prior to harvesting. For the analysis of basal gene expression, three independent preparations of each plasmid construct were analyzed in duplicate.

CAT and β-Galactosidase Assays-Transfected HeLa and H4IIE cells were harvested by trypsin digestion and then sonicated in 300 μl of 250 mM Tris (pH 7.8) containing 2 mM phenylmethylsulfonyl fluoride. The HeLa cell lysate was assayed for β-galactosidase activity as previously described (16). The remaining HeLa cell lysate and the H4IIE lysate were heated for 10 min at 65°C, and cellular debris was removed by centrifugation. CAT assays were then performed on the supernatant as previously described (16). To correct for variations in HeLa cell transfection efficiency, the results were expressed as the ratio of CAT:β-galactosidase activity. In earlier studies we had found that phorbol esters and insulin stimulated Rous sarcoma virus-β-galactosidase expression in HeLa cells (16), but that was not apparent in this series of experiments. To correct for variations in H4IIE cell transfection efficiency, CAT activity was corrected for the protein concentration in the cell lysate, as measured by the Pierce BCA assay.

Gel Retardation Assays-To study AP-1 binding, the preparation of HeLa cell nuclear extracts, the labeling of double-stranded oligonucleotide probes, and gel retardation assays were performed under conditions exactly as previously described (16,28). Gel retardation competition experiments and partial proteolytic clipping bandshift assays were also performed as previously described (16).
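The transfection-efficiency corrections described above reduce to simple ratios. A brief sketch with made-up activity values shows the arithmetic behind the CAT:β-galactosidase normalization and the fold-induction figures reported in the results below.

```python
# Sketch of the normalization used for HeLa transfections: CAT activity is
# divided by beta-galactosidase activity to correct for transfection
# efficiency, and treated/control ratios give fold induction.
# All activity values below are made-up numbers for illustration.

def normalized_cat(cat_activity: float, beta_gal_activity: float) -> float:
    return cat_activity / beta_gal_activity

control = normalized_cat(cat_activity=120.0, beta_gal_activity=40.0)
pma_treated = normalized_cat(cat_activity=540.0, beta_gal_activity=45.0)

fold_induction = pma_treated / control
print(f"Fold induction by PMA: {fold_induction:.1f}")
```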
RESULTS

The ME AP-1 Motif Markedly Enhances Basal Fusion Gene Transcription in Both HeLa and H4IIE Cells-We have previously shown that insulin stimulates collagenase-CAT and ME-CAT fusion gene expression in H4IIE cells but only stimulates collagenase-CAT and not ME-CAT fusion gene expression in HeLa cells (16,17). In addition, we found that phorbol esters stimulate collagenase-CAT but not ME-CAT fusion gene expression in HeLa cells (16). We previously suggested that these results may be explained, in part, by the observation that the ME and collagenase AP-1 motifs are functionally distinct (16). Thus, transient transfection experiments in HeLa cells using heterologous TKCAT fusion genes showed that AP-1 binds the ME AP-1 motif, but not the collagenase AP-1 motif, in an activated state. As a consequence, only the collagenase AP-1 motif confers an additional stimulatory effect of phorbol esters on the expression of a heterologous TKCAT fusion gene (16). In these experiments, multiple copies of a double-stranded oligonucleotide representing the ME AP-1 motif were ligated into the polylinker of a heterologous TKCAT fusion gene, and this resulted in a marked increase in basal fusion gene expression relative to that obtained with the basic TKCAT vector alone (16). This marked increase in basal fusion gene expression was selectively seen with the ME AP-1 motif, because ligation of multiple copies of a double-stranded oligonucleotide representing the collagenase AP-1 motif into the TKCAT polylinker resulted in only a small increase in basal fusion gene expression (16). In contrast, when multiple copies of a double-stranded oligonucleotide representing a mutated ME AP-1 motif, which fails to bind AP-1 in gel retardation assays, were ligated into the TKCAT polylinker, there was no increase in basal fusion gene expression relative to that obtained with the basic TKCAT vector alone (16).

To extend these observations, we first determined whether the marked stimulation of basal fusion gene expression by the ME AP-1 motif was specific to the context of the heterologous TKCAT fusion gene. Therefore, experiments similar to those described above were repeated using a different heterologous fusion gene, designated TKC-VI (24). The TKCAT vector contains the herpes simplex virus TK promoter sequence from -105 to +51 ligated to the CAT reporter gene and has a unique BamHI site in the polylinker at -105 (23). By contrast, the TKC-VI plasmid contains the herpes simplex virus-TK promoter sequence from -480 to +51 ligated to the CAT gene and has a unique BamHI linker between positions -40 and -35 (24). Various double-stranded oligonucleotides representing distinct regions of the ME or collagenase promoters (Table I) were synthesized with BamHI-compatible ends and were ligated as single copies, in the same orientation as that found in the endogenous ME and collagenase promoters, into BamHI-cleaved TKC-VI. Typically, a maximal oligonucleotide size of 42 bp can be cloned into the BamHI site of the TKC-VI vector without losing basal reporter gene expression (24); therefore, multimerized AP-1 motifs were not analyzed in this vector context. The level of reporter gene expression directed by the resulting constructs was then analyzed by transient transfection of HeLa cells (Fig. 1A). Oligonucleotides representing the ME promoter sequence between -161 and -123, between -138 and -101, and between -138 and -123, all of which contain the ME AP-1 motif (Table I), conferred a marked increase in basal fusion gene expression relative to that obtained with the basic TKC-VI vector alone (Fig. 1A). In contrast, the oligonucleotide representing the collagenase AP-1 motif did not confer an increase in basal fusion gene expression (Fig. 1A). This result indicates that the activation of basal fusion gene expression by the ME AP-1 motif was not specific to the context of the TKCAT vector and, in addition, demonstrates that this effect does not require multimerization of the ME AP-1 motif. This result also raises the possibility that the slight activation of basal fusion gene expression by the collagenase AP-1 motif in the context of the TKCAT vector (16) may require the multimerization of this motif. Next, to determine whether the activation of basal fusion gene expression by the ME AP-1 motif was specific to HeLa cells, the plasmid constructs described above were transiently transfected into rat hepatoma H4IIE cells. As shown in Fig. 1B, similar results were obtained in the H4IIE cell line. Thus, the ME AP-1 motif, but not the collagenase AP-1 motif, again stimulated a marked increase in basal fusion gene expression relative to that obtained with the basic TKC-VI vector alone. Interestingly, an oligonucleotide representing the ME promoter sequence between -181 and -145, which contains overlapping Egr-1 and Sp protein-binding sites (Table I), only conferred an increase in basal fusion gene expression in HeLa and not H4IIE cells, relative to that obtained with the basic TKC-VI vector alone (Fig. 1). The potential significance of this observation for cell type-specific, insulin-stimulated ME gene expression is described under "Discussion."

TABLE I. Sequence of oligonucleotides used in these studies. All nucleotide positions are negative and are numbered relative to the transcription start site at +1. The consensus Sp1, Egr-1, and AP-1 motifs are boxed. Non-wild-type sequence is shown in lower case letters. WT, wild-type; MUT, mutant.

FIG. 1. The ME AP-1 motif markedly enhances basal TKC-VI gene transcription in both HeLa and H4IIE cells. HeLa (panel A) and H4IIE cells (panel B) were transiently transfected, as described under "Experimental Procedures," with various TKC-VI fusion gene plasmids. In addition, HeLa cells were co-transfected with an expression vector encoding β-galactosidase. The fusion gene plasmids represented either the basic TKC-VI vector or constructs in which a single copy of oligonucleotides representing the indicated wild-type (WT) ME or collagenase (Coll) promoter sequences, as shown in Table I, had been ligated into the BamHI site of the TKC-VI promoter in the same orientation as that in the native ME and collagenase promoters. Following transfection, cells were incubated for 20 h in serum-free medium. Cells were then harvested, and CAT activity, β-galactosidase activity, and protein concentration were assayed as previously described (16,17). The results are expressed as the ratio of CAT:β-galactosidase activity in HeLa cell transfections or CAT:protein concentration in H4IIE cell transfections. Results represent the mean ± S.E. of three experiments, using an independent preparation of each plasmid construct, in which each construct was assayed in duplicate.

Mutation of the Flanking Sequence Either 5' or 3' of the ME AP-1 Motif Markedly Reduces the Enhancement of Basal TKC-VI Gene Transcription in Both HeLa and H4IIE Cells without Affecting AP-1 Binding Affinity-The selective activation of basal fusion gene expression by the ME and not the collagenase AP-1 motif was surprising because the AP-1 complex binding both the ME and collagenase AP-1 motifs in HeLa cells (16) and H4IIE cells (data not shown) is predominantly a heterodimer of Fra-2 and JunD. However, partial proteolytic clipping bandshift assays indicated that AP-1 binds the ME and collagenase AP-1 motifs in distinct conformations, so we postulated that this could explain the distinct functional characteristics of the two AP-1 motifs (16). Surprisingly, the ME and collagenase AP-1 motifs share an identical core consensus sequence, TGACTCA (Table I). However, the 5' and 3' sequences flanking the core are distinct (Table I). To determine whether the distinct AP-1 flanking sequence could explain the discrete functional characteristics of the ME and collagenase AP-1 motifs, the effect of mutating this flanking sequence was investigated. Double-stranded oligonucleotides representing the ME AP-1 motif but containing mutations of the 5' (MUT1), core (MUT2), or 3'-flanking sequence (MUT3) (Table I) were synthesized with BamHI-compatible ends and were ligated as single copies into BamHI-cleaved TKC-VI in either the same or the inverted orientation relative to that found in the endogenous ME promoter. The level of reporter gene expression directed by the resulting constructs was then analyzed by transient transfection of HeLa (Fig. 2, A and C) and H4IIE (Fig. 2B) cells. Similar results were obtained in both cell lines (compare Fig. 2, A and B) and with both orientations of the ME and collagenase AP-1 motifs (compare Fig. 2, A and C).
The MUT2 oligonucleotide, which contains a mutation of the core sequence of the ME AP-1 motif that abolishes binding of AP-1 in gel retardation assays (16) (Fig. 3), failed to confer an increase in basal TKC-VI fusion gene expression (Fig. 2). Similarly, mutating the 3'-flanking sequence of the ME AP-1 motif resulted in markedly reduced basal TKC-VI fusion gene expression compared with that conferred by the wild-type ME AP-1 motif (Fig. 2). Mutation of the 5'-flanking sequence also reduced the activation of basal fusion gene expression, although it was less deleterious than the 3'-flanking sequence mutation (Fig. 2). These results could potentially be explained by an effect of the flanking sequence mutations on the affinity of AP-1 binding. To investigate this possibility, the oligonucleotides shown in Table I were used as competitors, at a 100-fold molar excess, in a gel retardation assay with the ME 138/123 WT oligonucleotide as the labeled probe (Fig. 3A). As expected, the wild-type collagenase AP-1 motif and the oligonucleotides containing the wild-type ME AP-1 motif, namely ME 161/123 WT, ME 138/101 WT, and ME 138/123 WT, all competed effectively against the labeled probe for formation of the AP-1 protein-DNA complex (Fig. 3A). In contrast, the ME 181/145 WT oligonucleotide, which does not contain the ME AP-1 motif (Table I), and the ME 138/123 MUT2 oligonucleotide, which contains a mutation of the AP-1 core sequence (Table I), did not compete for AP-1 binding (Fig. 3A). Importantly, the oligonucleotides containing the 5' or 3' ME AP-1 flanking sequence mutations, designated ME 138/123 MUT1 and MUT3, respectively, both competed effectively against the labeled probe for formation of the AP-1 protein-DNA complex when used at a 100-fold molar excess (Fig. 3A). Moreover, competition experiments in which the labeled ME 138/123 WT oligonucleotide was preincubated with various concentrations of the unlabeled ME 138/123 WT, MUT1, and MUT3 oligonucleotides indicated that these mutations do not markedly affect the affinity of AP-1 binding (Fig. 3B). Thus, all three oligonucleotides competed equally effectively for formation of the AP-1 complex (Fig. 3B).

Mutation of the Flanking Sequence Either 5' or 3' of the ME AP-1 Motif Markedly Reduces the Enhancement of Basal TKCAT Gene Transcription and Restores Phorbol Ester Responsiveness-To determine whether the effects of mutating the flanking sequence on either side of the ME AP-1 motif were specific to the context of the heterologous TKC-VI vector, experiments similar to those shown in Fig. 2 were repeated using the heterologous TKCAT vector. Multiple copies of the double-stranded oligonucleotides containing the various mutations of the ME AP-1 motif described above were ligated into the polylinker of the TKCAT vector. The level of reporter gene expression directed by the resulting constructs in the presence and absence of a phorbol ester, PMA, was then analyzed by transient transfection of HeLa cells (Fig. 4). Oligonucleotides representing the wild-type ME promoter sequence between -161 and -123 or between -138 and -123, both of which encompass the ME AP-1 motif, conferred a marked activation of basal fusion gene expression (Fig. 4B), but neither oligonucleotide was able to confer a stimulatory effect of PMA on fusion gene expression beyond that seen with the basic TKCAT vector (Fig. 4A).
With the multimerized ME 138/123 WT oligonucleotide, the spacing between individual AP-1 motifs was similar to that obtained with the multimerized collagenase 63/78 WT oligonucleotide (Table I). The latter does confer a phorbol ester response (Fig. 4A) but not an activation of basal fusion gene expression (Fig. 4B). Thus, the inability of the longer ME 161/123 WT oligonucleotide (Table I) to confer a phorbol ester response was not indicative of an inability of the individual AP-1 motifs in the multimerized oligonucleotide to synergize because of increased spacing between individual AP-1 sites. As seen in the context of the TKC-VI vector, mutation of the 5'- or 3'-flanking sequence of the ME AP-1 motif reduced the activation of basal TKCAT fusion gene expression, with mutation of the 3'-flanking sequence again being more deleterious (Fig. 4B). Importantly, in contrast to the wild-type ME AP-1 motif, these mutated ME AP-1 motifs were now able to confer a stimulatory effect of phorbol esters on fusion gene expression that was similar in magnitude, when the data were expressed as fold induction, to that obtained with the collagenase AP-1 motif (Fig. 4A). Fig. 4C shows that the wild-type ME and collagenase AP-1 motifs conferred a similar level of maximal, phorbol ester-stimulated fusion gene expression. Thus, the level of phorbol ester-stimulated collagenase AP-1 TKCAT fusion gene expression was similar to that of basal ME AP-1 TKCAT gene expression. Whereas the mutated ME AP-1 motifs were able to confer a stimulatory effect of phorbol esters on fusion gene expression (Fig. 4A), only the ME 5'-flanking mutant directed a maximal level of CAT expression similar to that obtained with the wild-type ME and collagenase AP-1 TKCAT fusion genes (Fig. 4C). This suggests that even phorbol ester treatment was unable to fully activate AP-1 bound to the ME AP-1 3'-flanking mutant. Unfortunately, in HeLa cells, unlike H4IIE cells (16), insulin markedly stimulates CAT expression directed by the control TKCAT fusion gene (data not shown). Therefore, it was not possible to determine whether insulin can also selectively activate gene transcription through the collagenase but not the ME AP-1 motif in the context of the TKCAT vector.

FIG. 2. Mutation of the flanking sequence either 5' or 3' of the ME AP-1 motif markedly reduces the enhancement of basal TKC-VI gene transcription in both HeLa and H4IIE cells. HeLa (panels A and C) and H4IIE cells (panel B) were transiently transfected, as described under "Experimental Procedures," with various TKC-VI fusion gene plasmids. In addition, HeLa cells were co-transfected with an expression vector encoding β-galactosidase. The fusion gene plasmids represented either the basic TKC-VI vector or constructs in which oligonucleotides representing the indicated wild-type (WT) or mutated (MUT) ME or collagenase promoter sequences, as shown in Table I, had been ligated into the BamHI site of the TKC-VI promoter in a single copy in either the same (correct; panels A and B) or inverted (panel C) orientation relative to that in the native ME and collagenase promoters. Following transfection, cells were incubated for 20 h in serum-free medium. Cells were then harvested, and CAT activity, β-galactosidase activity, and protein concentration were assayed as previously described (16,17). The results are expressed as the ratio of CAT:β-galactosidase activity in HeLa cell transfections or CAT:protein concentration in H4IIE cell transfections. Results represent the mean ± S.E. of three experiments, using an independent preparation of each plasmid construct, in which each construct was assayed in duplicate.

FIG. 3. Mutation of the flanking sequence 5' or 3' of the ME AP-1 motif does not affect the affinity of AP-1 binding. Panel A, the labeled ME 138/123 WT oligonucleotide probe was incubated in the absence (-) or presence of a 100-fold molar excess of the unlabeled oligonucleotide competitors shown (Table I) prior to addition of HeLa cell nuclear extract. Protein binding was then analyzed using the gel retardation assay as described under "Experimental Procedures." In the representative autoradiograph shown, only the retarded complexes are visible and not the free probe, which was present in excess. A nonspecific (NS) protein-DNA interaction is indicated by an arrow, as is the AP-1 complex. Panel B, the labeled ME 138/123 WT oligonucleotide probe was incubated in the absence (-) or presence of various concentrations of the unlabeled ME 138/123 WT, ME 138/123 MUT1, and ME 138/123 MUT3 oligonucleotide competitors (distinguished by the plot symbols in the figure) prior to addition of HeLa cell nuclear extract. Protein binding was then analyzed using the gel retardation assay as described under "Experimental Procedures." Protein binding was quantified by using a Packard Instant Imager to count 32P associated with retarded complexes. The data represent the mean ± S.D. of two experiments.

Mutation of the Flanking Sequence 5' or 3' of the ME AP-1 Motif Affects the Conformation of AP-1 Binding-The proteolytic band shift assay (29) was used to examine the possibility that the ME AP-1 5'- and 3'-flanking mutations affected the conformation of AP-1 binding. As described above, we previously used this assay to demonstrate that AP-1 bound to the wild-type collagenase and ME AP-1 motifs has different surfaces exposed to proteolytic digestion, indicative of a difference in binding conformation (16). This difference in binding conformation was hypothesized to be the basis for the selective activation of basal TKCAT fusion gene expression by the ME AP-1 motif (16). To study the effect of partial protease digestion, HeLa cell nuclear extract was preincubated with the labeled collagenase 63/78 WT, ME 138/123 WT, ME 138/123 MUT1, or ME 138/123 MUT3 oligonucleotides (Table I) prior to the addition of various concentrations of chymotrypsin (Fig. 5). A distinct proteolytic product that selectively binds the collagenase 63/78 WT, ME 138/123 MUT1, and ME 138/123 MUT3 oligonucleotide probes, but not the ME 138/123 WT oligonucleotide probe, was detected (Fig. 5, see arrow). This selectively bound product migrates faster than a nonspecific protein-DNA interaction detected in this assay (Fig. 5), so it was possible that this product was derived from proteolysis of the nonspecific protein-DNA interaction rather than the AP-1 complex. However, competition experiments revealed that the unlabeled wild-type collagenase 63/78 WT oligonucleotide competed effectively for the formation of this protein-DNA complex (data not shown), whereas it does not compete for formation of the nonspecific complex (Fig. 3A). This result demonstrates that the proteins bound to the wild-type collagenase AP-1 motif and the ME AP-1 5'- and 3'-flanking mutants have similar surfaces exposed to proteolytic digestion, indicative of similar binding conformations.
Thus, the ME AP-1 5'- and 3'-flanking mutants bind AP-1 in a conformation more similar to that of AP-1 bound to the collagenase AP-1 motif than to that of AP-1 bound to the wild-type ME AP-1 motif. These observations could explain why the oligonucleotides containing the ME AP-1 5'- or 3'-flanking mutations, just like the collagenase AP-1 motif, do not enhance basal TKC-VI (Fig. 2) or TKCAT (Fig. 4) fusion gene expression, but do mediate a phorbol ester response (Fig. 4).

FIG. 4. Mutation of the flanking sequence either 5' or 3' of the ME AP-1 motif markedly reduces the enhancement of basal TKCAT gene transcription and restores phorbol ester responsiveness. HeLa cells were transiently co-transfected, as described under "Experimental Procedures," with a β-galactosidase expression vector and either the basic TKCAT vector or constructs in which oligonucleotides representing the indicated wild-type (WT) or mutated (MUT) ME or collagenase (Coll) promoter sequences, as shown in Table I, had been ligated into the BamHI site of the TK promoter in multiple (3-4) copies. Following transfection, cells were incubated for 20 h in serum-free medium in the presence or absence of 100 nM PMA. The cells were then harvested, and both CAT and β-galactosidase activity were assayed as previously described (16,17). In panel A, results are presented as the relative ratio of CAT:β-galactosidase activity in PMA-treated versus control cells and are expressed as fold induction. In panels B and C, results are presented as the ratio of CAT:β-galactosidase activity in either control or PMA-treated cells, respectively, and are expressed as arbitrary units. Results represent the mean ± S.E. of five experiments, in which each construct was assayed in duplicate.

FIG. 5. Mutation of the flanking sequence 5' or 3' of the ME AP-1 motif affects the conformation of AP-1 binding. HeLa cell nuclear extract from control cells was incubated with the labeled ME 138/123 WT (ME WT), ME 138/123 MUT1 (MUT1), ME 138/123 MUT3 (MUT3), or collagenase 63/78 WT (Coll) oligonucleotide probes for 10 min at room temperature prior to the addition of various amounts of chymotrypsin and incubation for an additional 2 min at room temperature. Protein binding was then analyzed using the gel retardation assay as described under "Experimental Procedures." In the representative autoradiograph shown, only the retarded complexes are visible and not the free probe, which was present in excess. A nonspecific (NS) protein-DNA interaction is indicated by an arrow, as are the AP-1 complex and a proteolytic fragment that specifically binds the Coll, MUT1, and MUT3 probes but not the ME WT probe.

The Wild-type ME AP-1 Motif Can Confer a Stimulatory Effect of Insulin and Phorbol Esters on the Expression of a Heterologous Xenopus Albumin-CAT Fusion Gene-The basic heterologous TKC-VI (Fig. 1) and TKCAT (Fig. 4) vectors both direct a high level of basal CAT expression, even without the ME AP-1 motif ligated into their respective polylinkers. In contrast, a heterologous Xenopus albumin-CAT fusion gene, designated XMB, has previously been shown to direct no basal CAT expression in HeLa cells (17). The ME and collagenase AP-1 motifs were therefore ligated into the polylinker of the XMB vector to determine whether, in this context, the functional characteristics of the two AP-1 motifs would be distinct. Fig. 6 shows that, in the context of the XMB vector, the ME AP-1 motif still confers a greater increase in basal fusion gene expression than the collagenase AP-1 motif.
However, in this context, both the ME and collagenase AP-1 motifs can mediate both a phorbol ester (Fig. 6A) and an insulin (Fig. 6B) response in HeLa cells. When ligated into the XMB polylinker, oligonucleotides containing mutations of the ME or collagenase AP-1 motifs (Table I), which abolish AP-1 binding (16,17), fail to confer basal reporter gene expression or an increase in expression in the presence of insulin or phorbol esters (Fig. 6). Thus, this result suggests that the inability of insulin to induce ME-CAT gene expression in HeLa cells, in contrast to H4IIE cells, was due to some cell type-specific feature relating to the specific context of the ME promoter, and not to a difference in the insulin signaling pathway in these two cell lines. Similarly, this result further suggests that the inability of phorbol esters to induce ME-CAT gene expression in HeLa cells was also due to the same issue of ME promoter context. Indeed, both insulin and phorbol esters induce AP-1 binding to both the collagenase and ME AP-1 motifs in HeLa cells (Fig. 7). In summary, the data in Figs. 4 and 6 suggest that the functional characteristics of the ME and collagenase AP-1 motifs, with respect to basal activation and insulin/phorbol ester responsiveness, are determined by both flanking sequence and promoter context.

The ME AP-1 Motif Can Mediate an Insulin and Phorbol Ester Response in the Context of the Collagenase Promoter-Because the multimerized ME AP-1 motif was able to mediate both an insulin and a phorbol ester response in the context of the heterologous XMB vector (Fig. 6), the molecular basis for the inability of insulin and phorbol esters to stimulate the activity of the native ME promoter in HeLa cells was further investigated. Fry and Farnham (30) recently reviewed various aspects of promoter context that are important in the regulation of gene transcription, one of which is the presence of accessory elements. We have previously shown that the stimulatory effects of insulin and phorbol esters on collagenase-CAT fusion gene expression are markedly enhanced by accessory elements in the collagenase promoter (17). Expression of a truncated collagenase fusion gene construct that contains the AP-1 motif but lacks these accessory elements was minimally induced by insulin and phorbol esters (17). We therefore speculated that the ME promoter may lack accessory elements that could enhance insulin and phorbol ester signaling through the ME AP-1 motif in HeLa cells. To indirectly address this potential role for the absence of accessory elements in the ME promoter, a collagenase-CAT fusion gene was generated in which the flanking sequence of the collagenase AP-1 motif was replaced with that of the ME AP-1 motif in the context of a collagenase promoter fragment with a 5' end point of -158. This fragment contains the accessory elements necessary for full induction of gene expression by insulin and phorbol esters (17). This strategy allowed us to ask whether the ME AP-1 motif could mediate an insulin and phorbol ester response if it were associated with accessory elements and located in the same context as the collagenase AP-1 motif. The effects of insulin and phorbol esters on the expression of this fusion gene, designated Coll 158:ME, were assessed by transient transfection of HeLa cells (Fig. 8).
One possible outcome of this experiment was that the ME AP-1 motif could have maximally activated basal collagenase-CAT fusion gene expression such that no effect of insulin and phorbol esters would be seen despite the presence of accessory elements. In fact, the data show that the presence of the ME AP-1 motif in the collagenase promoter actually led to a decrease in basal fusion gene expression (Fig. 8A), and in this context the ME AP-1 motif was able to confer a similar induction of fusion gene expression by insulin (Fig. 8B) and phorbol esters (Fig. 8C) to that obtained with the native collagenase AP-1 motif. Changing the orientation of the ME AP-1 motif in the context of the Coll 158:ME construct (Fig. 8), by switching the core sequence from TGAGTCA to TGACTCA, had no effect on basal expression or on the magnitude of the insulin and phorbol ester response (data not shown). When the accessory elements in the Coll 158 or Coll 158:ME fusion genes were deleted, the truncated collagenase promoter constructs, designated Coll 79 and Coll 79:ME, respectively, mediated a minimal induction of collagenase-CAT fusion gene expression by insulin and phorbol esters (Fig. 8). These results suggest that the absence of accessory elements in the ME promoter may partly account for the inability of insulin and phorbol esters to induce ME-CAT fusion gene expression in HeLa cells.

FIG. 6. The wild-type ME AP-1 motif can confer a stimulatory effect of insulin and phorbol esters on the expression of a heterologous Xenopus albumin-CAT fusion gene. HeLa cells were transiently co-transfected, as described under "Experimental Procedures," with various XMB fusion gene plasmids and an expression vector encoding β-galactosidase. The fusion gene plasmids represented either the basic XMB vector or constructs in which oligonucleotides representing either the wild-type (WT) or mutated (MUT) ME or collagenase (Coll) promoter sequence from -138 to -123 and -63 to -78, respectively, as shown in Table I, had been ligated into the HindIII site of the Xenopus albumin promoter in multiple (4 to 5) copies. Following transfection, cells were incubated for 20 h in serum-free medium in the absence (C) or presence of 100 nM PMA (P) or 100 nM insulin (I). Cells were then harvested, and CAT and β-galactosidase activity were assayed as previously described (16,17). The results are expressed as the ratio of CAT:β-galactosidase activity and represent the mean ± S.E. of six experiments, in which each construct was assayed in duplicate.

FIG. 7. Insulin and phorbol esters stimulate protein binding to both the collagenase and ME AP-1 motifs. Nuclear extracts were prepared from HeLa cells incubated for 5 h in serum-free medium (C) or serum-free medium supplemented with either 100 nM insulin (I) or 100 nM PMA (P). Protein binding to the labeled ME 138/123 and collagenase (Coll) 63/78 oligonucleotide probes was then analyzed using the gel retardation assay, as described under "Experimental Procedures." In the representative autoradiograph shown, only the retarded complexes are visible and not the free probe, which was present in excess. A nonspecific (NS) protein-DNA interaction and the AP-1 complex are indicated by the arrows.

FIG. 8. The ME AP-1 motif can mediate an insulin and phorbol ester response in the context of the collagenase promoter. HeLa cells were transiently co-transfected, as described under "Experimental Procedures," with an expression vector encoding β-galactosidase and either collagenase-CAT fusion genes with 5' deletion end points of -158 or -79, designated Coll 158 and Coll 79, respectively, or constructs, designated Coll 158:ME and Coll 79:ME, in which the collagenase AP-1 flanking sequence was replaced with that of the ME AP-1 motif within the context of the -158 or -79 end points, respectively. Following transfection, cells were incubated for 20 h in serum-free medium in the absence or presence of 100 nM PMA or 100 nM insulin. Cells were then harvested, and CAT and β-galactosidase activity were assayed as previously described (16,17). The ratio of CAT:β-galactosidase activity in control cells (panel A) and the relative ratio of CAT:β-galactosidase activity in insulin-treated cells (panel B) or phorbol ester-treated cells (panel C) versus control cells were then calculated. The mean induction of Coll 158 expression by insulin and phorbol ester was ~14- and 43-fold, respectively. The results are presented as a percentage relative to the Coll 158 fusion gene and represent the mean ± S.E. of three to seven experiments, using several independent preparations of each plasmid construct, in which each construct was assayed in duplicate.

DISCUSSION

The experiments in this article were designed to explore the molecular basis for: (i) the differential regulation of collagenase-1 and ME gene expression by insulin and phorbol esters in HeLa cells and (ii) the differential regulation of ME gene expression by insulin in H4IIE and HeLa cells. We hypothesize that the former is partly explained by the observation that the ME and collagenase AP-1 motifs are functionally distinct (16). Thus, AP-1 can bind the ME AP-1 motif, but not the collagenase AP-1 motif, in an activated state and, in a heterologous context, this precludes further activation by phorbol esters. This observation was surprising because both motifs share an identical core consensus sequence (Table I) and predominantly bind a heterodimer of Fra-2 and JunD (16) with similar affinities (Fig. 3). We show here that this binding of AP-1 to the ME AP-1 motif in an activated state is determined by the specific sequence flanking the core AP-1 motif (Figs. 2 and 4). Phorbol ester-insensitive AP-1 motifs have also been identified in the stromelysin (31), JE (32), and glutathione S-transferase P1-1 (33,34) promoters, and it has also been previously shown that the sequence flanking the core AP-1 motif can influence phorbol ester responsiveness (35,36). However, this has been attributed to changes in the affinity of AP-1 binding and/or the composition of the AP-1 complex (32,37). The ME promoter is therefore distinct in that neither of the latter parameters differs in comparison with the phorbol ester-sensitive collagenase AP-1 motif (Fig. 3) (16). Instead, the flanking sequence of the ME AP-1 motif appears to affect phorbol ester responsiveness by altering the conformation of AP-1 binding (Fig. 5). Thus, these studies on the ME AP-1 motif are consistent with the emerging realization that hormone response elements are not inert but can act as allosteric regulators by affecting the conformation of the factors they bind (38). The specific functional characteristics of the ME AP-1 motif are also affected by promoter context. Thus, when ligated to the heterologous XMB promoter, even though the ME AP-1 motif stimulates basal fusion gene expression, it does not do so sufficiently to prevent a further induction by phorbol esters and insulin (Fig. 6). Similarly, when switched with the collagenase AP-1 motif in the collagenase promoter, the ME AP-1 motif can again mediate both an insulin and a phorbol ester response in HeLa cells (Fig. 8). The specific context-dependent features of the XMB promoter that allow for insulin- and phorbol ester-dependent activation of the ME AP-1 motif are unclear. However, in the collagenase promoter, one critical context-dependent characteristic is the presence of accessory elements; the effects of insulin and phorbol esters are markedly reduced if these accessory elements are deleted (17) (Fig. 8). It is possible that the differential regulation of ME gene expression by insulin in H4IIE and HeLa cells may also, in part, reflect the importance of an accessory element.
We have previously shown that, in H4IIE cells, an accessory element, located between -180 and -152, enhances insulin signaling through the ME AP-1 motif (16). This region of the ME promoter contains overlapping binding sites for Egr-1 and Sp proteins (Table I); however, using nuclear extract prepared by the method of Shapiro et al. (28), only the insulin-induced binding of Egr-1 was detected, and we therefore proposed that Egr-1 was the accessory factor binding this element (16). Using nuclear extract prepared by the method of Andrews and Faller (39), modified by the incorporation of Nonidet P-40 to lyse cells and isolate nuclei (40), the insulin-induced binding of both Egr-1 and Sp factors to this accessory element can be demonstrated.2 Gel retardation assays reveal an inverse relationship between the abundance of insulin-induced Egr-1 and Sp proteins in H4IIE and HeLa cells, with Egr-1 more abundant than Sp proteins in the former.2 Therefore, we hypothesize that the selective regulation of ME gene expression in H4IIE and HeLa cells may be explained, at least in part, by the differential binding of these factors to the same accessory element in the ME promoter in the two cell types. Competition between Egr-1 and Sp proteins for overlapping binding sites is known to be important in the regulated expression of other genes (41). Barroso and Santisteban (18) have shown that Egr-1 competes for Sp1 binding in the ME promoter, but it remains to be determined whether Egr-1 or a specific Sp protein is the true accessory factor that enhances insulin signaling through AP-1. This is a complex question because interactions between various members of the AP-1 family and a wide variety of structurally unrelated transcription factors have been reported to contribute to the functional specificity of AP-1 (42), suggesting that the action of many of these accessory factors may be manifest in the absence of direct contact between the accessory proteins and AP-1, although such contacts do exist (43,44). Interestingly, Barroso and Santisteban (18) showed that overexpression of Egr-1 actually represses basal ME-CAT fusion gene expression in H-35 hepatoma cells; we have also observed Egr-1 repression of basal ME-CAT gene expression in H4IIE hepatoma cells but not in HeLa cells.2 The physiological significance of such an action of Egr-1 is unclear, given that insulin stimulates both ME and Egr-1 gene expression. However, these results may imply that it is the loss of Sp binding that explains why deletion of the ME promoter region located between -180 and -151 reduces the stimulation of ME-CAT fusion gene expression by insulin in H4IIE cells (16).
A further complication stems from studies on insulin-stimulated calmodulin (45,46) and apolipoprotein A-1 (47) gene expression, which suggest that Sp1 could act directly as an insulin response factor rather than just as an accessory factor that enhances insulin signaling through the AP-1 motif. Although experiments in rat H4IIE hepatoma cells have implicated the ME AP-1 motif as the target of insulin signaling (16), recent studies in mice have suggested that a SREBP may be involved in the stimulation of ME gene transcription by insulin in vivo. The SREBPs are unusual transcription factors that are released from the endoplasmic reticulum by proteolytic cleavage (48). ME gene expression is increased in mice overexpressing SREBP-1a, SREBP-1c, or SREBP-2 (49), whereas the induction of ME gene expression by high carbohydrate feeding, a manipulation associated with elevated insulin levels, is abolished in SREBP-1 (50) and SREBP-1c (51) knockout mice. Insulin selectively induces the expression of SREBP-1c (52) but, through the stimulation of the MAP kinase pathway, might also activate SREBP-1a and SREBP-2 (53). SREBP-1c has also been implicated in the induction of pyruvate kinase and fatty acid synthase gene expression by glucose (13), although this may represent an indirect effect of SREBP-1c on glucose flux, resulting from its stimulation of glucokinase gene expression (54). This potential connection between SREBP-1c and glucose-regulated gene expression is interesting because there is some controversy as to the exact relationship between insulin and glucose in the stimulation of ME gene expression (55-57). One report suggests that insulin has little or no direct effect but has a permissive action on the response to glucose (55). By contrast, other investigators have reported that insulin has a direct effect in the absence of glucose (56,57). Clearly, it will be of interest to delineate the relative contributions of insulin, glucose, AP-1, and SREBP in the regulation of ME gene expression in vivo.
Homogeneity in Surgical Series: Image Reporting to Improve Evidence

Good clinical practice guidelines are based on randomized controlled trials or clinical series; however, technical performance bias among surgical trials is under-assessed. The heterogeneity of technical performance within different treatment groups diminishes the level of evidence. Surgeon variability, reflecting different levels of experience and of technical performance even after certification, influences surgical outcomes, especially in complex procedures. Technical performance quality correlates with outcomes and costs and should be measured by image or video-photographic documentation of the surgeon's view field during the procedures. Such consecutive, completely documented, unedited observational data, in the form of intra-operative images together with a complete set of any radiological images, improve the homogeneity of a surgical series. Thereby, they might reflect reality and contribute towards making the changes necessary for evidence-based surgery.

A recent review of the effectiveness of ten orthopedic procedures [1] noted "that most of these procedures recommended by national guidelines and used by surgeons have insufficient readily available high-quality evidence on their clinical effectiveness, which is mainly because of a lack of definitive trials." In the absence of clinically meaningful evidence from high-quality trials, clinicians are obliged to follow the advice of the late David Sackett when discussing options for treatment with patients: "integrating individual clinical expertise with the best external clinical evidence from systematic research" [2]. This often relies on consensus statements or advisory guidelines from specific institutions or professional bodies, e.g., NHS England, Evidence-Based Interventions: Guidance for CCGs [3]. A question that follows from the conclusions of this otherwise excellent article is whether the essential reasons for this thought-provoking conclusion have been identified, from which reliable solutions can be derived. We offer some points for debate and discussion, with a potential way forward for this challenging problem. One obvious factor implicated, but rarely measured or assessed, in the variance within operative and non-operative treatment groups is the inter-operator variance in technical performance, whether of operative or non-operative treatment. This inevitably produces a technical performance bias (TPB), a fundamental problem for surgical trials. The variance occurs not only during surgeons' learning curves but also among certified professionals. The value and expectations of evidence-based medicine are undisputed [1,4-8]; however, for surgical trials, TPB limits the scientific adequacy of a trial as well as its applicability (generalizability) and acceptance [6,9,10]. Insufficient contemporaneous intra-operative performance documentation confounds a secondary analysis of the technical quality of the reported surgical procedures, as required by Item 5 of the CONSORT guidelines [7]. It is not easy to conceive how this could be achieved without documenting the technical details of the surgical procedure with still images or video clips of the operation field and all intra-operative images [11]. It is interesting that Blom et al.
[1] report that total knee replacement, a procedure highly dependent on the proper use of instrumentation, is one of only two procedures of the ten studied for which there is sufficient evidence to support its use in the specific indication of end-stage osteoarthritis of the knee. By removing variability in the surgeons' performance through instrumentation, including augmented or robotic assistance, the variance in the procedure outcome could be reduced, thus making a comparison with non-operative interventions more meaningful, measurable, and relevant (for instance, in cost-analysis comparisons of treatments). It will be interesting to see whether navigated ('robotic') knee replacement takes this further [12], making the individual surgeon's performance even less influential for the outcome [13]. The second procedure for which there is sufficient evidence of efficacy (carpal tunnel decompression, a procedure in which the essence of technical success is soft tissue handling, i.e., surgical competence) comprises fewer 'steps-to-success' to master, and variability may therefore be minimized between surgeons. The quality of the various technical aspects of surgery, such as the expertise demonstrated in soft tissue handling or the number, force, and amplitude of maneuvers needed for fracture reduction (essential for an assessment of performance and procedure outcome), is not documented in most studies and cannot, therefore, be considered. The homogeneity of the technical aspects of different treatment groups in a clinical study is indispensable in a skill-dependent field such as surgery [14,15] but is rarely reported. Current methods of documentation, which selectively record X-rays without an unedited, contemporaneous (e.g., video-photographic) representation of procedures, do not appear sufficient to guarantee the needed homogeneity. In addition, the complete documentation of all surgical procedures helps to build up supervised machine-learning models. The latest artificial intelligence (AI) technology assists in the automatic post-production of still images and short video clips of the key steps, for rapid use with high accuracy. An AI-based surgical platform has played a role in some specific endoscopically assisted procedures [16], and similar technology may apply to other surgeries in the future. Currently, the homogeneity of technical performance within different treatment groups is often sufficiently poor that the level of evidence deteriorates [17]. This has inevitably occurred in frequently cited randomized controlled trials (RCTs) such as the ProFHER study [18,19] regarding the treatment of proximal humerus fractures, the UK heel fracture trial [20], and the UK DRAFFT trial regarding the treatment of distal radius fractures [21]. The conclusions of such studies lead to recommendations that may not be directly relevant to the individual patient and are therefore of limited value in clinical practice [22]. Efforts are needed in surgery to produce evidence levels similar to those generated in internal medicine. Justifications for surgical decision-making, such as 'this works in my hands' or 'what my mentor taught me' [23], should be replaced by scientific evidence. In operative procedures in particular, the experience and preferences of surgeons, which reflect the surgeons' performance, must be stratified. The goal(s) of the treatment must be defined before the intervention, independent of the chosen treatment modality.
Subsequently, a surgical outcome is influenced by preoperative expectations [24,25] and surgical performance. The post-procedure assessment of whether the goals were met in the different treatment groups is indispensable: the decrement (including complications caused by suboptimal surgical performance) after the procedure matters at least as much to patients as the increment of functionality gained. The reasons for differences (decrements) between 'work as planned' and 'work as done' must be analyzed. Goals, such as an 'anatomical' reconstruction of a fracture rather than an approximation to it, are sometimes only reached by technically highly skilled surgeons, especially for infrequent pathologies. An unrecorded but poor performance from non-specialized surgeons with widely differing experience levels might lead to poorer outcomes and failure to attain the desired goals [26-28]. TPB compounds the problem of 'group inhomogeneity' inherent to many classifications of disease used in such trials: inconclusive results are almost inevitable. Clinical trials reported without contemporaneous imaging data, including video-photographic documentation that would permit an independent retrospective evaluation of both group homogeneity (of the classifications used, patients' characteristics, etc.) and technical performance quality, lose scientific value. Technical performance quality is measurable and correlates with outcomes and costs [14,29] in cardiac, visceral, and video-assisted surgery studies. It is difficult to imagine that such correlations should not be valid for other fields of surgery if the technical metrics are adapted. The performance-outcome effect might increase with the complexity of the procedure: discussions could then arise about what is technically straightforward and what is not, and at what level of expertise a surgeon must be to accomplish a particular procedure. From one surgeon to another, a critical variability exists in soft tissue handling and in the sequence of intricate actions needed to reach articular congruity. This produces an inevitable and undesired inhomogeneity. The inherent heterogeneity of complex interventions [17] is well known; nevertheless, surgical RCTs seldom consider potentially different quality levels of the technical performance [30]. This is relevant to RCTs throughout medicine, but in surgery, doctor (surgeon)-dependent factors are far more critical. Defining necessary and homogeneous performance quality factors can therefore improve outcomes. The absence of standards of performance assessment for every surgical specialty cannot be a reason not to initiate an effort to establish them. Intra-operative procedural documentation will be needed to determine a 'performance gap': the difference between a high and a low level of performance of a specific technical act. Quality levels can be defined on the basis of complete intra-operative image documentation [14]. This might comprise a rating of a specific procedure step or of the entire procedure; surgical time-to-completion does not necessarily reflect either expertise or accuracy but is often used as a surrogate for these performance dimensions. Such performance assessments are still to be clearly defined, but all will likely be image-based [31]. To assume that a defined written protocol guarantees that all procedures follow a uniform sequence of actions according to the protocol is illusory.
This is particularly true in trauma due to the essential variations from one case to another, which are difficult to depict in a classification. In one attempt to address this lack of standards, the ICUC working group [32] has developed a concept for complete and detailed image-based reporting, including unedited, contemporaneous, and complete photo-documentation of entire procedures. Such documentation has the potential to overcome the previously mentioned TPB, as it allows secondary, retrospective, and independent analysis [32]. The completeness of the record also provides significant help for learning by supplying images of technical details, and it defines the value of the initiative: all critical or key steps and potential shortcomings are included [6,33]. The evidence-based justification of technical practices based on RCTs in (orthopedic) surgery is a laudable goal but equally challenging to realize. There are relevant reasons for this reality. First, the standardization of the key steps of any surgical procedure is not only difficult, especially in multi-center trials, but also insufficient if no agreed metrics for secondary analysis and comparison exist. Second, technical performance bias or inhomogeneity (within study groups containing very different elements, or in the classifications applied to such groups) is the basis of imprecise or even incorrect conclusions, which therefore 'permit' a reversion to less evidence-based medicine. Finally, RCT data represent 'work as planned' (according to a research protocol); the attainment of 'work as planned' (the ideal outcome) rather than 'work as done' (the actual outcome) is possibly realized by only a minority of surgeons and is not representative of what most surgeons do in their daily practice. Consecutive, completely documented, unedited observational data might reflect reality more precisely while fulfilling the requirements of the Cochrane Collaboration [11].

Consequences and Conclusions

Transparent (unedited) intraoperative image data, allowing a retrospective analysis, are indispensable to avoid technical performance bias and to assure the homogeneity of treatment groups in surgical trials. Complete, continuous clinical series can represent 'real world data' better than RCTs if they avoid these biases. The incidence of inconclusive results, frequent in surgical RCTs, could diminish. Following the ICUC concept of complete intra-operative image documentation of surgical procedures, we can obtain data allowing for a retrospective analysis. This would contribute to the changes necessary for a move toward evidence-based surgery (EBS).
Immunoassay Urine Drug Testing among Patients Receiving Opioids at a Safety-Net Palliative Medicine Clinic

Simple Summary: A urine drug test (UDT) is often used in the treatment of cancer pain to monitor compliance with opioid treatment. Two types of UDT are commonly used for this purpose: the immunoassay test and the mass spectrometry method. Only a few studies have examined the use of immunoassay UDT for cancer patients in palliative care clinics. In this study, we examined the frequency of immunoassay UDT abnormalities and the factors associated with aberrant findings at a safety-net hospital palliative medicine clinic. Electronic medical records of 913 patients were reviewed. We found that 27% had aberrant UDT results; 35% of these were positive for cocaine. Non-Hispanic White race, history of illicit drug use, and history of marijuana use were associated with an aberrant finding. Despite the limitations of immunoassay UDT, it could detect aberrant drug-taking behaviors in a significant number of patients. These findings support the utility of immunoassay UDT in clinical settings with fewer resources.

Abstract: Background: Few studies have examined the use of immunoassay urine drug testing of cancer patients in palliative care clinics. Objectives: We examined the frequency of immunoassay urine drug test (UDT) abnormalities and the factors associated with aberrancy at a safety-net hospital palliative medicine clinic. Methods: A retrospective review of the electronic medical records of consecutive eligible patients seen at the outpatient palliative medicine clinic in a resource-limited safety-net hospital system was conducted between 1 September 2015 and 31 December 2020. We collected longitudinal data on patient demographics, UDT findings, and potential predictors of aberrant results. Results: Of the 913 patients in the study, 500 (55%) underwent UDT testing, with 455 (50%) having the testing within the first three visits. Among those tested within the first three visits, 125 (27%) had aberrant UDT results; 44 (35%) of these 125 patients were positive for cocaine. In a multivariable regression model analysis of predictors for an aberrant UDT within the first three visits, non-Hispanic White race (odds ratio (OR) = 2.13; 95% confidence interval (CI): 1.03-4.38; p = 0.04), history of illicit drug use (OR = 3.57; CI: 1.78-7.13; p < 0.001), and history of marijuana use (OR = 7.05; CI: 3.85-12.91; p < 0.001) were independent predictors of an aberrant UDT finding. Conclusion: Despite the limitations of immunoassay UDT, it was able to detect aberrant drug-taking behaviors in a significant number of patients seen at a safety-net hospital palliative care clinic, including cocaine use. These findings support universal UDT monitoring and the utility of immunoassay-based UDT in resource-limited settings.
Introduction

Patients treated with opioids for cancer pain in palliative care outpatient clinics may be at a high risk for nonmedical opioid use (NMOU) [1] and substance use disorder. NMOU [2] refers to misuse of opioids to self-treat non-pain symptoms, concurrent use of illicit drugs, diversion to unintended users, and varying degrees of opioid use disorder. It is also characterized by behaviors such as excessive, unjustifiable use of opioids or self-escalation of opioid dosage. NMOU is associated with a number of negative outcomes for patients and others in the community, including increased morbidity, opioid-related overdose death, and involvement in illegal activities [3]. Substance use disorders have been associated with social instability [4,5] and poor symptom control [6], and may potentially contribute to poor patient adherence to cancer treatments. Cancer patients with substance use disorders have two potentially fatal and disabling conditions, both of which require the attention of clinicians [7-9]. It is necessary to effectively screen for substance use disorder and monitor opioid use in palliative care clinics in order to ensure early identification and management of such complications. This has become particularly relevant in palliative oncology settings because, with the early integration of the palliative care model into oncologic care [10,11], palliative care clinicians are caring for patients earlier in the disease trajectory and are therefore encountering an increasing number of patients with chronic pain receiving opioids. Clinical evidence suggests that one in five of such patients might be at risk for NMOU [1,2,12-14].

Prescribers of opioids have long been required by federal and state law to use caution and ensure that the medications are prescribed and used appropriately. This includes careful screening and monitoring of patients at risk for NMOU, as well as timely identification of those who are actively engaging in NMOU behaviors and substance misuse [15,16]. Urine drug testing for drugs of abuse, and for the presence or absence of the prescribed opioids, is a risk assessment measure often employed in the treatment of chronic cancer and non-cancer pain [12,17]. There are two types of urine drug test (UDT) commonly employed for this purpose: the immunoassay test, and the more expensive, specific, and sensitive mass spectrometry methods, which may be used for initial screening or to confirm a positive result on the UDT. Due to the expense of mass spectrometry testing, it may not be affordable in resource-limited clinics. It also takes longer to receive the results of mass spectrometry testing, and clinical treatment decisions may have to rely on the UDT in some circumstances.
A majority of studies on UDT have reported on the use of mass spectrometry; few studies have examined how the use of immunoassay UDT could inform clinical practice. This paper reports on the results of a study of immunoassay UDT conducted in an ambulatory palliative medicine clinic located in a resource-limited safety-net county hospital caring for predominantly indigent and uninsured patients. The objective of this study was to examine the frequency of UDT abnormalities found with the immunoassay test. We also examined the patient characteristics and factors associated with the UDT results that were considered aberrant findings by the clinical team. Successful examination of UDT abnormality rates using the immunoassay test will underscore its significance and utility in routine opioid therapy, especially in resource-limited settings where this test might be the only available or most viable option.

Study Participants and Procedure

We conducted a retrospective review of the electronic medical records of consecutive eligible patients seen at the outpatient palliative medicine clinic at Lyndon B. Johnson General Hospital (LBJ) in Houston, Texas, between 1 September 2015 and 31 December 2020. LBJ is a safety-net hospital that serves predominantly low-income and uninsured patients. Approximately 85% of them are either uninsured or underinsured. A significant proportion of its revenue is generated from Medicaid Supplemental Programs. In Fiscal Year 2021, it provided over USD 720 million in charity care [18]. Patients were eligible for the study if they were 18 years of age or older, had a diagnosis of cancer (with or without active disease), and were receiving opioids for cancer-related pain at any time during the study period. The study was approved by the institutional review board of UTHealth and Harris Health Systems.

Data Collection

Patients' baseline demographic and clinical characteristics were obtained within the first three clinic visits. These included patient age; sex; race and ethnicity; marital status; cancer type; and cancer stage. Also obtained within the first three clinic visits were pertinent risk factors for nonmedical opioid use such as history of illicit drug use; history of tobacco use; history of alcohol use; history of depression; history of bipolar disorder; history of schizophrenia; family history of illicit drug use; personal history of criminal activity (other than marijuana use); and contact with persons involved in criminal activity (other than marijuana use). Information regarding their opioid intake at the time of urine testing was obtained to assist with determining the morphine equivalent daily dose (MEDD) and facilitating interpretation of the UDT results. Weekly meetings were held among the clinician investigators to ensure uniformity in the data collection process. Efforts were made during the data collection process to maintain the confidentiality and privacy of study subjects in view of the sensitive nature of the health information obtained.
Clinic Process and Instruments

As part of the standard procedure in the clinic, patients receiving chronic opioid therapy were screened using risk assessment questions and the state prescription drug monitoring program (PDMP) database. Prior to opioid initiation, clinicians were encouraged to ask patients to complete a written pain treatment agreement and provide verbal consent. Patients perceived to be at a high risk for NMOU based on the risk assessment tools and clinical interviews were monitored more closely on an ongoing basis, including close observation of certain behavioral patterns suggestive of NMOU. Clinicians were encouraged and reminded to routinely obtain a baseline UDT within the first three clinic visits for every patient receiving opioids. If the clinician believed that the patient was at an elevated risk of NMOU, risk mitigation measures were implemented, such as increasing the frequency of visits to the clinic; more cautious and limited opioid prescription; more frequent urine drug testing; intensive counselling; and referral to psychology and psychiatry as available.

The Urine Drug Test

The specific UDT reagents used in this study were manufactured by either Siemens Vista or Beckman Coulter. These were immunoassay tests designed to screen for the presence of opiates, amphetamine, cocaine, phencyclidine, benzodiazepines, barbiturates, and cannabinoids. The UDT is based on the reaction of the drug being tested for (the analyte) with a reagent that binds to it. The binding reagents may react with other substances in the urine besides the analyte, causing false positive test results. This may happen when the reagent used to detect the presence of benzodiazepines registers a positive result when the patient consumes, for instance, sertraline instead of benzodiazepines. The other substances in the urine that cause false positive results on the UDT may vary by the manufacturer of the reagent used for the test [19]. The reagent used to detect opiates will bind to morphine or codeine but may fail to bind to synthetic or semi-synthetic opioids, because of their differences in chemical structure, thereby leading to false negative results. Because of the potential for false positive and false negative results, confirmatory testing with mass spectrometry is often used along with the immunoassay UDT. Confirmatory testing was not available in this clinic during the study period, rendering the urine testing results presumptive rather than conclusive, except for positive cocaine results, which were considered conclusive [20]. Because of the limitations of the type of UDT used, if a result indicated the consumption of an unauthorized or illicit substance, or a failure to detect a drug expected to be present, a review of the record and a conversation with the patient were used by the clinician to determine whether or not the UDT result reflected NMOU behavior. False positive and false negative UDT result information was ultimately discarded and not used to guide further therapeutic decisions for patients in the clinic. For this study, aberrant results were determined based on any of the following: unexpected presence of unprescribed opioids, unexpected absence of prescribed opioids, or presence of illicit drugs in the urine.
The prescribing clinicians consulted the literature, which described potential false positive or false negative results that may be encountered using the UDT [19,21,22]. For example, consumption of substances such as methylphenidate, trazodone, tyramine, labetalol, propranolol, bupropion, ephedrine, and pseudoephedrine may all lead to a positive reading for an amphetamine [23]. If the UDT was positive for benzodiazepines, consumption of sertraline could be the cause of a false positive. If the UDT was positive for cannabinoids, the consumption of dronabinol, non-steroidal anti-inflammatory drugs, or proton pump inhibitors might be the reason. A positive result on the UDT for phencyclidine might result from the consumption of dextromethorphan, diphenhydramine, or tramadol. If the UDT was positive for a barbiturate, the use of primidone, ibuprofen, or naproxen could be the reason [21]. However, the literature indicates that a positive result on the UDT for cocaine reliably reflects the consumption of cocaine, crack, coca leaf tea, or other cocaine-containing products. Thus, when the UDT was positive for cocaine, the urine test was always deemed aberrant. On the other hand, false negative results usually occur if a sample has a low drug concentration or the test has a relatively high cut-off calibration [17]. Most immunoassays can only recognize classes of drugs (class assays) and are unable to distinguish between drugs in the same class. They also miss compounds such as oxycodone and synthetic opioids such as fentanyl and methadone [24]. All of these can lead to false negative results. Details of substances that can potentially result in false positive and false negative results during clinical urine drug testing can be found elsewhere [23].

In order to minimize the impact on our interpretation of UDT results of potential patient dilution or substitution of the samples submitted, a urinalysis, or a urine creatinine, was often ordered along with the UDT. A urine pH of 3-11, a specific gravity of 1.002-1.020, or a creatinine of ≥5 mg/dL indicates an unadulterated urine sample. Collection of the samples was not observed, so use of another person's urine was also possible.
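To make the sample-validity criteria above concrete, the following is a minimal sketch in Python of an adulteration check. The function and field names are hypothetical (not from any clinical system), and the sketch applies the three thresholds conjunctively, which is a conservative reading of the criteria listed above.

```python
# Illustrative sketch only: encodes the sample-validity thresholds described
# above (pH 3-11, specific gravity 1.002-1.020, creatinine >= 5 mg/dL).
from dataclasses import dataclass

@dataclass
class UrineSample:
    ph: float
    specific_gravity: float
    creatinine_mg_dl: float

def is_unadulterated(sample: UrineSample) -> bool:
    """Return True if the sample meets all three validity criteria."""
    return (
        3.0 <= sample.ph <= 11.0
        and 1.002 <= sample.specific_gravity <= 1.020
        and sample.creatinine_mg_dl >= 5.0
    )

# Example: a sample with typical physiological values passes the check.
print(is_unadulterated(UrineSample(ph=6.0, specific_gravity=1.010,
                                   creatinine_mg_dl=80.0)))  # True
```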
Statistical Analysis

Descriptive statistics, such as frequency and percentage for categorical data and median with inter-quartile range (IQR) for continuous variables, were used to summarize the results. The chi-squared test or Fisher's exact test was used to assess the association between categorical variables and UDT findings. The t-test was used to assess the association between continuous variables and UDT findings. Univariate and multivariable logistic regression analyses were used to explore the demographic and clinical factors associated with aberrant UDT findings. Aberrant UDT (yes, no) was the main outcome. The independent variables evaluated were age, sex (male, female), race and ethnicity (non-Hispanic White, non-Hispanic Black, non-Hispanic and other race, and Hispanic and any race), marital status (married, single), cancer type (gastrointestinal, respiratory, gynecological, genitourinary, breast, head and neck, heme, and other), cancer stage (locally advanced, localized, recurrent, advanced, first line, and metastatic), history of illicit drug use (yes, no), history of marijuana use (yes, no), history of tobacco use (yes, no), history of alcohol use (yes, no), history of depression (yes, no), history of bipolar disorder (yes, no), history of schizophrenia (yes, no), family history of illicit drug use (yes, no), personal history of criminal activity (yes, no), and contact with persons involved in criminal activity (yes, no). A p-value cut-off of <0.05 was considered statistically significant. The data were analyzed with STATA software, version 17 (Stata Corporation, College Station, TX, USA).

Results

Table 1 provides information on the demographic and clinical characteristics of consecutive study patients seen at the palliative care clinic and of those who underwent a baseline UDT within the first three clinic visits. Of 913 study patients seen in the clinic, 455 (50%) underwent a UDT within the first three visits, and of those, 125 (27%) were found to have aberrant UDT results. The median age of patients seen in the clinic was 55 years. The majority were female (480, 53%), Hispanic of any race (425, 47%), and single (610, 67%). Half of the patients in the study did not receive a UDT within the first three visits.

Table 2 shows the frequency and percentage of patients who were seen in the clinic, underwent the UDT, and had aberrant UDT results during the clinic visits. The majority of patients seen in the clinic underwent at least one UDT (455 (50%) during the first three visits and 500 (55%) during the entire study period). The UDT was most frequently administered during the initial visit. Of the patients who had a UDT, 91% had the test within the first three visits. Approximately 27% and 29% of the tests were deemed aberrant within the first three clinic visits and during the entire study period, respectively. Aberrant results triggered a record review and a conversation with the patient. None of the aberrant UDT results were caused by cross-reaction of prescribed or over-the-counter medications.
Figure 1 depicts the types and distribution of illicit substances present in the UDT of the patients tested during the study period. Of the 125 patients who had aberrant urine samples in the first three clinic visits, the following numbers had positive results for the common illicit drugs screened for: amphetamine (9, 7%); barbiturate (2, 2%); benzodiazepines (15, 12%); cannabinoids (87, 70%); cocaine (44, 35%); and PCP (3, 2%). The patients who had cocaine in the urine constituted 9.7% of all patients who had urine tested during the first three visits.

In a multivariable analysis of factors associated with the ordering of a UDT (Table 3), the odds of ordering a UDT within the first three visits to the clinic decreased by 3% with each 1-year increase in age (OR: 0.97; 95% CI: 0.96, 0.99). The odds of ordering a UDT among non-Hispanic Whites were 2.02 times (95% CI: 1.37, 2.98), and among non-Hispanic Blacks 1.86 times (95% CI: 1.30, 2.65), those of Hispanics. Moreover, patients with head and neck cancer had 2.18 (95% CI: 1.25, 3.79) times the odds of having a test ordered compared with those with a gastrointestinal cancer. Patients with a locally advanced cancer stage had 58% (OR: 1.58; 95% CI: 1.10, 2.25) higher odds of undergoing a test than those with metastatic cancer. Additionally, patients with a prior history of illicit drug use had 1.81 times (95% CI: 1.08, 3.04), and those with a history of marijuana use 1.65 times (95% CI: 1.09, 2.50), the odds of undergoing a UDT within the first three visits. Also, non-Hispanic Whites had about twice (OR: 2.13; 95% CI: 1.03, 4.38) the odds of aberrant results compared with Hispanics.
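As an illustration of the modeling approach reported above, the following is a minimal sketch of a multivariable logistic regression yielding odds ratios and 95% confidence intervals. The column names are hypothetical and the data are synthetic; the study itself used STATA version 17, not Python, and the numbers produced here do not reproduce the study's estimates.

```python
# A minimal sketch of multivariable logistic regression for an aberrant-UDT
# outcome, assuming a DataFrame with a 0/1 outcome and a few predictors drawn
# from those listed in the Statistical Analysis section. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 455  # number of patients tested within the first three visits
df = pd.DataFrame({
    "aberrant_udt": rng.integers(0, 2, n),
    "age": rng.normal(55, 12, n),
    "illicit_drug_history": rng.integers(0, 2, n),
    "marijuana_history": rng.integers(0, 2, n),
})

model = smf.logit(
    "aberrant_udt ~ age + illicit_drug_history + marijuana_history", data=df
).fit(disp=False)

# Odds ratios with 95% confidence intervals are exp(coefficient estimates).
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```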
Discussion

Half of the 913 patients with cancer pain included in this study underwent a UDT during the first three clinic visits; of these, 125 (27%) had aberrant UDT results. In total, 44 (35%) of these 125 patients had UDT results positive for cocaine. The number of patients found to have aberrant UDT results was high. Studies have found similarly abnormal urine testing results in other palliative medicine clinics [2,25]. Overall, our study shows that patients with cancer who are on opioids have a significant risk for NMOU that can be detected with immunoassay UDT in routine clinical practice [25,26]. Cancer patients, like the general population, may have pre-existing drug-related issues. This, coupled with the increased exposure to opioids for cancer pain management, increases the risk for NMOU [3,27]. It is important to note that the use of UDT in the palliative care setting might be of more relevance among ambulatory palliative care patients with relatively longer survival than among those close to the end of life who have a very short life expectancy.

The rate of cocaine use in our population was higher than in some other studies involving populations with different socioeconomic factors [13]. In one study conducted at another palliative care clinic whose patients have a different demographic mix, with a high percentage of insured and racial-majority patients, 8.2% of the patients who underwent risk-based urine drug testing had cocaine in the urine [12], and only 1% of patients who were randomly selected for testing irrespective of risk tested positive for cocaine [28]. Our study was conducted in a safety-net palliative medicine clinic with predominantly ethnic and racial minorities, where most of the patients were uninsured or underinsured and had fewer resources. Of the 913 consecutive patients in our study, 47% were Hispanic of any race, 28% were Black non-Hispanic, and 3% were of other races and non-Hispanic. The percentage of patients testing positive for cocaine in the urine was higher than in the risk-based testing protocol in the study mentioned above and would most likely have been even higher in our clinic if the testing had been directed only toward patients with a high perceived risk of NMOU. Future studies are needed to ascertain whether less favorable social determinants of health are key predictors of NMOU and OUD. In particular, race or ethnicity has not been found to be a risk factor for opioid misuse, although data in both palliative care and non-palliative care populations have revealed disparities in UDT ordering disproportionately affecting minoritized patients [29-31].
The high number of patients testing positive for cocaine is significant because concurrent use of cocaine or other illicit drugs and opioids can result in increased morbidity and mortality. Since substance use of one drug is often accompanied by misuse of other substances [32], cocaine use disorder might be indicative of problematic use of other substances as well as opioids. Cocaine is a highly addictive substance that can cause multiple serious health risks such as intravenous-drug-use-related infections, cognitive deficits, and overdose deaths, as well as long-term cardiovascular, respiratory, gastrointestinal, and neurovascular complications [33,34]. More than 500,000 people sought medical attention in an emergency room (ER) for cocaine-related complications in 2011, accounting for over 40% of all ER visits involving illicit drug use [34]. These issues, coupled with the potential dangers of aberrant opioid use and complications from cancer and its treatment, pose significant problems for cancer patients with comorbid cocaine use disorder and NMOU. Early identification of patients who actively engage in cocaine use presents an opportunity for clinicians to take the necessary steps to avert potential harm to their patients and to make the appropriate referrals for these patients to receive specialist care. This underscores the important role that immunoassay UDT might have in less-resourced populations where more expensive UDTs might not be available. It has been found that patients with substance use disorders who are undergoing cancer therapies and cancer symptom management face more challenges and may have worse outcomes [7,8,35]. Further studies are needed to investigate the impact of cocaine and other substance use disorders on adherence and outcomes among patients undergoing anticancer therapies.

The immunoassay test, although limited, was useful in identifying a significant number of patients consuming illegal and unauthorized substances who required greater vigilance and assistance. The UDT, along with a review of the patient record and a conversation with the patient, can be of value even though the results of the UDT are presumptive except when cocaine is detected. The findings support the notion that this type of UDT may be useful as a routine risk mitigation tool in patients with chronic cancer pain. Entities such as the Centers for Disease Control explicitly excluded cancer-related chronic pain from their guidelines [36]. However, it is becoming more evident from multiple studies that urine drug testing is useful in chronic cancer-related pain. Also, universal screening of all patients for substance use disorder and NMOU with UDT in a palliative care clinic [35] would possibly reduce the potential negative impact of selective UDT testing on the physician-patient relationship. The patient is likely to see it as part of the clinic's routine policy and not feel targeted if the UDT is required of all patients. The immunoassay UDT is inexpensive enough to use on entire clinic populations. The 2019 Medicare Clinical Laboratory Fee Schedule indicates that the reimbursement rate for a 9-panel immunoassay drug test is USD 65, while 1-7-panel confirmatory drug testing is USD 114 and 8-14-panel confirmatory definitive testing is USD 157 [37]. The relatively lower cost of the immunoassay test makes it more feasible and affordable than the more expensive gas chromatography mass spectrometry test in our patient population, who are likely to experience significant financial constraints.
Limitations

One limitation of this study was that the data were collected retrospectively, thereby limiting our ability to obtain detailed real-time information during the sample collection process. Future studies should utilize a prospective study design to avoid this potential limitation. Moreover, the study was conducted at a single center, so the results are not easily generalizable to other centers or patient populations, particularly those with different socio-economic and demographic characteristics. The UDT was not obtained for every patient prescribed opioid medications, although clinicians were encouraged to obtain the UDT within the first three visits regardless of perceived risk of NMOU. It is possible that some patients were, in effect, selected to undergo the test based on their risk profile or the clinician's suspicion of NMOU behavior, while others were tested regardless of perceived risk. This might have increased the potential for selection bias and is a common limitation of multiple UDT studies in palliative care settings [1,12,38,39]. Lastly, the UDT in this study utilized the immunoassay technique, which has the potential for false positive results and is limited in the opioids it may detect. It was also unable to detect compounds such as oxycodone and synthetic opioids such as fentanyl and methadone. All of these could have potentially resulted in false negative results. Ordering physicians often had to make further investigations to determine the aberrancy of a result. These inherent limitations of the immunoassay test could lead to an under-estimation and misrepresentation of the overall frequency of NMOU detected by abnormal UDT results in our study population.

Conclusions

Among patients receiving opioids for cancer pain at an ambulatory safety-net palliative medicine clinic who underwent immunoassay UDT, 27% and 29% of results were deemed aberrant within the first three clinic visits and during the entire study period, respectively. A significant number of patients tested positive for cocaine. The findings suggest that the immunoassay UDT might have a role in opioid therapy among patients seen in under-resourced clinical settings, especially when coupled with a review of the patient record and a conversation with the patient. Future studies are needed to further examine the clinical effectiveness and benefits of immunoassay UDT in different clinical settings and to justify policy changes related to its utility in patients with cancer.

Figure 1. Frequency of illicit substances present among patients with an aberrant urine drug test at consultation (n = 100), within the first 3 clinic visits (n = 125), and for all visits (n = 144).

Table 1. Demographic and clinical characteristics of all patients seen at the palliative care clinic and those who underwent UDT within the first 3 clinic visits (n = 913).

Table 2. Frequency and percentage of patients who were seen, completed UDT, and had aberrant UDT findings during various clinic visits.

Table 3. Multivariable regression analysis of factors associated with urine drug test ordering and aberrant UDT findings within the first three clinic visits. (a) Reference category for each of the NMOU risk factors was no history of the individual risk factor.
The Use of Artificial Intelligence for Orthopedic Surgical Backlogs Such as the One Following the COVID-19 Pandemic

Abstract
➤ The COVID-19 pandemic created a persistent surgical backlog in elective orthopedic surgeries.
➤ Artificial intelligence (AI) uses computer algorithms to solve problems and has potential as a powerful tool in health care.
➤ AI can help improve current and future orthopedic backlogs through enhancing surgical schedules, optimizing preoperative planning, and predicting postsurgical outcomes.
➤ AI may help manage existing waitlists and increase efficiency in orthopedic workflows.

Introduction: COVID-19 and the Surgical Backlog

COVID-19 was pronounced a global pandemic in March 2020, leading to significant stress on the US health care system [1]. Resources were reallocated to treat infected patients and prevent further infection, causing a widespread limitation of elective surgical services [2]. Every surgical field was affected, and over 28 million surgeries were estimated to be delayed or canceled worldwide secondary to the pandemic [3,4]. Elective surgeries were drastically reduced, leading to numerous potential problems in the postpandemic health care landscape [4].

Orthopedic surgery was one of the most impacted surgical specialties, with over 80% of procedures canceled during the initial 3 months of the pandemic [3]. Although surgical volumes returned to prepandemic baselines [5], delays in surgical care caused distress for patients and health care systems [6]. Jain et al. studied arthroplasty and spinal fusion cases in 2020, estimating 7 to 16 months to return to 90% of prepandemic surgical volumes, with over 1 million cases awaiting completion after episodic stoppages of elective surgery during 2 years of the pandemic [7]. Another 2020 study modeled a one-time, 3-month shutdown and predicted that, in the best-case scenario, the health care system would require 16 months to clear the backlog of total knee arthroplasties (TKA) alone, with some of the approximately 300,000 patients deferred during the pandemic waiting over 6 months for a procedure, which also leads to extended wait times for new surgical patients [6]. These quantitative models, while interesting, may not be generalizable to the entirety of orthopedics. More recently, a study using data through April 2021 found a backlog of 26,412 knee procedures and 26,412 shoulder procedures, a number that steadily increased despite the return to prepandemic surgical volumes [8]. With demand for procedures such as arthroplasties projected to increase over the coming years [9], these backlogs could worsen if not appropriately and quickly addressed. It is difficult for the current health care system and surgeons to increase surgical volume for delayed patients while also keeping pace with the needs of new patients.

The postpandemic backlog may also potentially cause significant health care distress. Cisternas et al.
reviewed studies discussing potential consequences of increased surgical wait time for orthopedic patients, which include poorer postoperative outcomes [10], potential for opiate dependence [11], and worsening of functional abilities and other comorbidities [6,12,13]. Patients whose orthopedic procedures were canceled reported increased pain, analgesic use, and psychological distress [14]. In the United States, the estimated loss of net income for hospitals was between $4 and $5.4 billion per month [15], and surgical providers have reported increased stress [16]. The backlog strains the health care system, from both a provider and a patient perspective, and will continue to do so until it can be addressed safely and efficiently.

The Rise of Artificial Intelligence in Orthopedic Surgery

Artificial intelligence (AI) involves computer algorithms that solve problems using pattern recognition [17]. Various subtypes exist. Machine learning (ML) allows computers to recognize patterns in data sets and can be either guided by human labeling and feedback (supervised) or permitted to repeatedly find patterns on their own (unsupervised) [18,19]. Within ML, deep learning (DL) is a more complex approach using many layers of algorithms, called artificial neural networks (ANNs), with many times the parameters of conventional ML [17].

Publications discussing AI and its applications in orthopedics have sharply increased recently [18]. Predicted uses include radiologic advances, data extraction from medical records, improved resident training, and algorithms predicting patient clinical courses [20]. Although likely years away, AI may be used with robotics to improve the efficacy of surgery itself [21]. As outlined by Farhadi et al., AI may also afford health care systems increased efficiency, including improved workflow, postoperative complication prediction, and increased intraoperative precision [18].

This review discusses how these and other applications of AI might be leveraged to ease the surgical backlog of orthopedic procedures caused by COVID-19. Applying this technology may also provide new workflows to help surgeons accommodate the increasing need for orthopedic procedures in our aging population.

Improved Surgical Scheduling

Numerous studies have sought to optimize surgical scheduling and decrease operating room (OR) delays [22-25]. Financially, hospitals desire improved efficiency because the OR generates substantial revenue. A large 2023 study found that over 60% of elective surgeries were scheduled for longer durations than needed, with a median overestimation of 29 minutes. In addition, 37% of surgeries were scheduled for shorter durations than needed, with a median underestimation of 30 minutes [26]. Both affect workflow, with overestimation leading to inefficient OR usage and underestimation causing case cancelations and rescheduling.

Inputting procedural characteristics along with patient and surgeon profiles into AI could allow more accurate predictions of surgical operating times. Zaribafzadeh et al.
developed an ML program using this technique [27]. They analyzed a large set of surgical case data to develop historical norms using numerous variables, including age, sex, surgeon-predicted case length, and the relative value units of the case. They then developed a 3-step similarity cascade to compare new cases with existing data and predict future operating times. The ML model was used conjunctively by surgical schedulers and produced 4.3% fewer underpredicted cases and a 3.4% increase in cases scheduled within 20% of the actual length, with just a 1% increase in overpredicted cases. While the improvements are small for now, AI's effectiveness in scheduling may increase with time.

Other studies have similarly used ML to generate improved surgical scheduling [28-31]. Jiao et al. found that their ANNs produced lower timing error than a Bayesian approach, an established statistical method of making updated decisions based on new information [31]. Another study created 2 ML programs and found that the surgeon-specific scheduler was more accurate than the specialty-specific scheduler, indicating that the individual surgeon may be more important in estimating case time than the specialty grouping [28]. Although such surgeon-to-surgeon variability is likely well known in the surgical community, AI may provide tools to better analyze it and correctly adjust scheduling to improve OR utilization. Although many of these ML programs are still in their infancy, their precision may continue to improve as they are provided with and respond to more data.

With both TKA and total hip arthroplasty (THA) having been removed from the Medicare inpatient-only list, there is increased focus on day-surgery arthroplasty procedures at ambulatory surgical centers. Appropriate selection of patients for outpatient arthroplasty surgery could minimize complications and increase case volume. Lopez et al. developed an ML model for selecting patients based on numerous modifiable and nonmodifiable factors and achieved relatively high predictive and discriminative value for same-day discharge [32]. As an increasing proportion of surgeries are now performed in outpatient settings, identifying patients well-suited for same-day discharge could ensure more efficient scheduling of surgeries.

The practical concern is whether this will truly allow additional case volume. Improved OR turnover time is not always significant enough to enable an additional case in a day [33]. Despite a paucity of data proving that AI will increase the number of scheduled surgeries, the technology should become more efficient and grow in its applications. As AI further develops, it may also be applied to resource allocation and aid surgeons inside and outside the OR.

In addition, ML programs may adjust to changes in scheduling more dynamically than manual schedulers, providing rapid responses to the unpredictable nature of the OR. Finally, significant benefit may simply exist in preventing cancelations. Over 11% of orthopedic cases are canceled, with over half citing lack of time as the primary reason [34]. Preventing case cancelation through improved scheduling itself could help reduce the backlog.
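As a sketch of the similarity-based idea behind such schedulers, the following Python snippet predicts case duration with a plain k-nearest-neighbors regressor over a few of the variables named above (age, surgeon-predicted length, relative value units). This is an illustrative assumption, not the published 3-step cascade, and the data are synthetic.

```python
# A minimal sketch of similarity-based case-duration prediction: find the
# historical cases most similar to a new case and average their durations.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2000
# Features per historical case: patient age, surgeon-predicted minutes, RVUs.
X = np.column_stack([
    rng.normal(60, 15, n),    # age
    rng.normal(120, 40, n),   # surgeon-predicted case length (minutes)
    rng.normal(20, 6, n),     # relative value units of the case
])
# Synthetic actual durations: correlated with the estimate plus noise.
y = 0.9 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 15, n)

# Standardize features so "similarity" weighs them comparably.
model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=25))
model.fit(X, y)

new_case = np.array([[55.0, 150.0, 24.0]])
print(f"Predicted duration: {model.predict(new_case)[0]:.0f} minutes")
```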
Preoperative Planning

Considerable time and resources are spent planning orthopedic surgeries. First, the patient must be clinically evaluated to determine whether operative treatment is warranted. ML has been used to accurately determine whether a patient with hip complaints should undergo surgery using hospital data [35]. While AI should not replace provider judgment, and caution is necessary to avoid a biased clinical assessment, it could prove a valuable tool for increasing efficiency in clinic visits, allowing providers to assess more patients.

Once surgical necessity is determined, imaging is often used to plan component size and position [36]. Allowing AI to help draft preoperative plans could optimize the planning process. Lambrechts et al. found that preoperative AI-generated TKA plans required 39.7% fewer adjustments by the surgeon compared with standard manufacturer-provided plans [37], allowing surgeons to develop patient-specific surgical plans more quickly. Relatedly, considerable interest surrounds using patient-specific instrumentation (PSI) in arthroplasties to decrease waste and provide more personalized prostheses. Overall, PSI has not been shown to be cost-effective and can be time-consuming both developmentally and intraoperatively if changes are necessary [38-40]. In 2023, Li et al. used neural network structures to accurately interpret computed tomography (CT) images and provide more accurate specifications for PSI for TKA without increasing preoperative time [41]. By quickly providing accurate PSI measurements, AI could decrease time spent in the OR trialing different component sizes.

Will increased efficiency in preoperative planning translate to more procedures and a decreased surgical backlog, though? Prior analyses have generally found approximately a 5-minute reduction in surgical time when using PSI compared with standard instrumentation [38,39,42,43]. This change alone would likely not be sufficient to add additional surgical cases. The previously mentioned study using CT imaging found that their model took approximately 3.74 ± 0.82 minutes for the CT interpretation and 35.10 ± 3.98 minutes for the PSI design, compared with a respective 128.88 ± 17.31 minutes and 159.52 ± 17.14 minutes for standard methods [41]. This represents a significant reduction in the time needed to generate PSI. Hopefully, as AI models improve, surgeons' preoperative planning time will continue to decrease, and intraoperative time will decrease as implants become more precise and require fewer adjustments. Together, this may become efficient enough to increase weekly surgical volumes.
Predicting Postsurgical Outcomes

Using AI to predict clinical courses following orthopedic surgery and the risk of potential complications has been summarized excellently by several review articles [18,20,44,45]. This can be useful for identifying which patients may require planned, extensive care or control of comorbidities. ML has been used to effectively predict improvement after THA using partially modifiable risk factors, which could help providers optimize patient health before surgery [46]. Failure to improve postoperatively and readmission both divert resources from future surgeries and may be minimized by appropriate planning and risk reduction. Another application of ML includes studies predicting length of hospital stay for arthroplasty patients [47,48]. Valid estimates of length of stay translate to more efficient hospital scheduling and optimization of procedural volume. Overall prognosis and morbidity are important, too, not just for hospital efficiency but also for patient safety. ML models were used retrospectively to demonstrate superior prediction of mortality and adverse events following spine surgery [49]. These models may even identify patients at too high a risk of adverse events to undergo surgery. AI could prevent poor surgical candidates from being scheduled and increase availability for better candidates who will benefit from surgery.

Numerous complications of orthopedic surgery can occur and may require dedicated follow-up [50]. Revision arthroplasty is often time-consuming with a significant resource burden [51]. ML programs have predicted major complications from THA more effectively than current risk calculators [50]. Similarly, programs have accurately predicted the risk of postoperative falls, allowing for the implementation of fall prevention measures [52]. Postsurgical falls represent a significant resource burden and can result in complications such as pain, wound dehiscence, dislocation, and fracture [52-55]. Fall prevention measures are economically beneficial [56] and decrease the need for additional office visits, revision surgery, or fracture care, allowing orthopedists to focus on new elective procedures.

The effect of predicting postsurgical outcomes on overall case volume is difficult to quantify. However, it makes intuitive sense that hospitals that are well prepared for potential complications can achieve greater efficiency with their resources. Predicting length of stay in particular could work in conjunction with AI-influenced surgical scheduling to improve OR efficiency.

Managing Waitlists

The surgical waitlist itself could be a target for AI. Researchers in China developed an AI-assisted module to help patients order necessary laboratory and imaging tests automatically based on their symptoms before the clinical evaluation [57]. This algorithm used DL to analyze medical records and develop likely diagnostic classifications based on patient clinical features. While this incurs the risk of burdening the system with unnecessary testing and should not replace a clinical visit, similar modules could be helpful for primary care physicians and mid-levels to improve the workup for orthopedic referrals. These modules could help providers work through an orthopedic-specific workflow, guiding them through an algorithm akin to the one used in orthopedic office visits, and better identify surgical candidates. This could reduce nonoperative visits for orthopedists and allow them to see more surgical patients.
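As a concrete illustration of the outcome-prediction models discussed above, the following is a minimal sketch that trains a classifier to flag extended length of stay from a few hypothetical preoperative features. The model choice (gradient boosting) and the features are assumptions for illustration, not those of any cited study, and the data are synthetic.

```python
# Illustrative sketch: predict extended postoperative stay from preoperative
# features, then evaluate discrimination with AUC on held-out data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 3000
# Hypothetical preoperative features: age, BMI, ASA class, comorbidity count.
X = np.column_stack([
    rng.normal(68, 10, n),
    rng.normal(29, 5, n),
    rng.integers(1, 5, n),
    rng.poisson(2, n),
])
# Synthetic outcome: extended stay (>2 days), loosely driven by the features.
logits = 0.04 * (X[:, 0] - 68) + 0.05 * (X[:, 1] - 29) + 0.5 * X[:, 2] - 1.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"AUC: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.2f}")
```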
With a waitlist, prioritizing patients appropriately is important to minimize harm and maximize resources. Considerable research involves ethical methods of prioritizing elective surgical candidates [58]. This has been of increasing interest to countries like the United Kingdom that have dealt with ongoing waitlists worsened following the COVID-19 pandemic [59]. Researchers from the United Kingdom undertook a pilot study with their augmented intelligence system, COMPASS, to aid in prioritizing surgical candidates [59]. Although only 29 patients were included, they found significantly decreased rates of complications and mortality using their program. Similar methods could feasibly be used to manage US orthopedic waitlists and prioritize the appropriate patients. Table I summarizes key applications of AI to address the orthopedic surgical backlog.

Anesthesia and Anesthetic Time

The effectiveness of AI in anesthesia may be similar to that in the prior areas of focus, including improved preoperative analysis to determine the difficulty of the airway and postoperative programs to calculate the risk of patient mortality [60,61]. There has also been considerable focus on closed-loop systems and pharmacological algorithms, as summarized by Singh and Nath, that provide more precise release of anesthetic medications, vasopressors, and paralytics [62]. It is impossible to measure a quantitative impact of these programs with current data, but more precise medication doses could theoretically prevent wait time due to overshooting of medication and thus increase OR efficiency. Although speculative, AI could help with anesthesia coordination to decrease time between cases and improve the time allotted to preoperative blocks.

Future of AI in Orthopedics

Undoubtedly, AI will aid extensively in radiographic interpretation. Studies have reported AI's ability to accurately diagnose musculoskeletal trauma, degenerative disease, and musculoskeletal tumors [18,20,63-67]. We chose not to focus on this area because it seems unlikely to significantly affect the current backlog.

AI is likely to manage robotic-assisted surgeries to improve procedural safety and efficiency [44]. Li et al. demonstrated DL's effectiveness in robotic-assisted TKA by generating 3D models from CT scans [68]. Someday, AI-directed robotics may even operate autonomously, aiding surgeons in the OR [69]. The future may also see advances in regenerative orthopedics with AI programming, including tissue regeneration, stem cell technology, and genomics/epigenomics [70]. Although these developments will influence surgical procedures and patient care, their widespread implementation will not come in time to deal with the current backlog. Table II summarizes likely future AI applications.
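Returning to the waitlist prioritization discussed above, the following is an illustrative sketch of a simple rule-based priority score of the kind such systems might compute. The weights and fields are assumptions for illustration; they are not taken from COMPASS or any cited system.

```python
# Illustrative sketch: rank a surgical waitlist by a weighted priority score
# combining clinician-assigned urgency, wait time, and an estimated risk of
# harm from further delay. All weights and values are hypothetical.
from dataclasses import dataclass

@dataclass
class WaitlistedPatient:
    name: str
    clinical_urgency: int       # e.g., 1 (routine) to 4 (urgent)
    weeks_waiting: int
    deterioration_risk: float   # estimated probability of harm from delay

def priority_score(p: WaitlistedPatient) -> float:
    # Weighted sum: urgency dominates; wait time and risk break ties.
    return (10.0 * p.clinical_urgency
            + 0.5 * p.weeks_waiting
            + 20.0 * p.deterioration_risk)

waitlist = [
    WaitlistedPatient("A", clinical_urgency=2, weeks_waiting=36, deterioration_risk=0.10),
    WaitlistedPatient("B", clinical_urgency=3, weeks_waiting=12, deterioration_risk=0.30),
    WaitlistedPatient("C", clinical_urgency=1, weeks_waiting=60, deterioration_risk=0.05),
]
for p in sorted(waitlist, key=priority_score, reverse=True):
    print(p.name, round(priority_score(p), 1))
```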
Limitations and Challenges

This study is limited by the lack of long-term data on AI use in health care. Most studies involving AI in orthopedics have been published recently 18 and involve ML applications at single institutions. No data exist, to our knowledge, on AI applications across multiple health care systems for an extended time. Systematic reviews and meta-analyses are also lacking. Thus, modeling how much AI can reduce the surgical backlog is challenging. While AI can improve efficiency, its effectiveness against surgical backlogs is presently only speculative, and we cannot draw definitive or quantitative conclusions. Furthermore, the correction of the surgical backlog will likely require a multidisciplinary approach involving many factors beyond AI, including systemic, institutional, and provider factors. This review therefore comments on how advances in AI could be useful in decreasing the surgical backlog but cannot provide quantitative estimates. This review is also not intended to be exhaustive for potential uses of AI in orthopedics.

More broadly, AI implementation faces logistical challenges. While AI has been projected to save health care systems considerable capital in the long term 71, its initial application could prove expensive and labor intensive 72,73. These factors could prevent many health care systems from adopting AI use until costs decrease. Many technologies drastically decrease in cost over time, though, such as genome sequencing falling from tens of millions of dollars to around 1,000 dollars over just 2 decades 74. We are hopeful that AI will similarly become less expensive in time.

Questions also exist concerning the generalizability of AI in health care 18. ML algorithms designed in one location with specific patient populations may not be as accurate in other locations. Ultimately, clinicians must note the limitations of AI's effectiveness and treat it as an adjunctive tool in diagnosis and not a replacement for their expertise 75.

Finally, public response to AI implementation must be considered. Privacy is a concern when dealing with large data sets, and we are cognizant that advances in AI are outpacing regulatory oversight 76. Health care systems must proceed cautiously to avoid protected patient information exposure and consider that the public might view AI in health care with mistrust, particularly should any large-scale data leak occur. Indeed, many patients are concerned about potential loss of confidentiality, biases in algorithms, and communication barriers AI may create between them and their physician 77. Efforts must be made to maintain security and trust amidst these coming changes.

Conclusion

Orthopedic surgery was highly affected by the COVID-19 pandemic due to its high rate of elective procedures 3,6,7. There now exists a persistent backlog of many procedures as patients are waitlisted to receive care 8. AI has emerged as a powerful potential tool with numerous applications in orthopedic surgery 18. Several demonstrated uses could prove helpful in improving the current backlog: improved surgical scheduling 27-31, efficient and precise preoperative planning 35,37,41, accurate postsurgical predictions 46-50,52, and management of surgical waitlists 57,59.
We are optimistic that AI's use in orthopedic surgery will evolve and help the health care system while being mindful that AI's implementation faces numerous challenges 71-73,76,77. In addition, the technologies developed and implemented will likely play an important role in managing future surgical backlogs that may occur. This review, to our knowledge, is the first exploring the applications of AI in orthopedic surgery in the context of the current surgical backlog.

TABLE I. Summary of 4 Key AI Applications to Reduce the Orthopedic Surgical Backlog: Surgical Scheduling, Preoperative Planning, Postsurgical Outcome Prediction, and Waitlist Management
Size effect alive or dead: Evidence from European markets

Abstract

In this paper, we examine whether the size effect is present in four European markets, viz. France, Germany, Spain and Italy. We also investigate whether the size effect can be explained through the sources available in the literature. We employ prominent asset pricing models to ascertain whether the size anomaly in our sample countries passes the risk story. We find the single-factor model, i.e. the capital asset pricing model, to still be relevant in explaining the size anomaly for Spain and Italy. We find the FF3 factor model to be a suitable model for explaining alphas in Germany, while none of the asset pricing models is able to fully explain the size effect for France. Hence, we conclude that the size anomaly does not provide portfolio managers any opportunities for making extra-normal returns for their investors in three of the four sample countries. France, however, provides an opportunity for portfolio managers to exploit the size anomaly. Our findings have implications for portfolio managers, academia as well as regulators.

PUBLIC INTEREST STATEMENT

In this paper, we have tested an important asset pricing anomaly, i.e. the size anomaly, for four West European markets. It is one of the prominent equity market anomalies, which states that small-capitalization firms provide higher returns as compared to large-capitalization firms, and investors can make profitable trading strategies using them. We test the efficacy of the size anomaly using the prominent asset pricing models and find that the size effect in Spain and Italy gets subsumed by the capital asset pricing model, which means that investors cannot form risk-adjusted trading strategies based on size for these markets. Similarly, the size effect in Germany gets explained by FF3. However, we find that France is the only exception, where none of the models is able to subsume the size effect. Hence, we recommend that global fund managers can use this anomaly for France to create profitable trading strategies for their investors.

Introduction

In the past few decades, many asset pricing anomalies, also named capital asset pricing model (CAPM) anomalies, have been investigated in the asset pricing literature: the size anomaly (Banz, 1981), value anomaly (Stattman, 1980), momentum anomaly (Carhart, 1997), volatility anomaly (Clarke et al., 2006; Pandey & Sehgal, 2017) and net stock issues (Loughran & Ritter, 1995; Sehgal & Pandey, 2013), to name a few. Portfolio managers are constantly looking to exploit such anomalies in order to generate extra-normal returns for their investors (Pandey, 2012, 2014). The prevalence of such anomalies suggests that CAPM is unable to fully explain the variation in the cross-section of average stock returns. Out of the various anomalies, the size anomaly, as first observed by Banz (1981), is the most controversial and explored anomaly. Banz found that small-sized firms, due to various risks present in them, provide higher returns as compared to large-cap firms over a long period of time. However, over the past three decades, research regarding the size anomaly has been paradoxical. Initial observations, especially for mature markets, were that the size effect persists after adjusting for market risk (Berk, 1996; Carlson et al., 2004; Gomes et al., 2003). A few studies document the presence of the size effect in micro-firms within the small-size firms (Fama & French, 2008; Horowitz et al., 2000a; Michou et al., 2010).
Similarly, studies of emerging markets also confirmed the presence of the size effect (Chan & Chien, 2011; Hilliard & Zhang, 2015; Mohanty, 2001; Sehgal & Tripathi, 2006). However, more recent literature on size provides mixed results. A few studies document a diminishing size anomaly both in mature (Cederburg & O'Doherty, 2015; Crain, 2011; van Dijk, 2011) and emerging markets (Pandey & Sehgal, 2016; Wu, 2011). However, recent studies have reignited the debate on the size anomaly by showing its persistence in mature markets (Asness et al., 2018; Ciliberti et al., 2017; Leite et al., 2018). Similarly, several studies on the size effect have been conducted for European markets as well (Cakici et al., 2013; Fama & French, 2012; Roy & Shijin, 2018; Zaremba & Czapkiewicz, 2017). Thus, sufficient literature is available for both mature and emerging markets. However, the literature on the size anomaly, especially for West European markets, is limited. This motivated us to conduct this study to examine the presence of the size effect in four major West European countries, namely France, Germany, Spain and Italy. The reason for choosing these economies is that they are large Eurozone countries with well-developed securities markets. The second reason is that, in order to test cross-sectional variation in returns, it is important to have large samples, and these are the only West European economies with a sample of at least 250 companies trading in their respective stock markets. Another important rationale for conducting this study is that although sufficient literature examines the validity of the size effect, limited research has been carried out on its rational sources. The existing literature provides various explanations for the potential size effect, as described below:

(1) Non-synchronous trading: Roll (1977), Scholes and Williams (1977), and Dimson (1979) have shown that shares of infrequently traded firms tend to have biased betas, with non-synchronous trading biasing their betas downwards. Since small-size firms tend to trade infrequently, their betas are underestimated and alphas overestimated. Dimson (1979) corrects market sensitivities (betas) for thin trading by including leads and lags of market returns in the estimation.

(2) Business risks and financial distress: Small firms are expected to be operationally riskier than large firms owing to a less diversified product base, a less efficient workforce, lower bargaining power in the procurement of raw materials, less sophisticated technology, lower customer loyalty and a less committed workforce. Besides higher operational risk, small firms also tend to have greater financial risk exposure owing to a higher cost of borrowing. In addition, small firms may be relatively distressed, i.e. they exhibit low sales and earnings growth rates and hence low or negative economic profits. This relative distress is reflected by low price-to-book value (P/B) ratios for such firms. Fama and French (1993) introduced a three-factor asset pricing model which incorporates size and value factors in addition to the market factor of CAPM. These size and value factors proxy for business risks (operating as well as financial) and relative distress, respectively. The size and value premiums tend to explain extra-normal returns relating to several company-characteristic-sorted portfolios, including those sorted on firm size.
Fama and French (2015, 2017) also introduced two additional factors, namely investment rate and profitability, to their existing three-factor model, popularly named the Fama-French five-factor model, to explain various anomalies including the size effect.

(3) Stock momentum: Momentum in stock market terminology means that past winners shall remain winners and past losers shall remain losers in the future (over the next 12 months). Jegadeesh and Titman (1993) found that a trading strategy of buying stocks that have provided higher returns and selling stocks with low returns over a period of the last 3-12 months generates supernormal profits. Behavioral models (see Barberis et al. (1998), Daniel et al. (1998), Hong et al. (2000) and Chordia and Shivkumar (2002)) have shown that investors' underreaction or overreaction to specific news is the reason for the creation of the momentum anomaly. In contrast, Fama and French (1996) and Conrad et al. (1991) provide a risk argument in favor of the stock momentum factor. Chordia and Shivkumar (2002) provide an economic foundation for the stock momentum factor by showing that past returns contain information about future returns which are predicted based on economic fundamentals. Further, stock momentum may proxy for sector momentum, as winning stocks may belong to winning sectors, while losing stocks may be part of low-performing sectors (see Moskowitz & Grinblatt, 1999). Thus, stock momentum may proxy for sector momentum assuming that winning sectors exhibit higher risk owing to stronger growth potential as compared to losing sectors (see Liu & Zhang, 2008). Carhart (1997) proposed a four-factor model, an extension of the Fama-French three-factor model, by including momentum as a fourth risk factor in explaining stock returns.

(4) The business cycle: Merton (1973), Ross (1976), Cox et al. (1985) and Chen et al. (1986) tested the firm size effect by relating it to the business cycle, proxied by the yield spread of a portfolio of low-grade bonds vis-a-vis a portfolio of long-term government bonds, which is termed the premium factor. They find that a substantial part of the size effect got subsumed by the premium factor. Their argument was that since small firms are riskier than large firms, they are more sensitive to changes in economic conditions. In the Indian context, Sehgal and Tripathi (2006) find that returns on size-sorted portfolios are not sensitive to business cycle conditions.

(5) The January effect: Keim (1983), Brown et al. (1983a, 1983b), and Daniel and Titman (1997) show that the majority of the small-firm effect occurs in the first week of January. The argument given is that since December is the financial year closing, investors tend to sell their stocks in December in order to get tax incentives and start buying them back in the first week of January. This is called the tax-loss-selling hypothesis. Another explanation is the window-dressing hypothesis, which states that in order to show more profits to investors, portfolio managers tend to sell speculative stocks (mainly small-size firms) and buy winners at the end of the year. Once the new year starts, they again start buying speculative stocks. The last argument for the January effect could be information patterns. Since December is the fiscal year closing, the month of January brings increased uncertainty and anticipation due to the forthcoming release of important information, especially for small-size companies, as less information is available in the public domain for such companies.
Moor and Sercu (2013) conducted a comprehensive study to test whether the potential sources of the size effect explain the size anomaly in 39 mature and emerging markets for the period January 1980 to May 2009. They find that none of the existing sources could fully explain the size anomaly. On the other hand, Pandey and Sehgal (2016) did a similar study for the Indian market and found the size effect to be explained by its rational sources, mainly the market, size, value and default premium factors. Mere confirmation of the size effect is not sufficient; it must also persist after controlling for risk factors. Thus, we conduct this study with the following objectives: to confirm if the size effect persists in West European markets for a recent time period, and to evaluate whether its rational sources explain the size effect for our sample countries. The paper is divided into five sections including the present one. We discuss data in Section 2, while Section 3 deals with the research methodology and estimation procedure. The empirical results are provided in Section 4, and summary and concluding remarks are discussed in the last section.

Data

Month-end adjusted closing prices have been taken from January 2008 to March 2018 for 505 companies each for France and Germany, 427 companies for Spain and 503 companies for Italy. Prices have been converted into returns to carry out further estimations. The companies have been selected on the basis of their market capitalization, and the rationale for taking 505 companies is to match our sample size with the S&P 500 Index, which is constituted by 505 companies. For both France and Germany, we get the requisite number of companies, but for Spain and Italy we could get only 427 and 503 companies, and hence we have taken all the companies for these two countries. The CAC 40, DAX 30, IBEX 35 and FTSE MIB indices have been taken to measure market returns for France, Germany, Spain and Italy. 91-day US Treasury bills have been taken to proxy for risk-free rates. In order to create the size and value factors, we use market capitalization and price-to-book value for our sample companies. All the year-end corporate attributes have been taken as of end of December for the sample period. The momentum factor has been created from average past six-month returns. The default spread has been defined as the difference between AAA and BBB+ yields for France; AAA and BBB- for Germany; and AA- and BBB for Spain and Italy, based on data availability. All data have been obtained from the Bloomberg database.

Methodology

We start our investigation by first examining the presence of the size effect in France, Germany, Spain and Italy. Quintile portfolios have been formed based on market capitalization, which proxies for size, for our sample period. In December of year t, we rank securities on the basis of market capitalization. Subsequently, the ranked stocks are divided into five portfolios, i.e. P1 to P5, and equally weighted monthly returns are estimated for these portfolios for the next 12 months (January to December of year t + 1). We call them unadjusted returns. P1 is the small-size portfolio, which contains the smallest 20% of the stocks as measured by market capitalization, while P5 is the big-size portfolio consisting of the 20% of stocks with the highest market capitalization. Portfolios are rebalanced in December of each year, and this process continues till the last year of our sample period.
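To make the portfolio-formation procedure concrete, the following is a minimal Python sketch of the sorting and rebalancing steps just described. The inputs prices (month-end adjusted closes) and mcap (year-end market capitalizations, indexed by calendar year) are hypothetical placeholders for the Bloomberg data; this illustrates the logic rather than reproducing the authors' code.

```python
import pandas as pd

def quintile_portfolio_returns(prices: pd.DataFrame, mcap: pd.DataFrame) -> pd.DataFrame:
    """Equal-weighted monthly returns of size quintiles P1 (smallest 20%)
    to P5 (largest 20%), rebalanced each December on year-end market cap."""
    returns = prices.pct_change()  # month-end closes -> monthly returns
    parts = {f"P{q + 1}": [] for q in range(5)}
    for year in mcap.index:  # e.g. 2008, ..., 2016 formation years
        ranks = mcap.loc[year].dropna().rank(pct=True)  # percentile ranks of December market cap
        holding = returns.loc[f"{year + 1}-01":f"{year + 1}-12"]  # hold Jan-Dec of year t+1
        for q in range(5):
            members = ranks[(ranks > q / 5) & (ranks <= (q + 1) / 5)].index
            parts[f"P{q + 1}"].append(holding[members].mean(axis=1))  # equal weights
    return pd.DataFrame({p: pd.concat(s) for p, s in parts.items()})

# Hypothetical usage: the mean of column "P1" is the mean monthly unadjusted
# return of the smallest-size portfolio reported in the descriptive statistics.
```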
In the next stage, we test whether the size effect can be explained by its rational sources for our sample countries. We start with the standard capital asset pricing model (CAPM) to evaluate if the market factor is able to absorb the cross-section of average returns for the sample portfolios. The familiar excess-return version of the market model is used to operationalize CAPM, wherein excess returns are regressed on excess market returns as shown below:

Rp_t - Rf_t = α + β(Rm_t - Rf_t) + e_t (1)

where Rp_t - Rf_t = excess return on the sample portfolio, Rm_t - Rf_t = excess return on the market factor, α and β are the estimated parameters, and e_t = error term.

In order to test for non-synchronous trading bias, we augment CAPM with the lagged value of the market return (see Dimson, 1979) and further test if the size effect gets absorbed by this modification in the estimation procedure. The equation for the same is as below:

Rp_t - Rf_t = α + β(Rm_t - Rf_t) + β_lag(Rm_t-1 - Rf_t-1) + e_t (2)

where Rm_t-1 - Rf_t-1 is the lagged excess return on the market factor and β_lag is its coefficient. Other terms have the same meaning as in Equation (1).

In order to capture the January effect, we introduce a dummy variable into Equation (2) which takes a value of 1 for January months and 0 for all other months. We use the following equation to test for the January effect:

Rp_t - Rf_t = α + β(Rm_t - Rf_t) + β_lag(Rm_t-1 - Rf_t-1) + c D_t + e_t (3)

where D_t is the dummy variable having a value of 1 for January months and 0 for other months, and c is its coefficient. Other terms have the same meaning as in Equation (2).

We further employ multifactor models to account for the role of missing risk factors, i.e. the size and value factors. We examine if the Fama-French (F-F) three-factor model augmented by lagged excess market returns could explain the returns missed by CAPM. The F-F model equation is as follows:

Rp_t - Rf_t = α + β(Rm_t - Rf_t) + β_lag(Rm_t-1 - Rf_t-1) + s SMB_t + l LMH_t + e_t (4)

where SMB and LMH proxy the size and value factors, and s and l are the coefficients of the SMB and LMH factors, respectively. Other terms have the same meaning as in Equation (3). The SMB and LMH factors are constructed from the intersection of two independently sorted size portfolios and three value portfolios (2 x 3 formations), as in Fama and French (1993). SMB is defined as the difference between the average returns on small and big stocks, while LMH is measured as the difference between the average returns on low and high P/B stocks on a period-to-period basis. Any multicollinearity problems are sorted out before introducing these factors into the F-F framework.

Finally, we examine if the returns on the sample portfolios could be explained by augmenting the Fama-French model with additional risk factor(s). Two augmented versions of the Fama-French model are employed, involving: (1) the Carhart (1997) stock momentum factor and (2) the business cycle premium. The full-blown equation for our augmented F-F versions is as follows:

Rp_t - Rf_t = α + β(Rm_t - Rf_t) + β_lag(Rm_t-1 - Rf_t-1) + s SMB_t + l LMH_t + w WML_t + β_Prem(BBB - AAA)_t + e_t (5)

where WML and (BBB - AAA) are proxies for price momentum and the premium, and w and β_Prem are the sensitivity coefficients. Other terms have the same meaning as in Equation (4). Equation (5) with the momentum factor alone describes the four-factor model. The momentum- and premium-augmented versions of the F-F model are estimated from the above equation by eliminating the factor which does not find a place in the respective specification.

In order to create the momentum factor, each year starting December 2008 we rank the sample stocks on the basis of average past six-month excess returns and form five portfolios, which are then held for the next 12 months, i.e. from January to December. We rebalance the portfolios on a yearly basis and continue till the end of the sample period. Finally, we take the difference between P5 and P1 to form the momentum factor, where P5 comprises past winners while P1 contains past losers. The premium factor has been created by taking the difference between the monthly BBB and AAA corporate bond yields from January 2008 to March 2018.
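As an illustration of how the specifications in Equations (1) to (5) can be taken to the data, below is a minimal Python sketch using the statsmodels formula API. The DataFrame and its column names are synthetic placeholders for the series defined above (excess portfolio and market returns, the lagged market return, the January dummy, and the SMB, LMH, WML and default-premium factors); it sketches the estimation logic, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the 123 monthly observations (January 2008 to
# March 2018); real data would replace these columns.
rng = np.random.default_rng(42)
n = 123
df = pd.DataFrame({c: rng.normal(0.0, 0.04, n)
                   for c in ["mkt", "mkt_lag", "smb", "lmh", "wml", "prem"]})
df["jan"] = (np.arange(n) % 12 == 0).astype(int)      # January dummy
df["exret"] = 0.9 * df["mkt"] + rng.normal(0.0, 0.02, n)

specs = {
    "CAPM (Eq. 1)":    "exret ~ mkt",
    "Dimson (Eq. 2)":  "exret ~ mkt + mkt_lag",
    "January (Eq. 3)": "exret ~ mkt + mkt_lag + jan",
    "FF3 (Eq. 4)":     "exret ~ mkt + mkt_lag + smb + lmh",
    "Carhart":         "exret ~ mkt + mkt_lag + smb + lmh + wml",
    "Premium":         "exret ~ mkt + mkt_lag + smb + lmh + prem",
}
for name, formula in specs.items():
    fit = smf.ols(formula, data=df).fit()
    # The intercept is the portfolio alpha; an insignificant alpha means the
    # specification absorbs the portfolio's average excess return.
    print(f"{name}: alpha = {fit.params['Intercept']:.4f} "
          f"(t = {fit.tvalues['Intercept']:.2f})")
```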
Descriptive statistics

We start by providing the unadjusted returns and their descriptive statistics for the portfolios of our sample countries in Table 1. We find that the mean monthly unadjusted returns are extremely high for P1 (the portfolio of the smallest 20% capitalization companies) as compared to P5 (the portfolio of the highest 20% market capitalization companies) for all sample countries. In fact, the P5 portfolios of Spain and Italy provide negative returns for the sample period. The annualized unadjusted returns for P1 vary from 0.84% for Spain to 14.4% for Germany for our sample period. We further find that there is no major difference between the standard deviations of the lowest and highest portfolios for our sample countries. Thus, we confirm the presence of the size effect in the four European countries. However, mere confirmation of the presence of the size anomaly is not sufficient unless it provides risk-adjusted extra-normal returns.

CAPM results

In order to test whether the size effect gets explained by the risk story, we examine it by operationalizing the one-factor CAPM framework, and the results are provided in Table 2. We start by testing the size effect for France and find the alpha value of P1 to be 0.87% on a monthly basis, which is also statistically significant. In fact, barring P3, none of the portfolios is explained by CAPM. Similarly, for Germany, we find a significant alpha of P1 of 0.93% on a monthly basis. However, we find that for Germany both P4 and P5 are explained by the market factor. CAPM is also able to explain all the portfolios of Spain and Italy. Thus, we find that among our sample countries the size effect persists for France and Germany, wherein P1 provides risk-adjusted annualized returns of about 10.44% and 11.16% for France and Germany, respectively. However, CAPM proves to be a significant model in explaining the size effect for Italy and Spain.

Non-synchronous trading bias

The size effect may result from the presence of non-synchronous trading bias, due to which alphas may be overestimated. In order to check for non-synchronous trading bias, we next implement the Dimson (1979) correction procedure by adding the lagged market factor to the CAPM framework. The results for the same are shown in Table 3. Portfolio betas should be read as the sum of the two betas. We find that for both France and Germany, the t statistics for the lagged market factor are significant for P1. However, the correction procedure has a negligible impact on lowering the values of alpha. Hence, we conclude that non-synchronous trading bias, though present in our sample portfolios, has a limited effect in explaining alphas.

Table 3. Non-synchronous trading bias results. We regress excess returns of our sample portfolios on the excess returns of the market factor as well as the lagged market factor to correct for non-synchronous trading bias. Alpha, beta and lagged beta values are reported for the sample countries.

January seasonality

Next, we check for the seasonality impact on the size effect. Prior literature shows that most of the small-cap effect, due to various explanations like the tax-loss-selling hypothesis or the window-dressing hypothesis, is found in the month of January. We create a dummy factor to test for January seasonality, wherein we put a value of 1 for January months and 0 for all other months. We find the t statistics (Table 4) for all the small-cap portfolios to be insignificant for the dummy variable. Thus, we find no January effect in our sample countries.

Table 4. Seasonality results (January effect). We regress excess returns of our sample portfolios on the excess returns of the market factor, corrected as per Dimson (1979), and a January dummy variable.

Fama-French three-factor and other augmented models

After CAPM, the next prominent asset pricing model used in empirical work has been the Fama and French (1993) three-factor model.
The authors argue that because of operational and financial risk, as well as relative distress, small-cap firms provide higher returns as compared to large-cap firms. Their argument is that the market factor fails to capture these risks, and therefore separate risk factors, viz. size and value, should be created to account for operational and financial risks, and distress risk, respectively. We estimate the Fama-French (F-F) three-factor equation using the Dimson correction and report our results in Table 5. We find that all the remaining portfolios of Germany get explained by the Fama-French three-factor model. Thus, it appears to be a significant model for explaining returns in Germany. We find that the monthly alpha value for P1 in France reduces significantly, from 0.87% under CAPM to 0.33% under the F-F three-factor model, a reduction of about 62%. However, the size effect in France remains an anomaly, as the three-factor model is not able to explain the unexplained portfolios in France (except P4, which gets explained). In the next stage, we augment the Fama-French three factors with additional factors and report results in Table 6. We first employ momentum as an additional factor to the F-F three-factor model. This is popularly known as the Carhart model; we re-run the estimations for the unexplained portfolios of France and provide results in Table 6, Panel A. We find that the momentum factor, though significant for P3 and P4, is unable to explain any of the unexplained portfolios for France. Thus, we observe that the Carhart model fails to explain the alphas for France. Another rational source of the size effect provided in the literature is business cycle conditions. It is argued that small-size companies are sensitive to business cycle conditions, and thus a proxy for the business cycle, named the premium factor, has been deployed in the F-F three-factor model. It can be seen from Table 6, Panel B that employing the premium factor explains the alpha of P2, but P1 and P5 remain unexplained. Thus, the premium factor has a limited role in explaining the size effect for France. Finally, we augment the F-F three-factor model with two additional factors, i.e. investment rate and profitability (the Fama-French five-factor model), to examine if the risk story gets explained in the case of France. Though the model partly explains the alphas, just like the previous augmented models, the Fama-French five-factor model is also unable to fully explain both P1 and P5 for France. Thus, we find that none of the asset pricing models employed by us is able to fully explain the size anomaly for France. Another interesting finding is that in the case of France even P5 is not explained by any of the prominent asset pricing models.

Summary and conclusion

In this paper, we test one of the important asset pricing anomalies, i.e. the size anomaly, for four European markets, namely France, Germany, Spain and Italy. Mere confirmation of the size anomaly is not sufficient, so we examine if the rational sources of the size effect as given in the literature are able to explain the anomaly.
We use data on month-end adjusted closing prices from January 2008 to March 2018 for 505 companies each for France and Germany, 427 companies for Spain and 503 companies for Italy. We confirm the presence of the size effect in each of the respective economies, as shown by their unadjusted returns. We further employ single-factor as well as multifactor models to verify if the size effect survives the test of prominent asset pricing models. We observe CAPM to be significant in explaining the size anomaly for both Spain and Italy. However, the single-factor model is not able to explain the size effect for France and Germany. Thus, we employ multi-factor models to explain the size anomaly for France and Germany. We find that correcting for non-synchronous trading bias has a limited role in explaining the size anomaly. We observe that the seasonality effect, as proclaimed through the January effect, appears to be missing for our sample countries. We next employ multi-factor models to explain the returns of the unexplained portfolios for France and Germany. We observe that the F-F three-factor model is able to explain all the unexplained portfolios in Germany. Thus, the F-F three-factor model appears to be the appropriate asset pricing model for explaining returns in Germany. For France, though the model substantially reduces the alpha of P1, it does not fully explain the portfolio alphas. Finally, we augment the F-F three-factor model with momentum, the default premium, and two additional factors, namely investment rate and profitability, for France. We find that none of the augmented models is fully able to explain the returns on P1 and P5 for France. Thus, we find that none of the prominent asset pricing models is fully able to explain the size anomaly for France. Our results have implications for portfolio managers, academia as well as regulators. Using data for over 10 years, we show that, of the four sample countries, the size effect gets explained in three economies (Germany, Spain and Italy). However, it persists in the case of France. This provides portfolio managers an opportunity to exploit the size anomaly for France and use it for making profitable trading strategies for their investors. Our study contributes to the academic literature on equity anomalies by verifying the presence of the size anomaly in four West European countries. Our results show that the size anomaly gets explained for all the sample countries except France. Future research would have to explore the puzzling behavior of equity returns in France with respect to the size effect. For regulators, we showcase that different markets in Europe are at different stages of market efficiency, as shown by the success and failure of various asset pricing models for the sample countries.
Understanding Labor Market Discrimination Against Transgender People: Evidence from a Double List Experiment and a Survey

Abstract

Using a US nationally representative sample and a double list experiment designed to elicit views free from social desirability bias, we find that anti-transgender labor market attitudes are significantly underreported. After correcting for this concealment, we report that 73 percent of people would be comfortable with a transgender manager and 74 percent support employment non-discrimination protection for transgender people. We also show that respondents severely underestimate the population level of support for transgender individuals in the workplace, and we find that labor market support for transgender people is significantly lower than support for gay, lesbian, and bisexual people. Our results provide timely evidence on workplace-related views toward transgender people and help us better understand employment discrimination against them.

Introduction

Very little is known about labor market discrimination against transgender people. 1 This is in sharp contrast to a substantial and growing literature on the employment experiences of sexual minority populations relative to heterosexual individuals (Klawitter 2015; Neumark 2018; Badgett, Carpenter, and Sansone 2021) and on attitudes toward sexual minorities in the workplace and support for employment non-discrimination protection on the basis of sexual orientation. In this paper, we study Americans' views about transgender managers in the workplace, as well as their support for employment non-discrimination protection for transgender individuals, using a representative online sample of the US population. Understanding labor market views toward transgender people is important, especially in the context of the 2020 US Supreme Court decision in Bostock v. Clayton County, which ruled that transgender people are legally protected from discrimination in employment. Multiple recent studies using population data on transgender people have demonstrated that gender minorities have significantly worse economic outcomes than otherwise similar cisgender people, even though employment discrimination against transgender people is illegal (Badgett, Carpenter, and Sansone 2021; Carpenter, Eppink, and Gonzales 2020; Carpenter, Lee, and Nettuno 2022). Nevertheless, we do not have good economic data on how transgender people are treated by employers, co-workers, or the general public with respect to labor market outcomes. Understanding these attitudes is important as they could affect health outcomes and disparities (National Academies of Sciences Engineering and Medicine 2020) through minority stress, i.e., stress due to internalized homophobia and transphobia, anticipated rejection, constant efforts to hide one's identity, and actual experiences of discrimination and violence (Meyer, 1995). In addition, studying the level of support for employment protection is important for contextualizing its potential effectiveness and for improving relative outcomes for transgender people in the US. Furthermore, policymakers discussing proposed transgender-related policies may want to know whether voters support such policies, and employers or managers considering hiring and promoting transgender individuals may want to know if those individuals would be supported in the workplace.
The fact that we have a very limited understanding of attitudes toward transgender employment rights and transgender people in the workplace is problematic also because a nontrivial share of the population identifies as transgender. Recent Pew Research Center data indicated that 1.6 percent of adults identified as transgender in 2022; the rate among adults under age 30 was 5.1 percent (Brown 2022a). Moreover, 44 percent of adults reported knowing someone who is transgender. These survey-based estimates are likely lower bounds due to individuals' concerns about social stigma and potential discrimination. 2 Therefore, understanding views toward these populations is important as transgender individuals represent a substantial and growing minority. In this paper, we study views toward transgender people in the workplace and support for transgender-related employment non-discrimination rights using an online sample that is representative of the US population with respect to race, sex, and age. Eliciting views about transgender people in the workplace and about transgender employment rights may be susceptible to social desirability bias. For instance, such biases may exist because of the perception that expressing anything other than support for transgender people in the workplace could result in negative reprisals (due, for instance, to the recent rise of 'cancel culture'). This would result in an artificially high rate of stated support for transgender people in the workplace. We overcome these biases, and document their importance and magnitude, by being the first to study transgender-related labor market views using a list experiment technique. This technique has been widely used in psychology, sociology, political science, and economics to elicit sensitive views and attitudes free from social desirability bias. 3 In a list experiment, individuals are presented with a list of statements and asked to report how many of the statements in the list are true for them, but they are not asked whether each specific statement is true for them. In our list experiments, one group of respondents is presented with four statements and another group is presented with the same four statements plus an additional key statement of interest pertaining to their views about transgender people in the workplace (specifically, whether they would be comfortable having a transgender manager or whether they support employment non-discrimination protection for transgender people). Comparisons across lists allow us to back out an estimate of the true share of respondents who agree with each key statement of interest regarding transgender people in the workplace. While the list experiment technique cannot identify which specific individuals agree with the key statements (because individuals only report the total number of statements within each list that are true for them, as opposed to indicating whether each individual statement is true for them), it has the distinct advantage that we can credibly estimate population-level views toward transgender people in the workplace that are free from social desirability bias.
Additionally, toward the end of our survey, we directly ask respondents about the key statements of interest (comfort with a transgender manager and support for employment non-discrimination protection for transgender people), which, when compared with the true share elicited through the list experiments, provides us with estimates of the magnitude of misreporting of attitudes regarding transgender people in the workplace. We can also use group characteristics to examine whether, for example, women on average are more or less supportive of transgender people in the workplace than men. Finally, as discussed in more detail in Section 3, we use a double list experiment to verify the robustness of our findings to using different non-key statements (Chuang et al. 2021) and to increase the precision of our estimates by minimizing the variance (Droitcour et al. 1991; Glynn 2013). Comparing our double list experiment to the direct survey responses, we find that anti-transgender labor market sentiment is significantly underreported (by 6-7 percentage points), consistent with a strong role for social desirability bias. We also find that even after correcting for social desirability bias, 73 percent of people would be comfortable with a transgender manager at work and 74 percent support non-discrimination protection in employment for transgender people. Women, sexual minorities, and Democrats have significantly more positive views and show greater support than men, heterosexual individuals, and Republicans or Independents, respectively. To complement the double list experiment, we then report the results from a descriptive survey. The survey allows us to compare views about transgender people in the workplace and about transgender employment non-discrimination rights in relation to views about lesbian, gay, and bisexual (LGB) people in the workplace and about LGB employment non-discrimination rights. In addition, our survey asked people about their general perceptions regarding the two statements of interest, i.e., their beliefs about the true population share of individuals who would be comfortable with transgender managers and who support employment non-discrimination protection for transgender people. Looking at our survey data, we find that Americans show significantly higher support for LGB people in the workplace and for LGB employment non-discrimination rights relative to support for transgender people in the workplace and for transgender employment non-discrimination rights. Our survey data also demonstrate that respondents severely underestimate the true level of support for transgender people in the workplace among the general population, by 28 to 53 percent. This finding is especially notable given that beliefs about others' views on stigmatized behaviors are shown to impact individuals' own views and behaviors (Bursztyn, González, and Yanagizawa-Drott 2020). It may suggest that support for transgender people in the workplace could be increased by correcting biased beliefs. Taken together, our results provide timely evidence on labor market sentiment toward transgender people in the United States. Although anti-transgender sentiment is underreported, a sizable majority of American adults, nearly 3 in 4, supports transgender people in the labor market, including in positions of workplace authority, and supports employment non-discrimination protection for transgender individuals.
These findings are important given the documented positive effects of employment non-discrimination protections on wages for other minority groups (Donohue and Heckman 1991; Klawitter and Flatt 1998; Neumark and Stock 2006; Klawitter 2011; Delhommer 2020).

Literature review

Our study is related to a large economics literature on the drivers and impacts of discrimination in labor markets (Arrow 1973; Phelps 1972; Becker 1971; Bertrand and Duflo 2017; Neumark 2018). There is also a vast literature on discrimination based on social identity (such as race and gender) (Altonji and Blank 1999; Goldin and Rouse 2000; Bertrand and Mullainathan 2004; Lang and Spitzer 2020). Within this large body of literature, recent research has shown that LGBTQ+ individuals are subject to discrimination in formal markets such as labor and housing (for a review, see Badgett, Carpenter, and Sansone 2021) as well as in domains outside of these formal contexts, such as with respect to prosocial behavior (B. Aksoy, Chadd, and Koh 2021). A small economics literature on employment, earnings, and income for transgender people also has emerged, with most studies finding that transgender people have significantly worse economic outcomes than similarly situated cisgender people (Badgett, Carpenter, and Sansone 2021; Geijtenbeek and Plug 2018; Granberg, Andersson, and Ahmed 2020; Shannon 2022; Carpenter, Eppink, and Gonzales 2020). For example, the most recent evidence from nationally representative US data indicates that non-cisgender individuals have significantly lower employment rates and higher poverty rates than otherwise similar cisgender individuals (Carpenter, Lee, and Nettuno 2022). We contribute to this broad but relatively new body of literature by studying views about transgender managers in the workplace and support for employment non-discrimination protection for transgender individuals. The comparison of views toward transgender individuals relative to LGB individuals in the workplace also provides an important contribution to this literature. As we examine comfort with having a transgender manager, our paper extends the literature examining the employment barriers (e.g., "glass ceilings") faced by women, racial minorities, and sexual minorities in accessing positions of leadership (Albrecht, Björklund, and Vroman 2003; Frank 2006; Giuliano, Levine, and Leonard 2009; Matsa and Miller 2011; C. G. Aksoy et al. 2019; Cullen and Perez-Truglia 2021). We are not aware of any other research that directly examines managerial or supervisory authority among transgender individuals. We also contribute to the growing literature on attitudes towards transgender individuals (Broockman and Kalla 2016; Taylor, Lewis, and Haider-Markel 2018; Luhur, Brown, and Flores 2019; McCarthy 2021; Lewis et al. 2022; Doan, Quadlin, and Powell 2022). Our paper contributes to the literature on list experiments. Several studies in psychology, sociology, and political science have used list experiments to elicit sensitive views and attitudes, including in the context of sexual minority rights. For example, Lax, Phillips, and Stollwerk (2016) have used a list experiment to measure public support for same-sex marriage in the US, finding no evidence of social desirability bias regarding support for same-sex marriage or the inclusion of sexual minority status in employment non-discrimination laws. Other research in these fields has used the list experiment to examine social desirability bias in the context of: support for a female American President (Streb et al. 2008);
support for a Jewish presidential candidate (Kane, Craig, and Wald 2004); racial discrimination (Kuklinski, Cobb, and Gilens 1997); the prevalence of atheists (Gervais and Najle 2018); and the prevalence of risky sexual behaviors among college students (LaBrie and Earleywine 2000). Within economics, list experiments have been more limited, with some notable exceptions. For example, development economists have used this method to study sexual activity and reproductive behavior in Uganda (Jamison, Karlan, and Raffler 2013) as well as Cameroon and Cote d'Ivoire (Chuang et al. 2021). List experiments have also been used in economics to examine corruption in public procurement in Russia, use of loan proceeds in Peru and the Philippines (Karlan and Zinman 2012), illegal migration rates in Ethiopia, Mexico, Morocco, and the Philippines (McKenzie and Siegel 2013), hiring discrimination against women in Egypt (Osman, Speer, and Weaver 2021), and intimate partner violence in Peru (Agüero and Frisancho 2022). Our study is most closely related to Coffman, Coffman, and Ericson (2017), who conducted a list experiment in 2012 to study anti-LGB sentiment using an Amazon Mechanical Turk sample. They showed that the magnitude of anti-LGB sentiment is significantly understated. Our results offer an important complement to their findings as the first list experiment evidence on views about transgender managers in the workplace and employment non-discrimination protection for transgender people.

List experiments

We use a list experiment technique (also called the "item-count technique", "unmatched count", or "veiled approach") that was pioneered by Miller (1984). 4 As mentioned in the introduction, respondents are given a list of statements and asked to report how many statements (but not which specific ones) are true for them, thus providing an extra layer of anonymity and increasing privacy (Coutts and Jann 2011). Participants are either assigned to a treatment group or a control group. In the control group ("short list"), participants are given a list of statements and asked to indicate how many of those statements are true for them. In the treatment group ("long list"), participants are given the same list of statements plus a key statement of interest (in our context, a statement about views towards transgender individuals in the workplace). 5 The difference in means between the two lists gives us the estimated share of the population with the key attribute of interest. Table 1 presents one of the lists used in our study.

{Table 1 here}

To formally illustrate how we use the list experiment technique to estimate the share of the population with the key attribute of interest, we follow the standard estimation technique implemented in previous studies (Tsai 2019).
Suppose that we have a sample of n participants. Let T_i be the indicator variable equal to one if participant i sees the long list instead of the short list, and 0 otherwise. Let Z_i* be the potential answer to the key statement by participant i, and let Z_ij be the potential answer to the j-th non-key statement by participant i (j = 1, ..., 4 in our application). Using the list in Table 1, the total count reported by participant i is Y_i = Z_i1 + Z_i2 + Z_i3 + Z_i4 + T_i Z_i*. Under certain assumptions, 6 the difference-in-means estimator presented below gives us the estimated share of the population with the key attribute (i.e., E(Z_i*)):

(Σ_i T_i Y_i) / (Σ_i T_i) - (Σ_i (1 - T_i) Y_i) / (Σ_i (1 - T_i)) (1)

To increase power and reduce variance, we extend this technique by using double list experiments (Droitcour et al. 1991; Glynn 2013). For each key statement, we have a set of two lists (e.g., list A and list B) that are designed to be positively correlated. Each list contains four non-key statements. Half of the participants (randomly selected) see list A (a short list) and then list B with the key statement (a long list). The other half see list A with the key statement (a long list) and list B (a short list). We also randomized the order at the subject level such that some participants see list A first while others see list B first. The differences in means between the short and long lists from both lists A and B are averaged, providing us the true share of the population with that key attribute. Formally, let Y_i^A and Y_i^B be the total number of items in lists A and B, respectively, that are true for participant i; the estimated share of the population with the key attribute is then given by the average of the two list-specific difference-in-means estimates, (1/2)[(mean of Y_i^A among those who saw list A long - mean of Y_i^A among those who saw list A short) + (mean of Y_i^B among those who saw list B long - mean of Y_i^B among those who saw list B short)]. Thanks to this extension, it is possible to obtain more precise estimates, since all respondents provide information about the key statements, unlike the single list experiment, in which only respondents seeing the long list provide such information. The double list method also allows us to verify the robustness of our findings to using different non-key statements (Chuang et al. 2021). In this experiment, we test two key statements:

Transgender manager: "I would be comfortable having a transgender manager at work."

Transgender employment non-discrimination protection: "I think the law should prohibit employment discrimination against transgender individuals."

We use the double list experiment technique for both statements, and thus we have a total of four lists: Lists 1A and 1B for the transgender manager key statement and Lists 2A and 2B for the transgender employment non-discrimination protection key statement. 7 Following the recommendation in the literature (Aronow et al. 2015), we also ask questions directly regarding the key statements to all participants toward the end of the survey. The direct questions provide baseline estimates of the share of the population with the key attributes, and this allows us to estimate the size of the bias due to social desirability and misreporting of stigmatized attitudes.
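As a concrete illustration of the single- and double-list estimators just described, here is a minimal Python sketch; the function names and the boolean assignment vector are illustrative, not part of the study's materials.

```python
import numpy as np

def single_list_estimate(y: np.ndarray, long: np.ndarray) -> float:
    """Eq. (1): difference in mean reported counts between respondents who
    saw the long (key-statement-augmented) list and those who saw it short."""
    return y[long].mean() - y[~long].mean()

def double_list_estimate(y_a: np.ndarray, y_b: np.ndarray,
                         a_long: np.ndarray) -> float:
    """Average of the two list-specific difference-in-means estimates.
    a_long flags respondents who saw list A long (and therefore list B
    short), so every respondent contributes to both estimates."""
    est_a = y_a[a_long].mean() - y_a[~a_long].mean()   # list A: long minus short
    est_b = y_b[~a_long].mean() - y_b[a_long].mean()   # list B: long minus short
    return 0.5 * (est_a + est_b)

# Hypothetical usage, with y_a and y_b the reported counts for lists A and B
# and a_long a boolean treatment-assignment vector:
# share = double_list_estimate(y_a, y_b, a_long)
```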
Survey questionnaire

All subjects first participate in the list experiment section and then move to the survey. 8 Subjects are not allowed to skip any questions in the list experiments and are not allowed to go back and revise their answers at any point. However, subjects are always free to leave the study whenever they wish. The order of the questions in the survey section is the same for all respondents. In addition to the two questions (relating to the two key statements from the list experiments) asked directly in the survey, we collect standard demographic and socio-economic variables, and we ask additional direct questions to measure participants' views toward LGB individuals in the workplace. Finally, at the very end of the survey, we also elicited participants' beliefs about the two key statements used in the list experiment. 9 Specifically, the participants were shown the following statements and asked to fill in the blank with their best guess:

"Out of every 100 people in the general US population, I think approximately _____ out of 100 would be comfortable with having a transgender manager at work."

"Out of every 100 people in the general US population, I think approximately _____ out of 100 would agree that the law should prohibit employment discrimination against transgender individuals."

4 We decided to use list experiments instead of the randomized response technique (where respondents use a private randomization device, e.g., flip a coin, to determine whether they answer either a sensitive or innocuous question) because the randomized response technique is more difficult to implement online, subjects trust the randomized response technique less than the list experiment (Coutts and Jann 2011), and participants may not respond to the randomization device relied upon by the randomized response technique as instructed.

5 The order of the statements is randomized at the individual level in both the short and long lists. This serves two goals. First, if we do not randomize the order of the key statements and list them as last, as done by many papers in this literature, we worry that seeing a transgender-related statement as last in all lists could draw extra attention to the key statements. Second, the order of the statements might also have an impact on subjects' answers. By randomizing the order, we eliminate any aggregate effect coming from the ordering of the statements.

6 The list experiment technique relies on three assumptions: treatment randomization, no design effect, and no liar. We discuss these assumptions and provide evidence in support of them in Online Appendix A.

7 Although it is common practice in the literature not to randomize the order of the lists, we chose to incorporate some randomization into our design to control for potential order effects (here, we refer to the order of the lists, not the order of the statements within the list). We provide more explanation on this in Online Appendix A and show that we do not find any significant concerns for order effects.

8 At the beginning of the experiment, respondents signed a consent form and were informed that the purpose of the study was to understand the demographic composition of the respondents and their views on certain economic, political, and social issues. The description of the study did not specifically mention transgender issues, as we did not want to prime respondents or obtain a self-selected sample.

9 We chose not to incentivize these questions in order to keep the study simple and relatively quick. Although we acknowledge the usual drawbacks of using an unincentivized elicitation method, we think that these data provide novel and valuable insights about participant behavior.
Thus, the magnitude of misreporting we document is likely to be a lower bound to what might occur in other surveys, since most surveys are not conducted with as much privacy and anonymity and thus people may be less prone to social desirability bias even when answering the question directly. Importantly, it is not the case that increased reporting under the veil of the list experiment is simply mechanical. Previous research has shown that list experiments provide increased estimates of prevalence only for stigmatized views: there is no evidence of this technique leading to an increase in reporting of innocuous behaviors (Tsuchiya, Hirai, and Ono 2007;Coffman, Coffman, and Ericson 2017). 11 While designing the list experiments and choosing the non-key statements, we followed best practices in the literature (Glynn 2013). For example, one should carefully determine how many non-key statements to include. The number of non-key statements should be neither too low (to avoid a ceiling effect, i.e., participants reporting that all statements are true for them, thus removing the privacy protection provided by the list experiment) nor too high (to avoid higher variance and measurement error due to respondents' inability to remember or focus on all statements in the list). After carefully examining previous studies, we decided on four non-key statements. In each of the lists, we included a statement that we expected to be true for most people (to avoid a floor effect, i.e., participants reporting zero items, thus also removing the privacy protection provided by the list experiment), another statement that we expected to be false for most people (to avoid a ceiling effect), and the remaining two non-key statements were chosen such that they are expected to be negatively correlated. 12 This approach has the additional advantage of decreasing variance and increasing power. High variance is often an issue because the key statement is aggregated with a number of non-key statements. To some extent, the additional variance is the cost of the higher perceived privacy protection (Glynn 2013). Therefore, list randomization often produces results that are too high in variance to be statistically significant, especially if the attribute, view, or behavior of interest has low prevalence (Karlan and Zinman 2012). Thus, a modal response of 2 out of 4 for the non-key statements is desirable. Finally, in order to increase power further in the double list, we designed the non-key statements in Lists A and B to be positively correlated. Following Chuang et al. (2021), in order to draw less attention to our key statements and increase the validity of our list experiment, some of the non-key statements in our lists are political in nature. Additionally, instead of asking the direct questions right after their corresponding lists, in line with previous studies (Lax, Phillips, and Stollwerk 2016;Chuang et al. 2021), we ask the direct questions after the demographic questions, and together with other questions on income, religiousness, and political affiliation. This order was chosen to limit the participant's focus on the transgender-related statements in the list experiments. Additionally, following Berinsky (2004), we do not provide a "don't know" option in the direct question since individuals who hold socially stigmatized opinions may hide their opinions behind a "don't know" response. 
Finally, prior work has shown that list experiments work better when the stigmatized answer in the related direct question is a "no" instead of a "yes". Thus, we designed our key questions such that the socially stigmatized answer is always a "no".

Data collection and study sample

We coded the study using oTree (Chen, Schonger, and Wickens 2016) and conducted it on an online platform, Prolific, which has been used in many economics studies. We ran our experiment in late January 2022 using Prolific's representative sample of the US population with respect to race, sex, and age. A total of 1,806 participants completed the study. 13 Participants never disclose any identifying information, and the survey is completely anonymous. The attrition rate was very low: a total of 36 participants started the study but did not complete it. Out of those 36, 25 exited the study before seeing the first list experiment. We only use the data of participants who completed the entire study. In addition, we included three attention check questions. Less than 1 percent (n=15) of the participants failed one out of the three attention checks. No participant failed two or more attention checks. Thus, we include all participants in our analysis. The study took about 7 minutes on average to complete, and subjects who successfully completed the study received $1.30 on average, which corresponds to $10.40/hour. 14

{Table 2}

In Table 2, we present summary statistics of our Prolific participants. 15 Comparing our sample to official population estimates from the Census and the American Community Survey (U.S. Census 2021; Ruggles et al. 2022), our sample appears representative not only based on age, ethnicity, and sex (as expected given the sampling methodology) but also with respect to income, marital status, employment status, and urbanicity. Our sample is similarly likely to be Republican but is more likely to be Democrat and less likely to be Independent, and our sample is also more educated than the general US population (U.S. Census 2021; GSS 2021). In terms of region, although we have slightly more people from the Northeast and fewer from the West, overall, the regional distribution is comparable to the US population.

In addition to our Prolific sample, we provide supplemental descriptive evidence from the American National Election Survey (ANES). The ANES is a large nationally representative survey of US adults that is widely used in political science and economics research (Morisi, Jost, and Singh 2019; Fouka and Tabellini 2022). We use publicly available microdata from the ANES 2020 Time Series Study. 16 We use the ANES for two main purposes. First, these data include a 'feeling thermometer' type of question where respondents were asked to rate their feelings toward a variety of groups, including transgender individuals. 17 Below, when we investigate group-specific heterogeneity in views about transgender people in the workplace (e.g., whether women report more positive views than men), we use the ANES patterns as a source of comparison and confirmation. Second, the ANES includes survey items that closely align with the questions we asked our Prolific respondents, such as support for non-discrimination protection on the basis of sexual orientation. 18 As we explain below, the nationally representative ANES returns very similar patterns on questions that are common to both datasets, further suggesting that our Prolific sample is also likely to be representative of the US population.

13 […] responses in the list experiments with and without pilot data and show that this minor change in the instructions did not have an impact on the reported views in the list experiment. Thus, we combine both data sets and report our findings using all 1,806 participants.

14 We check the robustness of our findings by excluding participants who completed the study very quickly or very slowly (as measured by the top and bottom five percent of the study completion time distribution). Our main findings are robust, and the details are discussed in Online Appendix A.

15 Tables B1-B2 in the Online Appendix report sample sizes based on sex at birth, gender identity, and sexual orientation.

16 ANES 2020 data were collected in two waves: shortly before (between August 18, 2020 and November 3, 2020) and shortly after (between November 8, 2020 and January 4, 2021) the 2020 US Presidential Election.

17 Specifically, the 2020 ANES asked respondents "How would you rate transgender individuals?" It also asked respondents "How would you rate gay men and lesbians?" Respondents were asked to provide a number between 0 and 100, with higher numbers indicating more positive views.

Results

In this section, we first present our findings from the list experiment. We then report heterogeneity in workplace-related views toward transgender people based on participants' sex, sexual orientation, and political affiliation. Next, we examine participants' beliefs regarding other people's views towards transgender individuals in the workplace. After that, we describe results from the survey which compare views regarding lesbian, gay, and bisexual managers, and support for employment non-discrimination rights for sexual minorities, to those for transgender managers and support for employment non-discrimination rights for transgender individuals, respectively.

Views towards transgender individuals in the labor market

First, we present our findings from the double list experiments and compare our data to the direct questions. The first two bars of Figure 1 present the proportion of our participants who are comfortable having a transgender manager at work (Transgender Manager) and the latter two bars present the proportion of participants who agree that the law should prohibit employment discrimination against transgender individuals (Trans Employment Non-Discrim). To estimate the true share of the population with the key attribute using the list experiments, we first take the difference in means between the long and the short lists for each key statement, separately for Lists A and B. 19 We then take the average of these two estimates. This average gives us the estimated proportion using the double list method, which is presented as Double List in the figure. The Direct Question bars in Figure 1 are the shares of the population who report comfort with a transgender manager or support for employment non-discrimination protection for transgender people, respectively, that we estimate using the answers to the direct questions in the survey.

{Figure 1}

Looking at the first two bars of Figure 1, we find that discomfort with having a transgender manager in the workplace is significantly underreported. When asked directly, 80.1 percent of our participants say they would be comfortable having a transgender manager at work.
However, when asked indirectly (i.e., using the double list experiment method), we find that the share of participants who would be comfortable with a transgender manager at work is only 73 percent, significantly lower than the estimate from the direct question. These findings are similar when we look at the views towards employment non-discrimination protection for transgender individuals, which are presented in the latter two bars of Figure 1. When we directly ask participants whether they think that the law should prohibit employment discrimination against transgender individuals, 79.5 percent of them say yes. However, looking at our double list experiment, the estimated true percentage of participants who agree with this statement is 73.7 percent, which is significantly lower. Overall, the percentage of the participants who are comfortable having a transgender manager at work and those who agree that the law should prohibit employment discrimination against transgender individuals decreases, in relative terms, by 8.9 percent and 7.3 percent, respectively, when participants are provided an extra layer of privacy thanks to our double list experiment. This social desirability bias that we document in the context of transgender labor market attitudes is comparable in magnitude to previous estimates from work investigating sentiments towards lesbian, gay, and bisexual individuals in various contexts using a single list experiment. Although we focus on the double list method when discussing our data since it gives us the highest precision, we also present our findings using the individual lists in Online Appendix Figure B3 and Panel A of Table B3, which show that our results are robust to using either list. Indeed, for both key statements, the difference between the estimate in List A and the one in List B is statistically indistinguishable from zero. These statistics confirm that our main results are robust across lists and are not driven by the choice of the non-key statements (Chuang et al. 2021). Our findings using direct questions are broadly in line with previous estimates using similar questions. A 2016 survey reported 71.2 percent of respondents agreeing that "Congress should pass laws to protect transgender people from employment discrimination" (Flores, Miller, and Tadlock 2018) and a 2017 US representative survey reported 72.7 percent of the participants agreeing that transgender people should be protected from discrimination by the government (Luhur, Brown, and Flores 2019). 20 Finally, our results are also in line with a 2017 US representative sample vignette study that found 75 percent of Americans supporting employment non-discrimination protection for transgender individuals (Doan, Quadlin, and Powell 2022). Next, we estimate the true population share with our two key attributes using a regression analysis.
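Before turning to the regression, the double-list calculation above and the reported relative decreases can be made concrete in a few lines. This is a sketch with hypothetical variable names, not the authors' code (the study's analysis was run in Stata):

```python
import numpy as np

def double_list_estimate(long_a, short_a, long_b, short_b):
    """Estimate prevalence of the key attribute from a double list experiment.

    Each argument is an array of item counts reported by respondents who saw
    the long (with key statement) or short (without) version of List A or B.
    """
    diff_a = np.mean(long_a) - np.mean(short_a)  # List A difference in means
    diff_b = np.mean(long_b) - np.mean(short_b)  # List B difference in means
    return (diff_a + diff_b) / 2                 # double-list average

# Relative decreases reported in the text: direct question vs double list.
for direct, double_list in [(0.801, 0.730), (0.795, 0.737)]:
    print(round(100 * (direct - double_list) / direct, 1))  # -> 8.9, 7.3
```

The final loop confirms that the "8.9 percent and 7.3 percent" figures are relative decreases (shares of the direct-question estimate), not percentage-point gaps.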
Since we used two lists for each key statement, we estimate the following regression model separately for each list and each key statement using OLS:

Y_i = β0 + β1·Long_i + X_i′γ + ε_i,

where Y_i is respondent i's reported item count, Long_i is an indicator variable that takes the value of 1 if the list was long (i.e., with the key statement) or 0 if the list was short, and X_i is the vector of control variables that includes state fixed effects, demographic controls (subject's age, sex at birth, race, sexual orientation, and sexual attraction), socio-economic controls (subject's education level, employment status, income, current political affiliation, and current religious affiliation), beliefs about the general level of support for the key statements (i.e., support for transgender managers or employment non-discrimination protection for transgender individuals), and additional controls (whether at least one child less than 18 years of age lives in the subject's household, number of people living in the subject's household, marital status, and urbanicity). Thus, β1 gives us the estimated true population share with the key attribute, which is presented in Table 3. Panel A presents the estimated share of the participants who would be comfortable with a transgender manager at work and Panel B presents the estimated share of the participants who agree that the law should prohibit employment discrimination against transgender individuals.

{Table 3}

Columns 1 and 5 show the estimated share of the population without any controls. Thus, these estimated shares are the same as those presented in Columns 1 and 2 of Panel A in Table B3. Next, we find that our results are robust to the inclusion of control variables. As we add more controls, the estimated shares get slightly smaller for three out of four estimates. For only one of the estimates, the coefficient increases by a maximum of 1.1 percentage points. All of these provide strong support for the findings discussed above in Figure 1 and Table B3. Since we employed a double list experiment, we can take the average of the estimates from Lists A and B. Taking the average of the coefficients from our most conservative estimates (columns 4 and 8), we find that 71.9 percent of the participants would be comfortable with having a transgender manager at work and 74 percent of the participants agree that the law should prohibit employment discrimination against transgender individuals. These estimated proportions are significantly lower than the estimates obtained by using direct questions (p-value < 0.001 and p-value = 0.005, respectively), further confirming the presence of social desirability bias. To summarize, we show that a sizable majority of adults in the US supports transgender people in the labor market, including in positions of workplace authority. Almost three-fourths of individuals are comfortable with transgender individuals in positions of leadership in the workplace and support laws prohibiting employment discrimination against transgender individuals. However, we also show that many participants do not truthfully report their views regarding transgender individuals in the workplace when asked directly. This could be due to social desirability bias, where some individuals may not feel comfortable expressing their actual sentiments on a socially sensitive topic. These findings imply that research conducted using only survey measures of views towards transgender individuals in the workplace may paint a more optimistic picture of the situation in the US than the reality.
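For completeness, here is how a specification of this form can be estimated. The sketch assumes a pandas DataFrame df whose column names (count, long, and the controls) are hypothetical labels we introduce for illustration; it is not the authors' code:

```python
import statsmodels.formula.api as smf

# OLS of the reported item count on the long-list indicator plus controls;
# the coefficient on `long` estimates the prevalence of the key attribute.
model = smf.ols(
    "count ~ long + age + C(sex) + C(race) + C(state) + C(education)"
    " + C(employment) + income + C(party) + C(religion) + belief",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.params["long"], model.bse["long"])
```

The paper reports clustered robust standard errors; HC1 is used here only to keep the sketch self-contained.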
Perceptions about general views

Next, we aim to understand what our participants think about the views of the general US population toward workplace issues related to transgender individuals. To do this, we elicited participants' beliefs about the two key statements used in the list experiment. More specifically, we asked about participants' perceptions of the views of the general US population towards transgender managers and employment non-discrimination protection for transgender individuals. Figure 2 presents these perceptions regarding comfort with having a transgender manager (Panel A) and support for employment non-discrimination protection for transgender individuals (Panel B).

{Figure 2}

Figure 2 presents two interesting take-away points. First, although the true proportion of our participants who are comfortable having a transgender manager at work is 73 percent, our participants guess on average that only 47.7 percent of the general US population would be comfortable with a transgender manager. That is, respondents underestimate the true level of comfort with a transgender manager by 25.3 percentage points (53 percent of the average guess). Similarly, although we estimated that 73.7 percent of our participants agree that the law should prohibit employment discrimination against transgender individuals, on average they think that only 57.4 percent of the general US population supports laws that prohibit employment discrimination, an underestimate of about 16.3 percentage points (28 percent of the average guess). Second, our participants think that the general US population is more likely to support laws that prohibit employment discrimination than to be comfortable with a transgender manager (57.4 percent versus 47.7 percent, p-value < 0.001). This is an especially interesting finding given that we do not see a difference when we compare the estimated true proportions using the double list experiments in Figure 1 (73.7 percent versus 73 percent, p-value = 0.812). We also study these beliefs separately for those who personally agree with the key statement when asked directly versus those who do not. These findings are presented in Figures B4 and B5. Both figures reveal that, perhaps not surprisingly, there is a positive correlation between individuals' own views and their beliefs (Spearman's correlation coefficients are 0.34, p-value < 0.001, and 0.24, p-value < 0.001, for transgender manager and transgender employment non-discrimination rights, respectively). In other words, people who disagree with the key statements (i.e., who state they would not be comfortable having a transgender manager or who do not support non-discrimination protection in employment for transgender individuals) guess lower levels of support from the general population than people who agree with the key statements. 21

Heterogeneity analysis

In this section we study our main research questions by doing subgroup analyses. More specifically, we compare differences in means in the double list experiments and the direct questions across subgroups based on sex, sexual orientation, and political affiliation. 22 Results are presented in Figures 3 and 4.

{Figure 3}

{Figure 4}

First, we compare women's views to men's views (Panels A in Figures 3 and 4, as well as Table B4). Women have significantly more positive views about transgender individuals and show higher levels of support for employment non-discrimination laws relative to men.
This is true for estimates using both the double list experiments and the direct questions. We find a similar gender difference using the nationally representative ANES data, where women (relative to men) report significantly more positive feelings toward transgender individuals (p-value < 0.001). Furthermore, we find that both men and women misreport their true views, although the difference is not significant for men for the employment non-discrimination protection statement.

Second, we compare views by sexual orientation (Panels B in Figures 3 and 4, as well as Table B5). 23 We find that non-heterosexual individuals hold significantly more positive views than heterosexual individuals regarding transgender people in the workplace. However, the share of non-heterosexual individuals comfortable having a transgender manager (Panel B, Figure 3) is higher than the associated share supporting employment non-discrimination protection for transgender individuals (Panel B, Figure 4), and the difference in the level of support when compared to heterosexual individuals is smaller for the employment non-discrimination outcome in Figure 4 than for having a transgender manager in Figure 3. Moreover, looking at Panel B of Figure 3, we find that heterosexual individuals are significantly more likely to underreport the stigmatized view when asked about their comfort with having a transgender manager relative to non-heterosexual individuals, and this difference is substantial: more than 11 percentage points and statistically significant at the five percent level (as indicated in Table B5).

21 There are several potential explanations. First, we know from the extensive research on social norms that individuals' own beliefs and actions tend to adhere to social norms (Bicchieri 2002). These beliefs may be indicative of individuals' perceived social norms on these sensitive issues, and thus the positive correlation between individual views and the beliefs would be in line with this research. Second, this positive correlation may be due to a false-consensus effect, which is a cognitive bias that causes people to overestimate how much others are like them. However, it is interesting to note that, even among those comfortable with a transgender manager or who support employment non-discrimination protection for transgender individuals (Panels A in Figures B4 and B5), the average perceived levels of support among the US population are significantly lower than the ones estimated from the double list experiments in Figure 1. Finally, we also acknowledge it could be the case that, ex post, people simply misreport their true beliefs to justify their (dis)agreement with those statements. Future research can shed more light on how these beliefs might interact with participants' own behavior.

22 Following our pre-analysis plan, we also conduct subgroup analyses by race (Table B7), age (Table B8), sexual attraction (Table B9), socio-economic status (Tables B10-B13), religious affiliation (Tables B14-B15), and geographical location (Table B16). We do not find significant differences in support for transgender people in the workplace associated with race, income, or employment status. We do find that support for transgender people in the workplace is significantly higher among younger individuals, those who are not exclusively attracted to a different sex, and non-religious people.

23 We classified those who answered yes to "Are you heterosexual/straight?" as heterosexual, and those who answered no as non-heterosexual.
In fact, we do not find any significant evidence of misreporting by non-heterosexual individuals regarding their comfort with having a transgender manager: their views are similar across both elicitation methods. Looking at Panel B of Figure 4, we find that both heterosexual and non-heterosexual individuals misreport their true views about non-discrimination protection, and the misreporting is marginally significant for non-heterosexual individuals.

Lastly, we also compare views across political affiliations. Results are presented in Panel C of Figures 3 and 4 (and Table B6). Several insights emerge. First, in both figures, Democrats' views regarding transgender individuals in the workplace are more positive than Independents' views, which are themselves more positive than Republicans' views, using both elicitation methods. This political divide we observe in our dataset is consistent with the political divide in general acceptance of transgender individuals shown by a 2021 Pew Research Center survey (Brown 2022b). Similarly, it is consistent with the nationally representative ANES data, where we find that Democrats report significantly more positive feelings towards transgender individuals relative to Independents (p-value < 0.001), who also report significantly more positive feelings compared to Republicans (p-value < 0.001). Second, we find significant underreporting of the stigmatized view about discomfort with having a transgender manager for all three groups. In contrast, when it comes to support for employment non-discrimination protection, we only see significant misreporting by Independents. Meanwhile, the estimated support for employment non-discrimination for both Republicans and Democrats is similar across the two elicitation methods. In line with this, the only significant difference in the extent of misreporting arises when we compare Democrats to Independents (Table B6).

Next, we present regression results where we control for sex, race, age, sexual orientation, sexual attraction, political affiliation, household income, employment status, religious affiliation, region, and beliefs. We estimate the heterogeneous effects of these independent variables using an estimation method specifically designed for double list experiments by Tsai (2019). 24 This method estimates Equation 2 using a linear least-squares estimation method while controlling for independent variables as well as interacting them with the treatment variable. These results are presented in Table 4 separately for the key statement about having a transgender manager (Column 1) and the key statement regarding employment non-discrimination protection (Column 2).

{Table 4}

Overall, the heterogeneity findings presented above are in line with these estimation results. Women and non-heterosexual individuals hold more positive views regarding transgender individuals, although the coefficient estimates are not statistically significant for the employment non-discrimination protection statement. Table 4 confirms our results regarding how one's political party affiliation correlates with their views towards transgender managers and employment non-discrimination protection. In line with our findings discussed in Section 4.2, there is a positive correlation between participants' own views and their beliefs. 25 Table 4 also reveals that participants with less than a Bachelor's degree have significantly less positive views towards transgender managers.
We do not see a significant difference in views across different age groups, religious affiliations, income levels, employment status, or regions. Finally, although not specified in our pre-analysis plan, we also report evidence on heterogeneity in support for transgender individuals in the workplace related to prior managerial experience. 26 Individuals with such experience might plausibly have more information about managerial duties and responsibilities, and they are also more likely to be in positions that must comply with new non-discrimination regulations post-Bostock. We find that support for transgender individuals in the workplace is higher among individuals without managerial experience (Table B13). Moreover, the difference between the double list estimates and the answers to the direct question on comfort with a transgender manager is larger among those with managerial experience (p-value = 0.101); i.e., individuals with managerial experience misreport more than individuals without managerial experience. 27 These patterns may indicate that targeted managerial-focused interventions may be needed to ensure the equal treatment of transgender people in the workplace.

25 These correlations are also clear from the raw differences in means by beliefs (Table B17). In particular, the difference between the estimated level of support for employment discrimination protection from the double list experiment and from the direct question is significantly larger among those who believed that most Americans would support this policy. That is, we find higher social desirability bias among respondents who believe most Americans would support employment discrimination protection for transgender individuals.

26 We did not ask about managerial experience in our survey, but Prolific collects that information for a majority of the sample, and we use that information here.

27 These patterns with respect to prior managerial experience are especially interesting given that such experience is positively correlated with education, and we see the opposite pattern for education: individuals without a bachelor's degree have significantly less comfort with a transgender manager than individuals with a bachelor's degree or higher. Together, these patterns suggest that there is something unique about managerial experience that is related to negative views toward transgender people in the labor market.

Comparison of workplace-related views toward transgender individuals relative to LGB individuals

So far, we have focused our analysis on views regarding transgender managers and support for employment non-discrimination protection for transgender people. It is also interesting to examine how these views compare relative to views regarding lesbian, gay, and bisexual individuals in these same contexts. As described in Section 3, in the survey we asked questions that allow us to examine these differences directly. Results are presented in Figure 5.

{Figure 5}

We find that support for transgender managers in the workplace is significantly lower than support for lesbian, gay, and bisexual managers (see the first two bars of Figure 5). Participants are 9.6 percentage points less likely to report being comfortable having a transgender manager relative to an openly lesbian, gay, or bisexual manager.
Looking at support for employment non-discrimination protection (the latter two bars of Figure 5), again, we see that participants are less likely to support such laws when those laws are designed to protect transgender individuals as opposed to lesbian, gay, and bisexual individuals. This pattern is further supported by the nationally representative ANES data indicating that feelings toward lesbian women and gay men are significantly more positive than feelings toward transgender individuals (p-value < 0.001). 28 The pattern is also consistent with previous studies measuring attitudes towards sexual and gender minorities (Lewis et al. 2017; Flores, Miller, and Tadlock 2018; Lewis et al. 2022). 29

28 For reference, ANES data indicate that Americans have more positive feelings toward Jewish people and Black people than toward transgender individuals. Americans also have similar feelings towards Muslim and transgender individuals, while their feelings toward transgender people are more positive than their feelings toward feminists and individuals who participate in the Black Lives Matter movement.

29 Notably, the share of our Prolific respondents who support employment non-discrimination for sexual minorities (84.9 percent) is very similar to the share of nationally representative ANES respondents who favor laws to protect gay men and lesbian women against job discrimination (86.6 percent). Moreover, the shares of our respondents who support LGB managers (89.7 percent) and LGB non-discrimination (84.9 percent) are comparable to a previous study in which 83.8 percent of Mechanical Turk participants indicated that they would be happy to have a lesbian, gay, or bisexual manager at work and 85.6 percent said that they believe it should be illegal to discriminate in hiring based on someone's sexual orientation. Thus, our data on support for LGB people in the workplace are in line with previous well-designed surveys, including the nationally representative ANES that was fielded less than 24 months prior to our experiment.

Conclusion

We report the results of a double list experiment and a survey designed to provide timely information on Americans' views toward transgender people in the workplace and support for transgender employment non-discrimination rights. As sexual and gender minorities became protected by federal employment non-discrimination protections as recently as Summer 2020, we sought to gauge workplace-related sentiment toward gender minorities using an elicitation method that removes social desirability biases which might artificially inflate support for transgender people in the workplace and transgender employment non-discrimination rights. Our double list experiment yielded three key findings. First, anti-transgender labor market sentiment in our representative online sample was significantly underreported, consistent with the presence of social desirability bias and pressure to report comfort with transgender managers and support for transgender employment non-discrimination protections. Second, despite the presence of significant underreporting of anti-transgender sentiment, overall levels of true comfort with having a transgender manager at work and support for employment non-discrimination protection for transgender people were well over 70 percent. Thus, a sizable majority of adults in the US support transgender people in the workplace and transgender employment non-discrimination rights.
Third, this support varied across demographic groups, with more support among women, sexual minorities, and Democrats. Our survey yielded additional insights on views toward transgender people in the labor market in the United States. We found that people severely underestimate the level of comfort with having a transgender manager at work and the level of support for employment non-discrimination protection for transgender people. We also found that survey respondents reported more support for lesbian, gay, or bisexual people in the workplace and employment non-discrimination rights for lesbian, gay, or bisexual individuals than for transgender people in the workplace and for transgender employment non-discrimination rights, respectively.

Our results are highly relevant for policy. Indeed, they show large popular support behind the 2020 Supreme Court ruling in Bostock v. Clayton County banning employment discrimination against transgender people. They also emphasize the importance of accounting for social pressure when measuring support for sensitive policies, since people may misreport their true beliefs: people's actual views are the ones that will guide their voting choices between candidates supporting or opposing policies to extend transgender rights. In addition, our findings on the mismatch between beliefs and actual views suggest that there may be scope for informational interventions to improve labor market outcomes for transgender individuals. Specifically, given that most respondents underestimate the overall level of support among the US population for transgender managers and employment non-discrimination laws protecting transgender individuals, informing individuals about the actual level of support for transgender individuals in the workplace could potentially shift individuals' views, in line with other studies on gender norms (Bursztyn, González, and Yanagizawa-Drott 2020). If these mismatches between beliefs and actual views are not corrected, such misperceptions could lend legitimacy to anti-transgender policies that most people may not support. Finally, our results indicate that transgender-specific labor market interventions may be necessary to achieve workplace equality for gender minorities, since individuals report significantly more positive views regarding LGB-related workplace support than transgender-related workplace support.

Figure 5 notes: Comparison of views toward transgender individuals relative to LGB individuals and issues. * p < 0.10, ** p < 0.05, *** p < 0.01. 95-percent confidence intervals reported with horizontal range plots. The numbers above the horizontal bars are the differences between the two groups at the base of each horizontal bar.

Table 2 notes: Race categories are not mutually exclusive (participants could select more than one option). The variable "Employed" includes both "employed for wages" and "self-employed". Source: 2022 Prolific List Experiment.

Table 3 notes: * p < 0.10, ** p < 0.05, *** p < 0.01. Robust standard errors clustered in parentheses. Transgender manager key statement: "I would be comfortable having a transgender manager at work". Trans employment non-discrimination protection key statement: "I think the law should prohibit employment discrimination against transgender individuals". Demographic controls include subject's age, sex at birth, race (including missing indicator), sexual orientation, and sexual attraction.
Socio-economic factors and beliefs include subject's education level, employment status, income, current religious affiliation, political affiliation, and beliefs about the general level of support for transgender managers (Panel A) or employment discrimination protection for transgender individuals (Panel B). Additional controls include whether at least one child less than 18 years of age lives in the subject's household, number of people living in the subject's household, urbanicity, and marital status. OLS estimates. Source: 2022 Prolific List Experiment.

Table 4 notes: * p < 0.10, ** p < 0.05, *** p < 0.01. Robust standard errors clustered in parentheses. Transgender manager key statement: "I would be comfortable having a transgender manager at work". Trans employment non-discrimination protection key statement: "I think the law should prohibit employment discrimination against transgender individuals". Coefficients obtained using the Stata command kict ls (Tsai, 2019), performing least squares estimation for a double list experiment. The dependent variables are the reported true number of statements for the transgender manager lists (Column 1) and the employment non-discrimination protection lists (Column 2). The treatment variable is an indicator variable equal to 1 for the first long list (List A) containing the corresponding key statement and the second short list (List B), 0 for the first short list (List A) and the second long list (List B). All estimated coefficients of the interactions of the treatment variable with the observable characteristics are reported except for the variable "missing race".

A1. Experimental design details

Although it is common practice in the literature not to randomize the order of the lists, we chose to incorporate some randomization into our design to control for potential order effects (here, we refer to the order of the lists, not the order of the statements within the list). More specifically, we created four paths that a participant can follow. KS 1 and KS 2 stand for the transgender manager key statement and the transgender employment non-discrimination protection key statement, respectively. Manager List A, Manager List B, Employ Non-Discrim List A, and Employ Non-Discrim List B can be seen in the instructions in Online Appendix C. Half of our participants saw List As first, and the other half saw List Bs first. When we compare the distribution of answers across these two orders using Pearson's chi-square test (i.e., comparing responses in Path 1 to Path 4 and Path 2 to Path 3), we do not see any significant differences between the lists.

A.2.1. Data quality checks

As discussed in Section 3.2, we carefully constructed each list to avoid floor and ceiling effects (i.e., participants reporting zero items or all items, thus removing the privacy protection provided by the list experiment). We check for ceiling and floor effects and present findings in Figures B1-B2. As can be seen in these figures, only a very small share of our participants reports the highest or lowest possible number of items in each of the lists. Thus, we conclude that the floor and ceiling effects are negligible in our experiment. Additionally, if the distributions of responses had followed a uniform distribution, it would have indicated that most respondents provided random answers. As shown in Figures B1 and B2, it is therefore reassuring to note that our distributions of responses do not follow such a uniform distribution.
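A check of this kind is simple to script. The sketch below assumes a pandas DataFrame responses with hypothetical columns list_id (which list the respondent saw) and count (reported items); the names are ours, not the authors':

```python
import pandas as pd

def floor_ceiling_shares(responses: pd.DataFrame, max_items: int) -> pd.DataFrame:
    """Share of respondents at the floor (0 items) or ceiling (all items), per list."""
    g = responses.groupby("list_id")["count"]
    return pd.DataFrame({
        "floor_share": g.apply(lambda s: (s == 0).mean()),
        "ceiling_share": g.apply(lambda s: (s == max_items).mean()),
    })

# e.g., for the short (4-item) lists:
# print(floor_ceiling_shares(responses, max_items=4))
```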
Next, we check the robustness of our main list experiment findings by excluding participants who completed the study very quickly or very slowly, since they may not have been paying as much attention to the study instructions. On average, it took 420 seconds (7 minutes) to complete the experiment. We exclude a total of 183 participants: those who took less than 211 seconds (the fastest 5%) and those who took more than 796 seconds (the slowest 5%). The results are presented in Table B3, Panel D, and show that our findings are robust to removing these participants. Following our pre-analysis plan, we also checked whether some respondents provided the same number for all list experiments (which might be an indication of participants not paying attention). Across all five lists, nobody provided the same number. Looking at the first four lists (thus excluding the list that serves as an attention check), 64 participants provided the same number for all four lists. Our main findings (Figure 1 and Table B3) are robust to the exclusion of these 64 participants.

A.2.2. List experiment assumptions

The validity of a list experiment relies on three assumptions: 1) treatment randomization, 2) no design effect, and 3) no liars. The first assumption means that the sample is split at random. The second assumption means that respondents do not give different answers to non-key statements depending on whether they are in the long list group. The third assumption means that respondents answer the key statement truthfully.

A common practice to check the first assumption, treatment randomization, is to test for differences between the short list and long list groups' responses to important variables in the survey. We do this in Table B18, where we check the differences between the two groups in terms of their demographic covariates. We do not see a significant difference between the two groups except for sex, where one group has slightly more females than the other. We conclude that our randomization of treatment was effective. Moreover, following Gerber and Green (2012), we do not only rely on means comparisons but also employ regression analyses where we control for observable characteristics (as discussed in Section 4.1).

The second assumption, no design effect, requires respondents not to change their answers to non-key statements depending on whether the key statement appears in the list (i.e., whether they see the long list). To clarify, suppose that a respondent in the short list group answers two non-key statements affirmatively. If they had been assigned to the long list group, their answer would have to be either '2' or '3' (that is, they either answer two non-key statements affirmatively or answer two non-key statements plus the key statement affirmatively). It is worth noting that we do not assume that subjects give truthful answers to these non-key statements; we only assume that the answers are consistent across the short and long list groups. A statistical test for the no-design-effect assumption has been proposed in the literature. The first step is to estimate the probabilities of all possible types of item-count responses. If some of these estimated probabilities were a nonsensical value (e.g., a negative value), it would raise doubts about the validity of the no-design-effect assumption. One can then test whether such negative estimates have arisen by chance. In our two list experiments regarding transgender managers (Lists 1A and 1B), none of the estimated probabilities is below zero or above one. The same can be said about List 2A regarding employment non-discrimination protection. For List 2B regarding employment non-discrimination protection, two out of the ten estimated probabilities are slightly below zero. 30 Nevertheless, one cannot reject the null that such estimates have arisen by chance. Therefore, it is possible to conclude that the available evidence supports the "no design effect" assumption.
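The first step of this test, estimating the type probabilities, can be sketched as follows. With four non-key items (J = 4) this yields exactly ten probabilities, matching the count discussed above. The implementation is our illustration of the cumulative-proportion estimator used in design-effect tests (Blair and Imai 2012), not the authors' code:

```python
import numpy as np

def type_probabilities(y_short, y_long, J=4):
    """Estimate pi(y, z) = Pr(y affirmative non-key items, key-item answer z)
    from short- and long-list item counts via cumulative proportions.
    Negative estimates cast doubt on the no-design-effect assumption."""
    y_short, y_long = np.asarray(y_short), np.asarray(y_long)
    F0 = lambda y: (y_short <= y).mean()  # empirical CDF, short-list group
    F1 = lambda y: (y_long <= y).mean()   # empirical CDF, long-list group
    pi = {}
    for y in range(J + 1):
        pi[(y, 1)] = F0(y) - F1(y)                           # key item: yes
        pi[(y, 0)] = F1(y) - (F0(y - 1) if y > 0 else 0.0)   # key item: no
    return pi  # 2 * (J + 1) = 10 probabilities for J = 4
```

By construction these estimates sum to one; any that fall below zero can then be tested against the null that they arose by sampling chance.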
It is not statistically feasible to check the 'no liars' assumption, not only because respondents' answers to the key statement are by design unobserved, but also because their truthful answers are unknown (otherwise there would be no point in using the list experiment technique). By running this experiment on an online anonymized platform, and by making sure when designing the lists that agreeing to all or none of the statements is highly unlikely, we have tried to limit any concerns about this assumption. Indeed, Figures B1 and B2 present the distribution of responses for each list and key statement: the modal response in all lists is 2. Moreover, as noted in the previous section, the percentage of times where the responses are 0 or 4 (5 for long lists) is negligible, meaning that the privacy of responses was protected.

Figure B1: Distribution of responses by list (transgender manager), for Manager List B with and without the key statement. Key statement in the list: "I would be comfortable with having a transgender manager at work." Number of observations: 1,806. Source: 2022 Prolific List Experiment.

Notes to the appendix subgroup tables (* p < 0.10, ** p < 0.05, *** p < 0.01; standard errors in brackets; transgender manager key statement: "I would be comfortable having a transgender manager at work"; trans employment non-discrimination protection key statement: "I think the law should prohibit employment discrimination against transgender individuals"; source: 2022 Prolific List Experiment):

Race (Table B7): Race question: "What is your race? Choose all that apply". "Other or multiple races" includes Black or African American, American Indian or Alaskan Native, Asian or Native Hawaiian or Pacific Islander, Some Other Race, and individuals who selected more than one race (including those who selected "white" as one of their race categories). 13 participants who did not select any race have been excluded from this analysis. Number of observations: 1,793.

Sexual attraction (Table B9): The sexual attraction category "Other" includes participants attracted to both females and males, participants attracted to same-sex individuals (same-sex based on sex at birth), and participants who selected the option "Other" when asked about their sexual attraction. Number of observations: 1,806.

Religious affiliation (Tables B14-B15): Religiosity question: "How important is religion in your life?" Participants who answered "Very Important" or "Somewhat important" are coded as "Religion important in life".
Participants who answered "Not too important" or "Not at all important" are coded as "Religion not important in life". Number of observations: 1,806.

Geographical location (Table B16): Participants are divided into groups based on the US state where they lived at the time of the survey. Number of observations: 1,806.
G-CSF rescue of FOLFIRINOX-induced neutropenia leads to systemic immune suppression in mice and humans

Background
Recombinant granulocyte colony-stimulating factor (G-CSF) is routinely administered for prophylaxis or treatment of chemotherapy-induced neutropenia. Chronic myelopoiesis and granulopoiesis in patients with cancer has been shown to induce immature monocytes and neutrophils that contribute to both systemic and local immunosuppression in the tumor microenvironment. The effect of recombinant G-CSF (pegfilgrastim or filgrastim) on the production of myeloid-derived suppressive cells is unknown. Here we examined patients with pancreatic cancer, a disease known to induce myeloid-derived suppressor cells (MDSCs), and for which pegfilgrastim is routinely administered concurrently with FOLFIRINOX but not with gemcitabine-based chemotherapy regimens.

Methods
Serial blood was collected from patients with pancreatic ductal adenocarcinoma newly starting on FOLFIRINOX or gemcitabine/n(ab)paclitaxel combination chemotherapy regimens. Neutrophil and monocyte frequencies were determined by flow cytometry from whole blood and peripheral blood mononuclear cell fractions. Serum cytokines were evaluated pretreatment and on-treatment. Patient serum was used in vitro to differentiate healthy donor monocytes to MDSCs as measured by downregulation of major histocompatibility complex II (HLA-DR) and the ability to suppress T-cell proliferation in vitro. C57BL/6 female mice with pancreatic tumors were treated with FOLFIRINOX with or without recombinant G-CSF to directly assess the role of G-CSF on induction of immunosuppressive neutrophils.

Results
Patients receiving FOLFIRINOX with pegfilgrastim had increased serum G-CSF that correlated with an induction of granulocytic MDSCs. This increase was not observed in patients receiving gemcitabine/n(ab)paclitaxel without pegfilgrastim. Interleukin-18 also significantly increased in serum on FOLFIRINOX treatment. Patient serum could induce MDSCs as determined by in vitro functional assays, and this suppressive effect increased with on-treatment serum. Induction of MDSCs in vitro could be recapitulated by addition of recombinant G-CSF to healthy serum, indicating that G-CSF is sufficient for MDSC differentiation. In mice, neutrophils isolated from spleen of G-CSF-treated mice were significantly more capable of suppressing T-cell proliferation.

Conclusions
Pegfilgrastim use contributes to immune suppression in both humans and mice with pancreatic cancer. These results suggest that use of recombinant G-CSF as supportive care, while critically important for mitigating neutropenia, may complicate efforts to induce antitumor immunity.

The acute inflammatory response involves rapid influx of neutrophils and monocytes into damaged tissue to clear potential pathogenic threats and initiate wound healing. Damaged epithelial cells and activated fibroblasts at the site of injury produce hematopoietic growth factors and chemokines for long-range communication with the bone marrow to increase production of myeloid lineage cells. This emergency myelopoiesis is evolutionarily important for protection against ischemic damage and a variety of other acute threats.
1 However, in the context of chronic inflammation, the constant strain on the hematopoietic system to produce neutrophils and monocytes can cause these cells to exit the bone marrow in an immature state. These immature monocytes and neutrophils have been loosely called myeloid-derived suppressor cells (MDSCs), a broad term encompassing a variety of different subtypes and lineages capable of inhibiting adaptive immunity. 2 3 In the setting of chronic inflammation, the ability to suppress T-cell responses and thus dampen further tissue destruction may be advantageous.

Cancer grows slowly over time and induces chronic inflammation, driven by the presence of microbes or by factors associated with necrotic cell death such as oxidized DNA, adenosine triphosphate (ATP), or high mobility group box 1 protein (HMGB1), which activate innate pattern recognition receptors in immune cells. Both cancer cells and associated fibroblasts can secrete chemokines and growth factors, with granulocyte colony-stimulating factor (G-CSF), granulocyte-macrophage CSF (GM-CSF), C-C motif chemokine ligand 2 (CCL2), and ligands for C-X-C motif chemokine receptor 2 (CXCR2) being important for production and recruitment of monocyte and neutrophil lineage MDSCs capable of inducing both local and systemic immune suppression. 4 Tumor-associated macrophages can also secrete interleukin (IL)-1β, which further activates MDSCs and correlates with metastasis. 5 In mice, MDSCs are identified as Gr-1+ CD11b+ cells and express a transcriptional program consistent with their ability to sequester metabolites and suppress T-cell responses. 3 6 Immature myeloid cells have been shown to accumulate in the spleen, which serves as a reservoir for replenishment of MDSCs in the tumor microenvironment. 7 Granulocytic MDSCs (Gr-MDSCs) are derived from immature neutrophils and are recruited into the tumor by the C-X-C motif chemokines CXCL1, CXCL2, and CXCL3 acting on CXCR2. Monocytic MDSCs (Mo-MDSCs) express Ly6C and, like monocyte-derived macrophages, are recruited into the tumor by CCL2 acting on C-C motif chemokine receptor 2 (CCR2). 8 9 12

Human MDSCs have been more difficult to quantify due to lack of consensus as to which surface markers best identify granulocytic and monocytic MDSC lineages. 13 MDSCs are functionally defined by their ability to suppress T-cell responses; however, functional assays are difficult to perform with intrinsically short-lived cells that survive for only a few hours after isolation. Nevertheless, most groups agree that CD15+ cells that survive density gradient centrifugation to appear in the mononuclear cell fraction are Gr-MDSCs. Other surface properties of Gr-MDSCs include CD33+ HLA-DR− CD16+ CXCR2+. 13 Surface low-density lipoprotein receptor LOX-1 or CD84 may further distinguish MDSCs of the granulocytic lineage in humans. 14 Similar to mice, human Gr-MDSCs express CXCR2 and can be recruited to tissues by CXCL1, CXCL2, and IL-8. 15 Mo-MDSCs and healthy monocytes cannot be easily separated based on density. Mo-MDSCs in humans are typically identified as CD14+ CD33+ HLA-DRlow. 13

Pancreatic ductal adenocarcinoma (PDAC) induces high rates of MDSC formation in both mice and humans. 11 16-18 20 21 CXCL1 was identified as the dominant tumor cell-secreted factor responsible for conferring a T cell poor, immunotherapy refractory state in mouse models of pancreatic cancer. 20 23 24 In mouse models of pancreatic cancer, depletion of Gr-MDSCs 25 or Mo-MDSCs 10 leads to reduced tumor growth.
Compensatory increases in one myeloid cell population can offset reductions in another, although CD11b agonism appears to target multiple myeloid lineages simultaneously, reducing myeloid cell accumulation in PDAC mouse models and synergizing with checkpoint blockade. 26 Reprogramming of myeloid cells away from an MDSC phenotype using agonistic anti-CD40 or cIAP1/2 antagonism has also shown success in preclinical models. 27 28 In patients with PDAC, MDSCs have been identified in both blood and tumor, and their presence correlates with poor prognosis. 16 29 In a phase Ib/II study, the CCR2 small molecule inhibitor PF-04136309 was evaluated in patients with locally advanced PDAC in combination with FOLFIRINOX chemotherapy versus FOLFIRINOX alone. 32 The authors collected bone marrow biopsies and peripheral blood to show a reduction in both circulating monocytes and tumor Mo-MDSCs in patients receiving the CCR2 inhibitor. Monocyte frequencies in the bone marrow increased, suggesting that CCR2 is required for egress from the bone marrow. Tumor burden also decreased, with more than one-third of CCR2 inhibitor-treated patients becoming eligible for surgery versus none in the FOLFIRINOX control arm. 32 Unfortunately, a study of a CCR2 inhibitor with gemcitabine/n(ab)paclitaxel in patients with advanced PDAC failed to meet its endpoints for safety or efficacy. 34 Why the CCR2 inhibitor performed so well in locally advanced disease but not in the metastatic setting is unclear. One possibility is that inhibition of monocyte trafficking cannot be sustained long-term due to risk of infection, and that these agents may be best used either as a short-term bridge to surgery or in conjunction with therapies that stimulate antitumor T cells. Another possible explanation could involve differences between the chemotherapy backbones used for these two trials. The effects of FOLFIRINOX versus gemcitabine/n(ab)paclitaxel chemotherapy and associated supportive care regimens on hematopoietic cell survival and MDSC differentiation have not been reported.

Current treatments for both localized and advanced PDAC include combination chemotherapy regimens. FOLFIRINOX extends median overall survival in patients with advanced PDAC compared with gemcitabine (11.1 vs 6.8 months). 35 Addition of albumin-conjugated paclitaxel (n(ab)paclitaxel, brand name Abraxane) to gemcitabine extends median overall survival by 1.8 months compared with gemcitabine alone (8.5 vs 6.7 months). 36 FOLFIRINOX and gemcitabine/n(ab)paclitaxel have not been compared head-to-head in a randomized trial for advanced PDAC, and an ongoing trial is addressing this question in the context of identifying molecular predictors favoring one regimen over the other (NCT04469556). In the neoadjuvant setting, a parallel phase 2 trial using FOLFIRINOX or gemcitabine/n(ab)paclitaxel showed similar median overall survival post surgery. 37 38 FOLFIRINOX has a slightly less favorable safety profile, with overall higher rates of grade 3/4 toxicities including fatigue, diarrhea, and need for growth factor use compared with gemcitabine/n(ab)paclitaxel. Nevertheless, both chemotherapy regimens are used in standard practice. 39

Clinical trials for PDAC typically include a chemotherapy backbone combined with a novel agent. 33 Given the success of immunotherapy in other cancer types, many immunotherapy agents are being tested in PDAC, with thus far limited success outside of the 1% of PDAC tumors that are microsatellite instable and can be treated with checkpoint inhibitor therapy. 40
Which chemotherapy backbone would best combine with immunotherapy agents has been a source of much debate. FOLFIRINOX has a perceived slightly higher efficacy, potentially due to responsiveness of the approximately 5% of patients with breast cancer type 1/2 susceptibility protein (BRCA1/2) mutations to platinum-based agents, but the side effect profile limits clinical enrollment and has resulted in gemcitabine/n(ab)paclitaxel being more commonly combined with immunotherapy in clinical trials. 41 To date, there is not a strong scientific rationale for which chemotherapy regimen would best synergize with antitumor immunity in humans. FOLFIRINOX has been reported to increase effector T-cell responsiveness in peripheral blood, although it should be noted that T-cell restimulation assays were performed from frozen peripheral blood mononuclear cells (PBMCs), a condition that does not support survival of Gr-MDSCs during the freeze-thaw process. 42 Resected tumors treated with FOLFIRINOX show a marked influx of myeloid cells into the tumor center, although whether these myeloid cells are contributing to versus a consequence of tumor cell death is unclear. 43 44

Here we evaluated patients with PDAC newly starting on FOLFIRINOX or gemcitabine/n(ab)paclitaxel. We found that FOLFIRINOX, but not gemcitabine/n(ab)paclitaxel, was associated with an increase in Gr-MDSCs as determined by flow cytometry and functional assessment of patient serum. This increase in Gr-MDSCs was caused by pegfilgrastim administered as supportive care for FOLFIRINOX-induced neutropenia. We showed that G-CSF added to healthy serum was sufficient to recapitulate the immunosuppressive effects of post-FOLFIRINOX patient serum. We further developed a mouse model of pegfilgrastim rescue of neutropenia to formally demonstrate that neutrophil-lineage cells arising from G-CSF treatment were more immunosuppressive than similarly isolated cells from cancer-bearing mice alone.

METHODS

Ethics approval

All animal protocols were approved by the Dana-Farber Cancer Institute Committee on Animal Care (protocols #14-019 and #14-037) and are in compliance with the National Institutes of Health and National Cancer Institute ethical guidelines for tumor-bearing animals.

Human samples

Whole blood and serum were collected from patients with PDAC receiving FOLFIRINOX or gemcitabine/n(ab)paclitaxel at Dana-Farber Cancer Institute under protocol #03-189 (see table 1). Blood and serum were used for immune cell and cytokine analysis.

Healthy donor serum was obtained from consenting patients at Massachusetts General Hospital under protocol #21-590. Participants were women between 30 and 75 years of age (inclusive) and had no symptoms of acute illness, no history of immune-mediated disease, no use of immunomodulating medications within 1 month prior to enrollment, received no vaccines within 5 weeks prior to enrollment, and had never received an organ transplant. Blood was collected at two study visits spaced 4 weeks apart. During each visit, physical well-being and eligibility were verified by the study team and a detailed medical history (including relevant comorbidities, current and recent medications, and vaccination history) was reviewed. Serum from healthy donors was used for cytokine analysis controls.

Healthy donor monocytes and T cells were obtained from de-identified leukapheresis cones from the Kraft Blood Donor Center.
Human blood processing
Whole blood was processed within 24 hours of collection. From the fresh whole blood, 200 µl were used for flow cytometry staining. The remaining blood was used for PBMC isolation by density gradient centrifugation using Ficoll.

Flow cytometry
Frozen PBMCs were thawed in a warm bath and then placed in 10 mL of RPMI complete (RPMI 1640 medium (Gibco) supplemented with 10% fetal bovine serum, 2 mmol/L L-glutamine, 1% penicillin/streptomycin, 1% minimal essential media nonessential amino acids, 1 mmol/L sodium pyruvate, and 0.1 mmol/L 2-mercaptoethanol) with DNase I (Sigma-Aldrich). PBMCs were washed once in phosphate-buffered saline (PBS) with 2 mM ethylenediaminetetraacetic acid (EDTA) and stained with the Zombie NIR fixable viability kit (BioLegend) prior to staining for flow cytometry. Stained cells were fixed with 1% formalin (Sigma) and analyzed with an SP6800 Spectral Cell Analyzer (SONY).

Human CD14+ MDSC differentiation
Monocytes were isolated from healthy donor leukopacs using a human CD14 isolation kit (Miltenyi) as per the manufacturer's protocol. Monocytes were resuspended in RPMI complete without FBS and plated into a thermosensitive 6-well plate (Thermo Scientific 174901). 200 µl of human serum (either from healthy donors or patients) was added for a final concentration of 20% serum. 75 pg/mL of M-CSF (Peprotech) was added to each condition. Some monocytes were also cultured with 10 ng/mL of human IL-6 (Peprotech), 10 ng/mL GM-CSF (Peprotech), G-CSF (Miltenyi), IL-20 (Peprotech), or IL-33 (Peprotech). On days 3-5, the medium was changed: 400 µl of fresh RPMI complete without FBS was added, with replenishment of serum and/or recombinant cytokines as indicated. On day 7, the cells were harvested using cold PBS aided by a cell scraper.

Human T-cell proliferation assay
PBMCs were isolated from healthy donor fresh blood using Ficoll-Paque PLUS (Cytiva). The PBMCs were then washed, and T cells isolated using the human Pan T-cell isolation kit (Miltenyi) as per the manufacturer's protocol. After isolation, the T cells were stained with CFSE (Invitrogen), washed in FBS and then cultured in RPMI complete with CD3/CD28 activation beads (Gibco) in a U-bottom 96-well plate. 3 hours later, 50,000 MDSCs were added to each well containing pre-activated T cells. After 3 days of incubation at 37°C, the cells were analyzed by flow cytometry.

Human serum cytokine analysis
On the day of collection, patient serum was centrifuged at 450 g for 15 min, aliquoted and stored at −80°C. Cytokines were then analyzed by cytokine bead array (Eve Technologies, Canada).

Mouse tumor graft and treatment
Female C57BL/6J mice aged 6-8 weeks were purchased from Jackson Labs (stock #000664) and used for experiments after acclimating for at least 1 week to being housed at the Dana-Farber Cancer Institute Redstone Facility. Tumor inoculations were performed as described.41 Briefly, female C57BL/6 mice were subcutaneously injected with 250,000 6694c2 cells.19
Tumor size was measured twice weekly, and tumor volume calculated. Mice were treated with FOLFIRINOX (5-fluorouracil 60 mg/kg, leucovorin 75 mg/kg, irinotecan 50 mg/kg and oxaliplatin 5 mg/kg) or PBS, with or without G-CSF (5 µg/mouse, Neupogen) resuspended in PBS. Five groups of n=5 mice per group were used, for a total of 25 mice per experiment. Group 1 = no tumor, no treatment; Group 2 = tumor, PBS; Group 3 = tumor, FOLFIRINOX; Group 4 = tumor, G-CSF; Group 5 = tumor, FOLFIRINOX + G-CSF. All values were compared with those obtained for the control group of tumor-bearing mice treated with PBS. Mice were euthanized on day 12 post implantation. Tumors were weighed at the time of euthanasia. Investigators were not blinded. Humane endpoints included a body condition score of 2 or less, 20% or more weight loss, tumor ulceration, tumor size >2000 mm3, or other signs of morbidity. No mice in the study were euthanized for humane endpoints or excluded.

Mouse flow cytometry
Blood was collected by retro-orbital bleeding and erythrocytes lysed with red cell lysis buffer (8.26 g NH4Cl, 1 g KHCO3, 37 mg EDTA, 1 L water). Spleens were crushed through a 40 micron cell strainer, washed and lysed with ACK buffer. Bone marrow was isolated from femurs by flushing with PBS using a 27-gauge syringe. Cells were resuspended in flow cytometry buffer (2% FBS in PBS) and stained with Zombie NIR viability dye and other flow cytometry antibodies prior to fixation with 1% formalin. Samples were analyzed using an LSR Fortessa X-20 (BD).

Mouse MDSC suppression assay
T cells were isolated from pooled spleen and lymph nodes of C57BL/6J mice using the Mouse T Cell Isolation Kit (Invitrogen). T cells were washed twice with PBS, counted, and labeled with CFSE (Life Technologies) as per the manufacturer's protocol. 50,000 CFSE-labeled T cells were plated into a U-bottom 96-well plate with CD3/CD28 activation beads (Life Technologies). The plate was placed in a 37°C incubator while preparing the MDSCs. Spleen cells from two to three tumor-bearing mice per group were pooled at 12 days after inoculation. Neutrophil isolation was performed using the Mouse Neutrophil Enrichment Kit (Invitrogen). Isolated cells were washed with RPMI complete, counted, and 25,000 cells added to the stimulated T cells. MDSCs and T cells were cocultured for 3 days prior to analysis by flow cytometry.

Tumor immunofluorescence staining
Subcutaneous tumors were fixed with Z-fix (Fisher Scientific) and frozen in OCT (Fisher Scientific). Samples were cut into 8 µm sections using a cryostat (Leica) and analyzed by immunofluorescence using antibodies against Arginase 1 (Life Technologies) and Gr-1 (clone RB6-8C5, BioLegend). Sections were imaged using a Leica Thunder Imager Live Cell and 3D Assay microscope.

Statistics
Pairwise comparisons, group comparisons and correlations were performed using the Wilcoxon matched-pairs signed-rank test, one-way analysis of variance, and Pearson coefficient calculation, respectively. GraphPad Prism software was used for statistical analysis.

Data accessibility
All data are presented in the manuscript. Any additional data may be obtained from the investigators on request.

RESULTS
FOLFIRINOX is associated with an increase in circulating Gr-MDSCs
Peripheral blood was collected from patients with PDAC newly starting either FOLFIRINOX or gemcitabine/n(ab)paclitaxel chemotherapy regimens (figure 1A and table 1). Whole blood was analyzed by flow cytometry to quantify neutrophils and monocytes according to the gating strategy in online supplemental file 2.
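Before turning to the specific comparisons below, a minimal Python sketch of the three statistical tests named in the Statistics section; all variable names and values here are hypothetical illustrations, not data from this study.

from scipy import stats

# Hypothetical paired Gr-MDSC percentages (% of CD45+) at C1D1 vs C2/3D1
pre = [1.2, 0.8, 2.1, 1.5, 0.9, 1.7, 1.1, 2.4, 1.3]
post = [3.4, 2.9, 4.1, 2.2, 1.8, 3.6, 2.5, 5.0, 2.7]

# Wilcoxon matched-pairs signed-rank test for paired pre/post comparisons
w_stat, w_p = stats.wilcoxon(pre, post)

# Pearson correlation, e.g. serum G-CSF (pg/mL, hypothetical) vs post-treatment Gr-MDSC percentage
gcsf = [210, 180, 350, 150, 120, 280, 200, 400, 190]
r, r_p = stats.pearsonr(gcsf, post)

# One-way analysis of variance across more than two groups
healthy = [0.5, 0.6, 0.7, 0.4, 0.5, 0.6, 0.5, 0.8, 0.6]
f_stat, f_p = stats.f_oneway(pre, post, healthy)

print(f"Wilcoxon p={w_p:.4f}; Pearson r={r:.2f} (p={r_p:.4f}); ANOVA p={f_p:.4f}")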
No significant changes in neutrophils as a percentage of CD45+ cells were observed for either chemotherapy regimen (figure 1B). FOLFIRINOX treatment induced an early decrease in monocytes, although this population recovered in subsequent cycles (figure 1C). Absolute neutrophil counts increased significantly in patients receiving FOLFIRINOX, as expected from concurrent administration of pegfilgrastim in this population (figure 1D). To quantify MDSCs in whole blood, we analyzed CD15+CD16− immature neutrophils and found that this population increased during treatment with FOLFIRINOX but not with gemcitabine/n(ab)paclitaxel (online supplemental file 2). No significant changes were observed in any of the major lymphocyte populations (online supplemental file 2).

PBMCs were isolated by density gradient centrifugation, a process that retains lymphocytes, monocytes, Mo-MDSCs and Gr-MDSCs but not healthy granulocytes. From the PBMC fraction, we defined Mo-MDSCs as CD45+CD33+CD15−CD14+HLA-DRlow/neg and Gr-MDSCs as CD45+CD33+CD15+ (figure 1E and online supplemental file 2). We observed a significant increase in the percentage of Gr-MDSCs on treatment with FOLFIRINOX but not with gemcitabine/n(ab)paclitaxel (figure 1F). This difference was also apparent when comparing absolute abundance of Gr-MDSCs (figure 1G). Mo-MDSCs were less abundant overall, and their frequency as a percentage of total CD45+ cells or total monocytes was not significantly affected by either chemotherapy regimen (figure 1H,I). However, absolute abundance of Mo-MDSCs was significantly increased in patients after receiving FOLFIRINOX, consistent with higher overall white blood cell counts (figure 1J).

FOLFIRINOX-treated patient serum contains increased G-CSF
To determine whether systemic growth factors and chemokines associated with emergency myelopoiesis were affected by chemotherapy treatment, we measured a panel of cytokines and chemokines from patient serum by cytokine bead array. For this panel, we also included serum from five healthy age-matched people who donated two blood samples 28 days apart. The mean of healthy controls is indicated by a dashed line in each panel. Not surprisingly, all patients treated with FOLFIRINOX showed a marked increase in serum G-CSF, consistent with administration of pegfilgrastim (recombinant pegylated G-CSF) on day 3 of each FOLFIRINOX cycle as prophylaxis for chemotherapy-induced neutropenia (p=0.002) (figure 2A). After treatment, G-CSF levels were positively correlated (p=0.0309) with the percentage of Gr-MDSCs (figure 2B). Given that total neutrophil frequencies were not significantly altered in patients receiving FOLFIRINOX (figure 1B) and that rescue of neutrophil production was successfully achieved by pegfilgrastim treatment (figure 1D), we hypothesize that a higher fraction of these newly produced neutrophils were Gr-MDSCs.
We examined other circulating markers of inflammation and noted a significant increase in the IL-1 family member IL-18 (p<0.0001) on treatment with FOLFIRINOX but not gemcitabine/n(ab)paclitaxel (figure 2C). The innate inflammatory cytokines IL-1β, IL-6 and tumor necrosis factor-α were elevated in all patients with PDAC above levels found in similarly aged healthy controls but were not significantly affected by either chemotherapy regimen (figure 2D-F). The IL-1R antagonist (IL-1RA) was significantly increased in the FOLFIRINOX cohort (figure 2G). Gr-MDSC and neutrophil recruitment into tissues is mainly driven by CXCR2 binding CXCL1/2/5 or IL-8. The chemokines CXCL1 and CXCL5 were significantly decreased in the circulation of FOLFIRINOX-treated patients and reached levels similar to those of healthy controls, whereas IL-8 was not significantly affected by either chemotherapy regimen (figure 2H,I). Of the 71 cytokines and chemokines analyzed, three other factors were significantly increased by FOLFIRINOX treatment (online supplemental table 1). These were FLT3 ligand, TRAIL, and CTACK (CCL27), all of which increased on FOLFIRINOX administration, although concentrations remained within the normal range of healthy control blood donors and were not investigated further (online supplemental file 2). CCL8, a monocyte-recruiting chemokine, and PDGF-AA/BB were significantly decreased with FOLFIRINOX treatment but remained within the range of healthy control values (online supplemental file 2).

FOLFIRINOX-treated patient serum increases suppressive capacity of Mo-MDSCs in vitro
Mo-MDSCs can be induced in vitro by differentiating healthy donor monocytes with recombinant GM-CSF and IL-6 (figure 3A).45 These induced Mo-MDSCs (iMDSCs) adopt a large spreading shape that is morphologically distinct from control monocytes cultured for a week with healthy serum (figure 3B). To develop a functional assay to measure systemic immune suppression in patients with PDAC, we adapted a method previously used to test systemic immune suppression in patients with sepsis or COVID-19.46 47 On coculture at a ratio of 1:1 with CFSE-labeled T cells activated with anti-CD3/CD28, iMDSCs suppress T-cell proliferation, whereas monocytes differentiated with healthy serum and M-CSF are not suppressive (figure 3C). Healthy donor serum with maintenance levels of M-CSF (75 pg/mL) does not induce Mo-MDSCs due to low concentrations of growth factors. However, in patients with acute or chronic inflammation, high levels of myeloid-differentiation factors in serum can induce formation of Mo-MDSCs in vitro (figure 3C). To test whether serum from FOLFIRINOX-treated patients could induce the differentiation of Mo-MDSCs in vitro, we isolated CD14+ cells from a healthy donor, cultured them for 1 week in medium containing maintenance levels of M-CSF with 20% patient serum from pretreatment or post-treatment time points, and performed a T-cell suppression assay. Post-treatment serum induced significantly more suppressive Mo-MDSCs than baseline serum, suggesting that treatment induced changes in serum factors capable of increasing the suppressive capacity of Mo-MDSCs (figure 3C,D).
G-CSF is sufficient to induce human Mo-MDSCs in vitro
Neither GM-CSF nor IL-6, the two recombinant cytokines used as positive controls in our Mo-MDSC differentiation assay, was affected by FOLFIRINOX treatment (figure 2E and online supplemental table 1). GM-CSF was undetectable across most of the patients in our cohort. We therefore hypothesized that other serum growth factors, such as G-CSF, could be responsible for the increased differentiation capacity of Mo-MDSCs observed in serum from FOLFIRINOX-treated patients. Human monocytes express the G-CSF receptor.48 To test for a direct effect of G-CSF on Mo-MDSC formation, we cultured magnetic bead-purified CD14+ cells obtained from healthy donor PBMCs with 200 pg/mL of G-CSF, which is similar to G-CSF concentrations measured in post-FOLFIRINOX patient serum. After 7 days of differentiation, we cocultured our putative Mo-MDSCs with CFSE-labeled T cells and measured proliferation on activation with anti-CD3/CD28. G-CSF alone was sufficient to induce Mo-MDSCs that suppressed T-cell proliferation similarly to the positive control of GM-CSF/IL-6 (figure 4A). This confirms the ability of G-CSF to directly induce Mo-MDSC formation in vitro. We further analyzed these induced Mo-MDSCs by flow cytometry and found that in vitro differentiated Mo-MDSCs expressed CD33 and lower levels of HLA-DR compared with monocytes cultured with healthy serum (figure 4B). G-CSF reduced HLA-DR expression in in vitro differentiated Mo-MDSCs in a dose-dependent fashion (figure 4C), again suggesting that G-CSF is sufficient to induce Mo-MDSCs.

G-CSF rescue of chemotherapy-induced neutropenia in mice induces Gr-MDSCs that are more suppressive than ones induced by the cancer alone
Although we demonstrated that FOLFIRINOX-treated patient serum or recombinant G-CSF is sufficient to induce Mo-MDSCs in vitro, we could not formally demonstrate that pegfilgrastim is responsible for the increase in Gr-MDSCs in vivo, as all patients receiving pegfilgrastim also received FOLFIRINOX. We therefore developed a mouse model of chemotherapy-induced neutropenia and G-CSF rescue. Immune-competent C57BL/6 mice were inoculated subcutaneously with the poorly immunogenic pancreatic cancer cell line 6694c2, originally derived from a spontaneous tumor from a mouse expressing oncogenic Kras G12D and monoallelic loss of p53 in pancreatic acinar cells.20
Human G-CSF binds to the murine G-CSF receptor; therefore, we treated mice with clinical-grade pegfilgrastim (Neulasta) and/or FOLFIRINOX according to the schedule shown (figure 5A). Mice were euthanized on day 12, at which time point tumor size was not significantly different among the treatment groups (figure 5B). Examination of the bone marrow showed an increase in production of CD11b+ cells in tumor-bearing mice compared with mice with no tumors (figure 5C). This increase was further augmented by G-CSF administration, indicating that our model recapitulated the enhanced myelopoiesis observed in the setting of cancer and growth factor supplementation. Similar to our finding in patients with PDAC receiving FOLFIRINOX, frequencies of Mo-MDSCs in blood (defined in mice as CD11b+Ly6C+) were not affected by any of the treatments (figure 5D,E and online supplemental file 2). However, circulating neutrophils, defined as CD11b+Ly6G+, were dramatically affected (figure 5D,E). G-CSF treated mice showed a striking increase in peripheral blood neutrophils. Conversely, neutrophils were nearly absent in blood from mice receiving FOLFIRINOX alone but were restored to normal levels by combination treatment of FOLFIRINOX and G-CSF. The mean fluorescence intensity of Ly6G also shifted with treatment, consistent with rapidly formed neutrophils expressing lower levels of this neutrophil differentiation marker (figure 5D). FOLFIRINOX treatment reduced eosinophil and neutrophil populations significantly in blood and spleen, as previously reported 49 (figure 5E and online supplemental file 2). Neutrophils, but not eosinophils, were restored with G-CSF treatment, similar to the effects of G-CSF in humans. To determine whether G-CSF-induced neutrophils were qualitatively different from those present in tumor-bearing mice, we isolated CD11b+ cells from the spleens of mice from each treatment group and examined their immunosuppressive potential ex vivo by coculturing CD11b+ cells with CFSE-labeled T cells at a 1:1 ratio with anti-CD3/CD28 stimulation (figure 5F). Neutrophils from mice treated with G-CSF were more capable of suppressing T-cell proliferation than neutrophils from tumor-bearing mice or mice treated with FOLFIRINOX alone (figure 5F). Thus, G-CSF administration in mice is sufficient to induce an increase in systemic Gr-MDSCs with immunosuppressive capacity. To determine whether the systemic increase in Gr-MDSCs resulted in more Gr-MDSCs in the tumor, we performed immunofluorescence staining for the neutrophil marker Gr-1 and arginase 1 (Arg1), which is highly expressed in both Gr-MDSCs and Mo-MDSCs. We observed a significant increase in Gr-MDSCs (Arg1+Gr1+) inside the tumor in the FOLFIRINOX+G-CSF treated group compared with all other treatment regimens (figure 5G). Collectively, these data suggest that G-CSF is sufficient in mice to induce systemic neutrophilia containing a higher fraction of Gr-MDSCs than would be achieved by either normal hematopoiesis or tumor-induced emergency hematopoiesis. These systemic Gr-MDSCs can accumulate in the tumor microenvironment, thereby potentially contributing to both local and systemic immune suppression.

DISCUSSION
Febrile neutropenia is a potentially fatal complication of cancer care.51 52 Pegylated G-CSF provides longer lasting growth factor support and can be self-administered at home after completing FOLFIRINOX infusion.39 50
Nevertheless, these agents come at a cost. Here we report a negative impact of G-CSF on induction of systemic immune suppression in both humans and mice. Current treatment for PDAC does not rely on invoking antitumor immunity, and thus the generation of MDSCs may be unimportant for patients receiving standard of care FOLFIRINOX. However, we are hopeful that future treatments for PDAC will invoke antitumor T-cell responses. While we do not advocate withholding G-CSF from patients with neutropenia, we do suggest being aware of the negative impact of G-CSF when designing clinical trials aimed at inducing T cell-based antitumor immunity. We further suggest that gemcitabine/n(ab)paclitaxel, which less frequently requires G-CSF supportive care, may be preferable for patients with PDAC who are also receiving checkpoint inhibitor therapy for microsatellite instable or high tumor mutational burden disease.

Although we conducted this study in humans with PDAC and mouse PDAC models, our results are likely applicable across cancer types. Most tellingly, we show that addition of G-CSF to healthy human serum is capable of inducing functional Mo-MDSCs that suppress T-cell responses in vitro. These results indicate that supraphysiologic G-CSF, independent of cancer status, can induce systemic immune suppression. We were only able to functionally assess the suppressive capacity of Mo-MDSCs in vitro due to technical limitations in culturing of human neutrophils. However, we note that the frequency of Gr-MDSCs increases significantly with FOLFIRINOX treatment and is positively correlated with serum levels of G-CSF. G-CSF is widely used for neutropenia in patients with cancer and is given across multiple hematologic and solid tumor types.50 In a similar study of patients with breast cancer, peripheral blood MDSCs were found to increase on treatment with doxorubicin, cyclophosphamide and pegfilgrastim (G-CSF).53 G-CSF suppression of T-cell responses is occasionally desirable. Human CD4 T cells from G-CSF mobilized blood are less capable of allogenic responses in mixed lymphocyte cultures,54 and similarly G-CSF administration in a mouse model can reduce graft versus host disease symptoms.55 Intriguingly, both serum G-CSF and CD14+CD15+ Gr-MDSCs were increased in the normal course of pregnancy, suggesting that chronic G-CSF production may have evolved for maternal-fetal tolerance.56

There are several limitations to our study. First, all patients receiving G-CSF also received FOLFIRINOX, whereas all patients not receiving G-CSF were treated with gemcitabine/n(ab)paclitaxel. The choice of treatment regimen may have selected for older or sicker patients in the group receiving gemcitabine/n(ab)paclitaxel. Although we demonstrated that G-CSF use correlated with abundance of Gr-MDSCs and was sufficient on its own to induce Mo-MDSCs, we cannot conclude that it is the only factor contributing to the observed difference in MDSCs between patients receiving two different chemotherapy regimens. Human neutrophils are not amenable to differentiation after they exit the bone marrow; thus, we were unable to directly assess the effects of G-CSF on human Gr-MDSC formation in vitro as we did for Mo-MDSCs. Given the high rate of neutropenia observed with FOLFIRINOX, there are no patients at our center who receive FOLFIRINOX without G-CSF. Retrospective analysis from other centers indicates that prophylactic G-CSF is associated with fewer FOLFIRINOX dose reductions and a trending benefit in progression-free survival.57
Ideally, we would have tested paired blood samples from patients receiving gemcitabine/n(ab)paclitaxel with G-CSF compared with patients receiving gemcitabine/n(ab)paclitaxel without growth factor support; however, G-CSF is not commonly given with gemcitabine/n(ab)paclitaxel at our center, and thus this would require a much larger clinical cohort. Second, all patients in our study received pegfilgrastim, a version of G-CSF with polyethylene glycol additions for half-life extension. It is possible that filgrastim, which has a shorter half-life, would have different effects on production of Gr-MDSCs. Finally, we do not show a clear link between G-CSF usage and antitumor T-cell responses. T-cell recognition of PDAC has been difficult to demonstrate outside of the 1% of patients with MSI-high tumors.41 We therefore conclude that while G-CSF does induce systemic immune suppression, this is still a theoretical concern for patients with PDAC. Our results may be more relevant to patients with immunotherapy-responsive tumor types.

Human MDSCs have been difficult to study. Here we demonstrate that pegfilgrastim induces MDSCs in humans and present a novel mouse model of chemotherapy-induced neutropenia with G-CSF rescue. In humans we show that even though PDAC induces Gr-MDSCs in treatment-naïve patients, the absolute

Figure 1 Patients receiving FOLFIRINOX show an increase in circulating Gr-MDSCs. (A) Timing of treatments received by the FOLFIRINOX group and the gemcitabine/n(ab)paclitaxel group of patients with metastatic pancreatic cancer. (B,C) Whole blood collected at cycle 1 day 1 (C1D1) of treatment and either cycle 2 day 1 or cycle 3 day 1 (C2/3D1) of FOLFIRINOX treatment or C1D15 of gemcitabine/n(ab)paclitaxel treatment was stained with the antibodies indicated in online supplemental file 2 and analyzed by flow cytometry. N=9 FOLFIRINOX; N=7 gem/n(ab)paclitaxel. Percentage of CD15+ neutrophils (B) and CD14+ monocytes (C) out of total CD45+ immune cells. (D) Absolute neutrophil count (ANC) values were obtained from patient records corresponding to the same time points analyzed in B and C. Patients were included even if no corresponding flow cytometry was performed on whole blood. N=18 FOLFIRINOX; N=8 gem/n(ab)paclitaxel. (E) Representative flow cytometry plot showing the Gr-MDSC gating strategy. (F-J) Peripheral blood mononuclear cells were collected by density gradient centrifugation using Ficoll and analyzed by flow cytometry according to the gating scheme shown in online supplemental file 2. (F) Percentage of Gr-MDSCs (CD45+CD33+CD15+). N=19 FOLFIRINOX; N=8 gem/n(ab)paclitaxel. (G) Absolute counts of Gr-MDSCs from the PBMC fraction were calculated from patient samples where whole blood neutrophil frequencies and ANC values were known. N=9 FOLFIRINOX; N=6 gem/n(ab)paclitaxel. (H) Mo-MDSCs (CD45+CD33+CD15−CD14+) out of total CD45+ immune cells. N=19 FOLFIRINOX; N=8 gem/n(ab)paclitaxel. (I) Mo-MDSCs out of total CD14+ cells. N=19 FOLFIRINOX; N=8 gem/n(ab)paclitaxel. (J) Absolute counts of Mo-MDSCs from the PBMC fraction were calculated from patient samples where whole blood neutrophil frequencies and ANC values were known. N=9 FOLFIRINOX; N=6 gem/n(ab)paclitaxel. Wilcoxon matched-pairs signed-rank test was used throughout. Gr-MDSCs, granulocytic MDSCs; MDSCs, myeloid-derived suppressor cells; Mo-MDSCs, monocytic MDSCs; PBMC, peripheral blood mononuclear cell.
Figure 2 G-CSF and IL-18 are increased in patients on FOLFIRINOX treatment. Serum from patients receiving FOLFIRINOX, gemcitabine/n(ab)paclitaxel, or healthy donor serum was analyzed for the indicated cytokines and chemokines by cytokine bead array. Samples were collected at cycle 1 day 1 (C1D1) of treatment and either cycle 2 day 1 or cycle 3 day 1 (C2/3D1) of FOLFIRINOX treatment or C1D15 of gemcitabine/n(ab)paclitaxel treatment. (A) Circulating levels of G-CSF. N=17 FOLFIRINOX and N=6 gemcitabine/n(ab)paclitaxel. (B) Correlation between serum G-CSF from FOLFIRINOX-treated patients and the percentage of Gr-MDSCs at C2D1 or C3D1. N=12 patients had paired data for analysis. Pearson's correlation was used for statistical analysis. (C-J) Circulating levels of IL-18, IL-1β, IL-6, TNFα, IL-1RA, IL-8, CXCL1 and CXCL5. Mean values of the healthy control samples are indicated with a dashed line. N=17 FOLFIRINOX and N=6 gemcitabine/n(ab)paclitaxel. Wilcoxon matched-pairs signed-rank test was used throughout. CXCL, C-X-C motif chemokine; G-CSF, granulocyte colony-stimulating factor; Gr-MDSCs, granulocytic MDSCs; IL, interleukin; IL-1RA, IL-1R antagonist; TNF, tumor necrosis factor.

Figure 3 Serum from FOLFIRINOX-treated patients can induce Mo-MDSCs in vitro. (A) Diagram of the serum suppression assay. Healthy monocytes are differentiated for 7 days in media containing 20% healthy donor or patient serum or recombinant IL-6/GM-CSF and then cocultured with CFSE-labeled T cells from a healthy donor and anti-CD3/CD28 beads. (B) Brightfield images (×40) of monocytes cultured for 7 days with either healthy donor serum alone or healthy donor serum plus 10 ng each of IL-6 and GM-CSF (iMDSCs). (C) T-cell proliferation after 72 hours of culture with the indicated Mo-MDSC populations was measured by flow cytometry for CFSE dye dilution. Plots are labeled with the source of the serum used for Mo-MDSC differentiation. The proliferation index is denoted in the corner of each representative flow cytometry plot. Representative of N=16 paired patient samples. (D) Mo-MDSCs were differentiated from healthy monocytes as shown in A using serum of FOLFIRINOX-treated patients from cycle 1 day 1 (C1D1) and either C2D1 or C3D1 (C2/3D1). Suppressive capacity of the in vitro differentiated Mo-MDSCs was assessed by coculture with activated healthy donor T cells and measurement of T-cell proliferation after 72 hours by flow cytometry for CFSE dye dilution. N=16 FOLFIRINOX. Wilcoxon matched-pairs signed-rank test was used for statistical analysis. GM-CSF, granulocyte macrophage colony-stimulating factor; IL, interleukin; iMDSCs, induced Mo-MDSCs; MDSCs, myeloid-derived suppressor cells; Mo-MDSCs, monocytic MDSCs.
Figure 4 Recombinant G-CSF is sufficient to induce suppressive, MHC class II-low Mo-MDSCs in vitro. (A) Monocytes were obtained from healthy donor blood using positive selection on CD14+ magnetic beads. Monocytes were then cultured in vitro with 75 pg/mL M-CSF and the indicated concentrations of G-CSF or GM-CSF/IL-6 for 7 days. Induced Mo-MDSCs were then cultured with CFSE-labeled T cells for a 3-day coculture at a 1:1 ratio, and T-cell proliferation was assessed by flow cytometry. Representative of three independent experiments. (B) Representative flow cytometry plots showing the gating strategy for MDSCs induced in vitro. MHC-II (HLA-DR) expression on induced MDSCs is typically low. (C) Flow plot and quantification of HLA-DR mean fluorescence intensity of CD33+ SSC-A high cells after monocytes were cultured for 7 days with 75 pg/mL M-CSF with the addition of G-CSF (200 pg/mL or 10 ng/mL) or 10 ng/mL each of GM-CSF and IL-6. Representative of four independent experiments. GM-CSF, granulocyte macrophage CSF; IL, interleukin; MDSCs, myeloid-derived suppressor cells; MHC, major histocompatibility complex; Mo-MDSCs, monocytic MDSCs.

Table 1 Patient demographics and characteristics
A Study of Ion-Exchange Chromatography in an Expanded Bed for Bovine Albumin Recovery

In the present work, the effect of bed expansion on BSA adsorption on Amberlite IRA 410 ion-exchange resin was studied. The hydrodynamic behavior of an expanded bed adsorption column under the effects of biomolecule and salt addition and of temperature was studied to optimize the conditions for BSA recovery on ion-exchange resin. Residence time distribution showed that HETP, axial dispersion and the Peclet number increased with temperature and bed height, bed voidage and linear velocity. The binding capacity of the resin increased with bed height. The Amberlite IRA 410 ion-exchange resin showed an affinity for BSA, with a recovery yield of 78.36% of total protein.

INTRODUCTION
Expanded bed adsorption (EBA) is a downstream process developed from protein chromatography, but differs in that the chromatographic adsorbent bed is fluidized. It permits crude feed to enter the chromatographic column without an initial treatment to eliminate the suspended biological material and, as the bed expands, it increases the adsorbent surface contact, making interaction with the targeted molecules more effective (Amersham Pharmacia Biotech, 1997; Fernandez-Lahore et al., 2001; Roy et al., 1999). In the present work, studies on expanded bed adsorption behavior were made to achieve a better understanding of the effects of adsorbent type and size (Dainiak et al., 2002; Yamamoto et al., 2001), bed height, linear velocity (Mullick and Flickinger, 1999), and of fluidization and elution solutions on residence time distribution (RTD) (Fernandez-Lahore et al., 2001; Santos, 2001), for application in the recovery of important biomolecules.

Dainiak et al. (2002) proposed a new technique for the treatment of anion exchangers for adsorption of shikimic acid directly from the cell-containing fermentation broth. Amberlite 401 and 458 anionic exchange resins were treated with a hydrophilic polymer, poly(acrylic acid) (PAA), to form PAA-Amberlite 401 and PAA-Amberlite 458. The binding capacity for pure shikimic acid was about 81 mg per mL of adsorbent for both cross-linked PAA-Amberlite and native Amberlite in the fluidized mode of column operation. Binding capacity dropped to 17 and 15 mg/mL, respectively, when filtered fermentation broth was used, and to about 10 mg/mL for cross-linked PAA-Amberlite when the fermentation broth containing cells was used directly. Native Amberlite cannot be used for the direct adsorption of shikimic acid due to the immediate clogging of the column and the collapse of the expanded bed. The cross-linked PAA-Amberlite was used repeatedly for direct adsorption of shikimic acid from the industrial fermentation broth.

Human serum albumin (HSA) from very dense Saccharomyces cerevisiae suspensions was recovered by expanded bed adsorption by Mullick and Flickinger (1999). The adsorption of proteins was on mixed-mode fluoride-modified zirconia (FmZr) particles (38 to 75 µm, surface area of 29 m2/g and density of 2.8 g/cm3). Because of the high density of the porous zirconia particles, HSA (4 mg/mL) can be adsorbed in a FmZr bed expanded to three times its settled height. The expanded bed adsorption of any protein from a suspension containing more than 50 g DCW/L cells had not been previously reported. The FmZr bed expansion characteristics were well represented by the Richardson-Zaki correlation with a particle terminal velocity of 3.1 mm/s and a bed expansion index of 5.4 (Mullick and Flickinger, 1999). Expanded bed hydrodynamics were studied as
a function of bed expansion using RTD with sodium nitrite as the tracer. The authors observed that the protein binding capacity at 5% breakthrough decreased from 22 mg HSA/mL settled bed void volume for 20 g DCW/L yeast to 15 mg HSA/mL settled bed void volume for 40 g DCW/L yeast and remained unchanged for the higher yeast concentrations (60 to 100 g DCW/L). However, the equilibrium binding capacity decreased monotonically as a function of yeast concentration (20 to 100 g DCW/L), and the binding capacity at 100 g DCW/L yeast was five times lower than that at 20 g DCW/L yeast. The lower equilibrium capacity at the high cell concentrations resulted from the adsorption of cells on the particle surfaces, restricting the access of HSA to the intraparticle surface area. To remove the adsorbed HSA and yeast from the zirconia particles, 1500 to 2000 column volumes of 0.25 M NaOH were required. No significant effect on chromatographic performance was observed after this treatment (Mullick and Flickinger, 1999).

Fernandez-Lahore et al. (2001) examined the suitability of ion-selective electrodes (ISE) for the determination of RTD in turbid, cell-containing fluids. The feedstock compatibility of ISE is better than that of other tracer-sensing devices and allows a better study of bed system hydrodynamics under relevant operating conditions. Within the linear range of the corresponding ISE-tracer pair, both the rate and pH are normally measured during the expanded bed adsorption (EBA) of proteins. Analyzing the RTD obtained after a perfect ion-tracer pulse in terms of the PDE model (axial dispersion with plug-flow exchange of mass with stagnant zones) gave a quantitative description of the underlying hydrodynamic situation during EBA processing. According to the authors, the data provided a powerful tool for predicting the overall process of adsorption with a defined feedstock type and composition. The best results were obtained using intact yeast cell suspensions at different biomass contents (up to 7.5% wet weight) and buffer conductivities (5-12 mS) in an EBA column filled with the adsorbent Streamline QXL as the fluidized phase.

In the expanded bed, sedimentation and particle fluidization must be balanced in order to obtain the optimal conditions of system operation. Richardson and Zaki (1954) studied several materials and obtained Equation 1 for the relationship between the fluid velocity (U) and the end velocity of the particle (U_T) with the bed voidage (ε):

U / U_T = ε^n (1)

where n is the Richardson-Zaki index or expansion index and is a function of the terminal Reynolds number (Re_t). For the Stokes region, Re_p < 0.1, the terminal velocity of an isolated particle (U_T) is given as:

U_T = d_p^2 (ρ_P − ρ_L) g / (18 µ) (3)

where d_p and g are particle diameter and gravity acceleration, and ρ_P, ρ_L and µ are particle and liquid specific mass and liquid viscosity, respectively. The particle Reynolds number, Re_p, is given as:

Re_p = ρ_L U d_p / µ (4)

With the linearization of Equation 1, it is possible to obtain n experimentally with Equation 5 (Richardson and Zaki, 1954):

ln U = n ln ε + ln U_T (5)

The aim of the present work was to study the effect of bed expansion on BSA adsorption on Amberlite IRA 410 ion-exchange resin. The effects of the addition of biomolecules and salts and of temperature on the hydrodynamic behavior of an expanded-bed adsorption column were also studied to obtain the optimal conditions for BSA recovery on ion-exchange resin.
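As a minimal numerical sketch of Equations 1, 3, 4 and 5 in Python (the particle and fluid properties and the U-ε pairs below are placeholder values, not measurements from this study):

import numpy as np

# Placeholder particle and fluid properties (not this study's measured values)
d_p = 100e-6      # particle diameter, m
rho_p = 1100.0    # particle specific mass, kg/m^3
rho_l = 1000.0    # liquid specific mass, kg/m^3
mu = 1.0e-3       # liquid viscosity, Pa.s
g = 9.81          # gravity acceleration, m/s^2

# Equation 3: Stokes terminal velocity of an isolated particle
u_t = d_p**2 * (rho_p - rho_l) * g / (18.0 * mu)

# Equation 4: particle Reynolds number at the terminal velocity (Stokes region if < 0.1)
re_t = rho_l * u_t * d_p / mu

# Equation 5: fit n and U_T from (U, voidage) pairs by linear regression of ln U on ln eps
U = np.array([0.00005, 0.0001, 0.0002, 0.0003, 0.0005])  # m/s, hypothetical
eps = np.array([0.45, 0.55, 0.65, 0.75, 0.85])           # hypothetical bed voidage
n, ln_ut = np.polyfit(np.log(eps), np.log(U), 1)         # slope = n, intercept = ln U_T

print(f"U_T (Stokes) = {u_t:.2e} m/s, Re_t = {re_t:.3f}")
print(f"fitted n = {n:.2f}, experimental U_T = {np.exp(ln_ut):.2e} m/s")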
Fluidizers
Distilled water (H2O), 0.07 M phosphate buffer at pH 6 (Tp) and maize malt at 2% (MM) in 0.07 M phosphate buffer at pH 6 were used. Table 1 shows the physical properties of the fluidizers. The properties of water were as described by Streeter (1977); the properties of the other fluids were measured by viscometry and by weighing a 1 mL volume of fluid. A pH of 6 was chosen as it fell midway between the optimal pH values for α- and β-amylases, the molecules targeted for future work with maize malt.

EBA column
Figure 1 shows a scheme of the EBA column used in the present work. The glass column was 1 x 30 cm with an adjustable piston, a feed flow inlet at the bottom and a product flow outlet at the top. Sixty-mesh plates at the feed inlet and at the product outlet were used to avoid the loss of adsorbent particles. A ruler was placed at the side of the column for the measurement of bed height.

Maize malt preparation
Maize seeds were selected, weighed and washed. Seeds absorbed between 40 and 45% of moisture and germinated in the laboratory at room temperature and pressure. The maize malt was dried at 55°C for 5 h and stored at 5°C (Biazus et al., 2005 and 2006; Ferreira et al., 2007; Malavasi and Malavasi, 2004; Severo Júnior et al., 2007).

BSA solution
A 250 mg/L BSA solution was prepared with 0.07 M phosphate buffer at pH 6.0.

Hydrodynamic study
Two g of Amberlite IRA 410 ion-exchange resin was used in all the assays; this gives a bed height of about 4 cm. The fluidizer was fed from the bottom of the column. Linear velocity was between 0.0004 and 0.008 m/s and bed height was measured with the ruler at the side of the column. The Richardson-Zaki index and experimental end velocity (U_TExp) were obtained with Equation 5 and the calculated end velocity (U_TCalc) was obtained with Equation 3 (Biazus et al., 2006; Chang et al., 1994; Fernandez-Lahore et al., 2001; Santos, 2001; Richardson and Zaki, 1954).

Determination of bed voidage (ε)
Bed voidage was obtained by substitution of data on the specific mass (ρ_P) and mass (m_P) of the adsorbent particles, the area of the cross section of the column (A_T) and the bed height (H) into the following equation:

ε = 1 − V_P / (A_T H) = 1 − m_P / (ρ_P A_T H) (6)

where V_P is the particle volume.

Study of residence time distribution (RTD)
Phosphate buffer (0.07 M, pH 6.0) was used as the fluidizer. The particle bed was fluidized until the bed height of the study was achieved (approximately two, three and four times the initial bed height). Five milliliters of the tracer (glucose solution) was injected at the bottom of the column (below the particle bed). Samples were collected at the column outlet from time to time. Glucose concentration was measured in all the samples by DNS (Reguly, 1996). The RTD curves were obtained by the pulse method. Figure 2 shows the RTD determination. The mean residence time (t) and the standard deviation (σ) are substituted into Equation 7 to obtain the theoretical plate number (N) and into Equation 8 to obtain the height equivalent to a theoretical plate (HETP):

N = (t / σ)^2 (7)

HETP = H / N (8)
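A minimal sketch of the moment analysis behind Equations 7 and 8, using a hypothetical sampled tracer curve (all times, concentrations and the bed height below are illustrative, not measured values from this study):

import numpy as np

# Hypothetical tracer samples from the column outlet
t = np.array([0, 30, 60, 90, 120, 150, 180, 240, 300], dtype=float)  # s
c = np.array([0, 0.1, 0.6, 1.0, 0.8, 0.5, 0.2, 0.05, 0.0])           # glucose conc., arbitrary units

E = c / np.trapz(c, t)                     # normalized RTD curve E(t)
t_mean = np.trapz(t * E, t)                # mean residence time
sigma2 = np.trapz((t - t_mean)**2 * E, t)  # variance sigma^2

H = 0.12                                   # expanded bed height, m (placeholder)
N = t_mean**2 / sigma2                     # Equation 7: theoretical plate number
HETP = H / N                               # Equation 8
print(f"t_mean = {t_mean:.1f} s, N = {N:.1f}, HETP = {HETP * 100:.2f} cm")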
BSA breakthrough curves
Batch adsorption was carried out at 22°C. The adsorbent was incubated with 25 to 50 mL of 0.07 M phosphate buffer containing different BSA contents. The protein concentration was measured (Bradford, 1976) from time to time, until a constant value was obtained. The Langmuir isotherm (Equation 11) was fit to the data:

Q_eq = Q_B C_eq / (K + C_eq) (11)

where Q_eq is the BSA-resin equilibrium binding capacity (mg/g or mg/mL), Q_B is the resin maximum binding capacity (mg/g or mg/mL), C_eq is the equilibrium concentration (mg/L) and K is the dissociation constant (mg/L). Continuous adsorptions were carried out in an expanded bed at bed heights of approximately 8, 12 and 16 cm. For this purpose, 5 mL of BSA solution was injected into the column and the protein concentration was measured (Bradford, 1976) from time to time, until breakthrough was obtained (Amersham Pharmacia Biotech, 1997; Dainiak et al., 2002; Santos, 2001). The maximum binding capacities of the resin were determined between 10 and 90% of the area of the breakthrough curve. Total protein concentration was determined according to the dye binding method of Bradford (1976) with BSA as the protein standard.

RESULTS AND DISCUSSION
In Figures 3 and 4, the curves of ln U versus ln ε are shown at 22°C and 28°C, respectively. It could be observed that the multiple correlations were optimal (about 1.0), suggesting that the Richardson-Zaki equation was the best empirical model for predicting particle fluidization. There was a reduction in the value of n when salt (phosphate) and biological material (maize malt) were added to the distilled water system. It was observed that increasing temperature reduced n, which was due to changes in the fluid properties with temperature and with adsorption and diffusion into the particle pores, in the following order: MM > Tp > H2O at a temperature of 22°C ± 2°C and MM < Tp < H2O at a temperature of 28°C ± 2°C. For the maize malt (in buffer) system at low bed voidage, there was a large effect of friction on the degree of expansion from the particle-particle, particle-liquid and particle-biomolecule interactions, which increased the linear velocity of the fluidizer needed to maintain the bed voidage at the same level as that in the salt system. For high voidage, there was an inversion of hydrodynamic behavior due to the decrease in the strength of the friction and the approach of the linear velocity to the end velocity of the adsorbent particles. This strength was higher for the maize malt fluidizer than for the salt fluidizer (Biazus et al., 2006; Dainiak et al., 2002; Fernandez-Lahore et al., 2001; Mullick and Flickinger, 1999; Santos, 2001; Richardson and Zaki, 1954).

Table 2 shows the experimental (U_TExp) and calculated (U_TCalc) particle end velocities for the fluidizers and temperatures studied. The errors between the velocities were low and could be attributed to the experimental measurement of viscosity and of specific mass; however, this does not reduce the reliability of the data shown in the table, since errors of up to 25% are commonly accepted in engineering processes. The low errors suggested that the Richardson-Zaki equation was the best empirical model to predict the hydrodynamic behavior of adsorbent particles in expanded bed systems, even when the fluidizer contained a larger amount of biological material in suspension (Biazus et al., 2006; Santos, 2001). The fluidized particle end velocity decreased as the salt and biological material were added, and the specific mass (ρ) and the viscosity (µ) of the fluidizers were higher
than those of the water fluidizer. Figures 5 and 6 show the RTD curves for the glucose tracer as it passed through the column bed, which illustrated that the glucose solution could be used as a tracer (Biazus et al., 2006; Fernandez-Lahore et al., 2001; Yamamoto et al., 2001). Table 3 shows the RTD results after substitution of the data into Equations 6, 7, 8 and 9, according to the methodology used by Biazus et al. (2006), Fernandez-Lahore et al. (2001), Santos (2001) and Yamamoto et al. (2001). It was observed that the liquid axial dispersion increased with the height and bed voidage, linear velocity and temperature.

There was a ten-fold increase when the initial bed height doubled and a 30-fold increase when the initial bed height quadrupled. This facilitated the flow of biological material into the particle bed and increased the contact between the biological material and the adsorbent particles, so that it was possible to feed the crude material from the fermentation tank directly, thereby avoiding fouling and reducing the costs of pretreatment and prepurification, which are the main chromatographic problems (Amersham Pharmacia Biotech, 1997; Roy et al., 1999). The Peclet number (Pe) is the parameter that measures the mass transfer flow in the system. It increased with the bed height, doubling at the maximum bed height, and facilitated mass transfer in the system, unlike what occurs in fixed bed systems. The HETP changed with all the parameters studied.

Table 4 shows that Q_B and the recovery yield increased with the bed height, as reported by Roy et al. (1999), Dainiak et al. (2002) and Lanckriet and Middelberg (2004). The Q_B found for Amberlite IRA 410 in this work was higher than those for cross-linked PAA-Amberlite IRA 458 and 401 (Dainiak et al., 2002), the zirconia adsorbent (Mullick and Flickinger, 1999) and other adsorbents (Amersham Pharmacia Biotech, 1997; Chang et al., 1994; Kalil, 2000; Lanckriet and Middelberg, 2004; Santos, 2001). Increasing residence time by decreasing flow velocity could have a negative effect on bed stability due to a decrease in the expansion. However, it could be possible to decrease the flow velocity without affecting the bed height, since viscosity is usually significantly higher in the feed than in the buffer. How significant this effect is depends on the resistance to mass transfer in the system. It may be significant for a protein with high molecular weight, especially if the viscosity in the feed is high and slows molecular diffusion (Amersham Pharmacia Biotech, 1997; Chang et al., 1994). Figure 9 shows the maximum binding capacity changing with the bed height.

CONCLUSIONS
For a salt-malt system, at low bed voidage friction had a large effect on the degree of expansion from the particle-particle, particle-liquid and particle-biomolecule interactions, with the need to increase the linear velocity of the fluid to maintain the bed voidage at the same level as in a salt system. At high voidage, there was an inversion of hydrodynamic behavior due to the decrease in the strength of the friction force and the approach of the linear velocity to the end velocity; the effect was higher for the salt-malt fluidizer than for the salt fluidizer.
The Richardson and Zaki model showed a good fit to the experimental data, with a relative error between 12 and 15%. The RTD showed that HETP, axial dispersion and the Peclet number increased with the temperature and bed height, bed voidage and linear velocity. The binding capacity of the resin increased with the bed height. The Amberlite IRA 410 ion-exchange resin showed an affinity for BSA, with a recovery yield of 78.36% of total protein.

Figure 9 - Effect of bed height on BSA breakthrough capacity at 22°C ± 2°C.
Table 2 - End velocity of particles.
Table 3 - Experimental results of RTD.
Table 4 - Effect of bed height on maximum binding capacity and recovery yield.
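As a closing sketch of the Langmuir fit of Equation 11 to batch equilibrium data, using scipy (the equilibrium data points and starting guesses below are hypothetical, not the study's measurements):

import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_b, k):
    # Equation 11: Q_eq = Q_B * C_eq / (K + C_eq)
    return q_b * c_eq / (k + c_eq)

# Hypothetical batch equilibrium data
c_eq = np.array([10, 25, 50, 100, 150, 200], dtype=float)  # mg/L
q_eq = np.array([8, 16, 25, 34, 38, 41], dtype=float)      # mg/g

(q_b, k), _ = curve_fit(langmuir, c_eq, q_eq, p0=[50.0, 50.0])
print(f"Q_B = {q_b:.1f} mg/g, K = {k:.1f} mg/L")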
Room for one more? A review of the literature on 'inappropriate' admissions to hospital for older people in the English NHS

This paper reports the findings of a review of the literature on emergency admissions to hospital for older people in the UK, undertaken between May and June 2014 at the Health Services Management Centre, University of Birmingham. This review sought to explore: the rate of in/appropriate emergency admissions of older people in the UK; the way this is defined in the literature; solutions proposed to reduce the rate of inappropriate admissions; and the methodological issues which particular definitions of 'inappropriateness' raise. The extent to which a patient perspective is included in these definitions of inappropriateness was also noted, given that patient involvement is such a key policy priority in other areas of health policy. Despite long-standing policy debates, relatively little research has been published on formal rates of 'inappropriate' emergency hospital admissions for older people in the English NHS in recent years. What has been produced indicates varying rates of in/appropriateness, inconsistent ways of defining appropriateness, and a lack of focus on the possible solutions to address the problem. Significantly, patient perspectives are lacking, and we would suggest that this is a key factor in fully understanding how to prevent avoidable admissions. With an ageing population, significant financial challenges and a potentially fragmented health and social care system, the issue of the appropriateness of emergency admission is a pressing one which requires further research, greater focus on the experiences of older people and their families, and more nuanced contextual and evidence-based responses.

Key words: older people; emergency hospital admission; prevention; health and social care

What is known about this topic?
- Inappropriate emergency admissions to hospital are the subject of significant policy and media debate
- While a range of possible explanations are put forward, many of the accounts appear overly simplistic and/or under-evidenced
- Given current demographic and financial pressures, the desire to prevent unnecessary emergency admissions will only increase

What this paper adds:
- There is relatively limited research on this topic, and it is difficult to compare results in a meaningful way (due to local contextual and methodological details)
- Different methods of identifying 'inappropriate' admissions each have their limitations, and potential solutions do not appear well thought through
- Research which includes the perspective of patients, families and front-line staff may provide a more nuanced, helpful approach

Every year, the NHS experiences more than 2 million unplanned admissions for people over 65 (accounting for 68 per cent of hospital emergency bed days and the use of more than 51,000 acute beds at any one time) (Imison et al., 2012; Poteliakhoff, 2011). With an ageing population, a challenging financial context and major structural upheavals throughout the English health service, such pressures show no sign of abating, and the NHS has to find ways of reducing emergency hospital admissions (in situations where care can be provided as effectively elsewhere). However, this is by no means a new issue.
For many years, a common concern for policy makers has been that high levels of emergency hospital admissions run the risk of concentrating too many resources in expensive, acute care, leaving insufficient funding to invest in community-based alternatives and in rehabilitation for people recovering from ill health. Under successive governments, this has led to a series of attempts to make more effective use of hospital beds, recognising that these are scarce resources for which demand outstrips supply. Over time, this has included the creation of a

There is also significant national work underway to better understand and resolve considerable variation in the probability of emergency admission or bed utilisation in over-65s between localities, with a desire to achieve greater efficiency and better outcomes for patients by tackling any unwarranted variation (see Imison et al., 2012).

Moving from national policy to public perceptions, negative headlines continue to appear in the national press around the pressures facing acute hospitals (Boseley, 2012; Prynne, 2014) and perceived shortcomings in community services which are seen as contributing to excessive and unnecessary emergency admissions (particularly of frail older people) (Campbell, 2012). In different accounts, the culprits range from the growing pressures of an

However, behind many of the headlines is an assumption that potentially large numbers of people (often older people) are attending and being admitted to hospital as emergency patients when there is scope to care for them more appropriately in alternative settings. For example, Triggle (2012b) reports that 2.3 million overnight stays could be prevented were there better organisation of urgent care, with GPs and other health care providers working together to prevent patients reaching the stage of crisis requiring hospital care. Wright (2013) reports that half a million older patients could avoid hospital if they were cared for appropriately by community services. A recent study by Cowling et al. (2014) found that just over 26 per cent of people attend the emergency department because they could not access a GP appointment.

Underpinning both policy and media accounts, therefore, is an assumption that scarce resources could be used more effectively if the number of inappropriate admissions to hospital could be reduced, thereby freeing up existing hospital beds for those people who genuinely need them. Despite common policy and media perceptions of a 'problem' of significant inappropriate emergency hospital admissions, these accounts mask a number of underlying questions:

- What is the rate of 'inappropriate' admission for older people?
- How is this defined and who decides?
- What causes such a situation?
- What solutions might help to make more appropriate use of current resources?

In response to these questions, the current paper reviews the literature on the appropriateness of emergency admissions, taking special account of the extent to which the literature includes a patient perspective. Though the four questions above are set out as overall review questions, this paper offers more focus on the methodological insights gained from doing this review, having had limited success in answering these four questions due to the complex and fragmented nature of the evidence, as will become apparent below.
In our opinion, drawing on the lived experience of people using services is crucial to understanding the context within which the older person is using health and social services and to developing an appropriate response, particularly at a time when government is emphasising its commitment to the concept of 'nothing about me without me.' As we have argued elsewhere (Glasby and Littlechild, 2000), and as we argue below, this patient perspective closes the gap in a patient's journey from being healthy to admission to hospital: patients can provide the detail and insight which may help to identify moments at which preventative measures could have been taken. We therefore wanted to see the extent to which the current literature reflected this stance and whether patient input is generally valued in studies on inappropriate emergency admissions.

Box 1 Media coverage of emergency admissions and the pressures facing acute care
'NHS services outside of hospitals are struggling to cope with growing demand brought on by the ageing population, hospital bed shortages and staff cutbacks' (Campbell, 2012).
'Sir Bruce [Keogh - NHS Medical Director] believes a system-wide transformation is needed to cope with the "intense, growing and unsustainable" pressures on urgent and emergency care services. ... Every year millions of patients seek emergency help in hospital when they could have been cared for much closer to home' (Prynne, 2014).
'Elderly care is being jeopardised by the increasing numbers of older people being moved to non-specialist wards to clear beds for new patients' (McArdle, 2013).
'Nearly two-thirds of the patients now being admitted to hospital are over the age of 65 and many are much older. Their needs are increasing - they are frail and many have dementia. Many arrive in hospital because of a sudden crisis in their health: over the last 10 years, there has been a 37% increase in emergency hospital admissions' (Boseley, 2012).

Methods
The literature review we conducted was a narrative analytical review, summarising and interpreting the data presented in the reviewed studies to compare and contrast them in their original form (Mays et al., 2001). The review was undertaken between May and June 2014. This review sought to explore: the rate of in/appropriate emergency admissions of older people in the UK; the way this is defined in the literature; solutions proposed to reduce the rate of inappropriate admissions; and the methodological issues raised by particular definitions of 'inappropriateness'. Importantly for our present study, the extent to which patient perspectives are included in these definitions of inappropriateness was also noted.

The reference lists of articles included in this study were also searched. Each title and abstract was reviewed independently by two members of the research team and selected for relevance to the overall aims and objectives of the study. The inclusion and exclusion criteria are set out below.

Inclusion and Exclusion Criteria
Studies were included if they set out a formal rate (percentage or frequency) of people aged 65 and over, inappropriately admitted to a UK hospital(s) on an emergency basis. Specifically excluded were:
- Material published and/or based on data collected prior to 1993 (the date of the implementation of the NHS and Community Care Act 1990, a key piece of legislation significantly affecting the provision of older people's services).
- Local inspections where findings have been summarised in a national report.
- Articles reporting findings from studies already included in the review.

For each included study, we recorded:
- the rate of inappropriate emergency admissions of older people identified by the study
- the way 'inappropriateness' is defined
- the solutions proposed to reduce the rate of inappropriate admissions
- the extent to which patient perspectives are included in these studies (as we discuss below, we feel this is a key gap in the literature and one where future research is needed)

The quality of individual studies was not appraised as part of our inclusion criteria (all studies that met the criteria above were included), albeit that potential limitations in the methods adopted were noted (see below for further discussion). In conducting this review, we recognise that the terminology used by different commentators and stakeholders is contested. We prefer terms such as 'avoidable' or 'preventable' admissions (which recognise that some admissions might not have taken place if alternative services had existed locally or if a different course of action had been taken at an earlier stage). However, there is a key strand of literature - very much reflected in policy and media debates - which categorises admissions as 'appropriate' or 'inappropriate', and this is the focus of our current review.

Findings

Overview of the literature
Despite significant media and policy debate, the review identified only ten studies that met our criteria. These are summarised in Tables 1 to 3 below, with a subsequent discussion of the relative absence of patient perspectives and the implications of these findings for future research, policy and practice. As can be seen, all of the studies bar one were from England (Beringer and Flanagan's (1999) study was based in Northern Ireland). Rates of inappropriateness varied widely (see Table 1 and below for further discussion), while the methods used to define appropriateness were primarily based around clinical judgement or the use of structured 'clinical review instruments' (structured lists of reasons why patients might appropriately be admitted to hospital - see Table 2). Though patient perspectives were included in two studies, one of these studies was written by two of the current authors, while the other did not go on to use this qualitative data in a meaningful way. Finally, the solutions proposed by different authors were diverse and often based on the opinion of individual researchers rather than on formal evaluation of genuine alternatives to hospital admission (see Table 3).

Rates of in/appropriate admission
The literature does not provide a simple answer to the rate of in/appropriate admissions to hospital (see Table 1): rates of 'inappropriate' admissions vary widely depending on what tools are used to judge the admission, or on whether this is based solely on the decisions of health professionals (see below for further discussion). Rates also depend on geography, with differences between rural and urban hospitals (Coast et al., 1996); on the time of year, with winter seeing an increase in the overall admission rate and in the likelihood of inappropriate admissions (Beringer and Flanagan, 1999); on which services are available in a particular area and whether they can be accessed as true alternatives to hospital; and on who saw the patient, in terms of the knowledge and experience they had in caring for older people (Leah and Adams, 2010).
These findings reflect the difficulties facing acute care in terms of staffing and resource availability, as well as differences arising from the local environment, and show how all of these can affect the appropriateness of emergency admissions. These varying rates make comparisons difficult and suggest a critical need to take local context into account when researching and creating policy around emergency admissions: one blanket response, without appropriate, locally contextualised research evidence, will not necessarily deal with the problem (which manifests itself very differently in different local areas).

Definitions of 'appropriate' and 'inappropriate' admission
The literature shows there is no accepted standard definition of what it means to be an inappropriate admission (see Table 2), with studies tending to adopt one of two approaches: reliance on expert clinical judgement, or the use of structured clinical review instruments such as the Appropriateness Evaluation Protocol (AEP) and the Intensity-Severity-Discharge review system with Adult criteria (ISD-A). However, there are a number of potential criticisms of these tools in the broader literature, including that the AEP does not take into account the fact that there may be no other option in the local area for the patient except hospital (Glasby and Littlechild, 2000). It is for this reason that some commentators have referred more to 'avoidable' than to 'inappropriate' admissions (Mytton et al., 2013; see Glasby and Littlechild, 2000 for more on problems with terminology). The AEP can also be used in 'pure' or amended form, and this can make a difference to what is then deemed appropriate or otherwise (Houghton et al., 1996). Appropriateness also depends on when the AEP or ISD-A is applied to each patient's case: only when there is more knowledge of the person, and of what actually went on to happen to them, can they be properly judged an inappropriate admission (see Coast et al., 1995; Tsang and Severs, 1995). In other words, these tools are helpful up to a point, but are applied retrospectively and take no account of local circumstances or the availability of alternative services. All this reveals the complexity which surrounds decisions on who is appropriate to admit to hospital. While some studies draw heavily on professional (often medical) discretion but lack consistency and transparency, others use more structured protocols but lack the insights which local professional judgement can bring to understanding the issues at stake.

Patient Perspectives
As outlined above, one of the key stakeholders in understanding how a patient got from being healthy to being admitted to hospital is arguably the patient themselves. They may have real understanding of how their health changed over time and, significantly for reducing inappropriate admissions, of what preventative measures could have been taken to avoid hospital admission. Yet our search found that the inclusion of patient perspectives was rare and that their knowledge and potential contribution is therefore missing from research into inappropriate admissions. Only two of the studies in our review (Houghton et al., 1996; Littlechild and Glasby, 2001) included a patient perspective; one of these was written by two of the current authors, while the other research team did not go on to write up any of the findings from this qualitative element of their study. In our view, this dramatically undervalues the contribution which patients could make to current debates and represents a key gap in the literature (see below for further discussion).
The approach taken by Coast et al. (1996) illustrates this gap in practice. Coast et al. (1996) use the ISD-A to judge the appropriateness or otherwise of the admissions in their study, writing: 'The appropriateness of admission was assessed using explicit standardized criteria in the form of the intensity-severity-discharge review system with adult criteria (ISD-A). Up to 19 explanatory variables were available for the analyses. These variables were modelled for each centre separately, using logistic regression to produce final sets of factors independently related to the appropriateness of admission.' The tool was applied during and after the first 24 hours of admission, using hospital notes and patient records, with the patient's health status available in only one of the two study sites. The ISD-A uses a generic set of criteria for all patients and then more specific questions related to certain conditions or hospital units. The researchers did not meet with the patients themselves, but relied on these assessment criteria and the intensity of the service the patient was receiving. Logistic regression was carried out on the variables which arose from the application of these criteria, and inappropriateness was judged on the criteria and the intensity of service the patient was receiving within a 24-hour period. Though this provides a clear route by which to judge an admission as in/appropriate, it leaves the patient themselves out of the discussion, focusing instead on clinical notes and statistical regression models. The inappropriateness of the admission is judged without recognition of the situation the patient may have been in prior to admission; in one case study site this included not taking the person's health status prior to admission into account. The authors are well aware of the potential limitations of their method, including that the tool has only been noted as 'fair to moderate' in validity, but feel its strengths lie in the consistent application of an objective tool.

Possible solutions
As Table 3 suggests, different authors suggest a very broad range of potential solutions (or developments that might help reduce the scale of the problem). While some studies focus on particular alternative service models (Leah and Adams, 2010; Mayo and Allen, 2010), these authors were part of the organisations setting up and evaluating such services - and more independent verification may be needed to develop a more robust evidence base. However, many of the rest of the recommendations have more of a 'scattergun' feel and are certainly a lot less focused or definitive. Indeed, the impression in the majority of the literature is of authors who have identified a problem and are then speculating on potential ways forward - rather than a series of studies which are able to point unambiguously to specific solutions. There is, however, general agreement that high quality decision-making is needed when deciding whether or not to admit an older patient to hospital care, and that health care professionals in different parts of the system should be supported and trained to do this more effectively than at present. These findings, though complex, have important implications for health research, policy and practice, which we now go on to examine.

Implications of this Review
From this review of the literature it is clear that inappropriate emergency hospital admissions are highly complicated and, potentially, not currently very well understood.
Given that this is such a high-profile policy and media issue, it is particularly surprising that there are so few UK studies setting out a formal rate of inappropriate admissions, and there is an urgent need for more research. Our review also suggests that different studies use different approaches to defining the rate of inappropriate admissions, finding different levels of inappropriateness in different local contexts. This is a highly important point: if the tools and methods used to categorise an emergency admission define 'inappropriateness' differently, this will feed through into how many inappropriate admissions are understood to exist, and into the subsequent analysis and understanding of the situation within those specific hospitals and beyond. If our ability even to recognise an admission as inappropriate is limited, then our ability to respond positively to the issues at stake is significantly curtailed. At present, some studies include a key role for the clinical judgement of local professionals (but often provide little detail on how decisions are made), while others use more structured tools (but pay insufficient attention to clinical expertise and the context of local services). Without much greater attention to the strengths and limitations of each approach, current debates are likely to be over-simplistic and limited in terms of their effectiveness. To be successful, future policy must surely pay much greater attention to the importance of local context, and this review suggests that there is unlikely to be a 'one size fits all' solution to an issue this complex. Many of the 'solutions' currently put forward also appear to lack rigour, based more on the informal assumptions of individual authors than on a detailed analysis of the pros and cons of alternative service models. Above all, older people seem to be rarely involved in research into inappropriate admissions, and this seems a major gap. If policy and practice are to better understand how best to reduce the number of potentially inappropriate or avoidable admissions, it is difficult to imagine a way forward which does not involve some degree of engagement with older people themselves. Researchers, clinical experts and structured tools might all have a role to play in exploring the nature and scale of the issues at stake - but it is our contention that research, policy and practice must also engage directly with older people if local services are to stand a chance of understanding how people come to be admitted as emergencies, what alternatives might have been appropriate, and what might work better in the future. Overall, therefore, this review concludes that more research is needed to contribute additional evidence to highly topical policy and media debates, that local context is crucial in understanding the issues at stake, and that future research must engage meaningfully with the lived experience of people using services. The current research team is involved in a wider study which seeks to fill precisely this gap (Glasby et al., forthcoming) - but until these limitations in the existing evidence are overcome, the search for potential solutions is likely to prove elusive.

Conclusion
This survey of the relevant literature has shown that emergency admissions are a complex topic, for which there are few, if any, straightforward answers.
Varying rates of inappropriateness across contexts allow for few comparisons, but instead highlight the critical need to take context into account when researching emergency admissions and suggesting possible practice and policy solutions. These varying rates rest in part upon the initial definition of in/appropriateness given in the literature, which is defined in two ways: using expert clinical perspectives or using more structured clinical review instruments. Neither approach is perfect: the former rests on potentially opaque decision-making processes, inevitably subjective and partial, while the latter, though guided by more objective criteria, is arguably overly simplistic, enjoys the benefit of hindsight, and ignores the realities of what resources/alternatives were actually available to local practitioners. Future research needs to take these methodological concerns into account.

Table 2 How the included studies defined in/appropriateness:
- Coast et al. (1995, 1996): Intensity-Severity-Discharge Review System with Adult Criteria (ISD-A); in the 1995 paper, GPs then commented on those cases perceived to be inappropriate according to the ISD-A.
- Houghton et al. (1996): Appropriateness Evaluation Protocol (AEP).
- Leah and Adams (2010): the researchers themselves, who judged in their professional capacity as surgeons.
- Mytton et al. (2012): the opinions of two consultant geriatricians and one GP.
- Tsang and Severs (1995): the AEP, and also the opinion of one of six participating consultants.

Table 3 Solutions proposed by the included studies:
- More support for GPs in providing appropriate medical care for older people; enhanced investment in community services; and reinvestment in acute hospital care for older people.
- Coast et al. (1995, 1996): more funding for alternatives to hospital (for example, GP beds and urgent outpatient assessment).
- Houghton et al. (1996): better liaison between health and social services and more timely provision of community care services; more non-acute bed provision (or an acceptance that acute beds are actually a mixture of acute and non-acute).
- Leah and Adams (2010): further evaluation of teams like the Assessment Team for Older People described, and further investment in their creation in hospitals around the country.
- Littlechild and Glasby (2001): a broad range of potential solutions, including more preventative work with older people to prevent falls, improve the detection of established illnesses and help people manage and treat identified illnesses more effectively; closer working between health and social care services; preventative social work strategies for those needing only small amounts of support, at an earlier stage than they might otherwise have been referred; more integrated service delivery to users; and more communication and information about where people can go for help.
- Mayo and Allen (2010): more investment in Rapid Response teams such as the one described.
- McDonagh et al. (2000): greater methodological clarity and transparency when studies are written up, so that results can be better compared and understood; avoiding the use of subjective opinion to judge the appropriateness of admission and length of stay; for older people specifically, more intense outpatient services or sub-acute beds; and continued research to produce definitive conclusions.
The existence of High Conservation Value Forest (HCVF) in Perum Perhutani KPH Kendal to support implementation of FSC certification

High Conservation Value Forest (HCVF) identification establishes the High Conservation Values that are important and need to be protected. Under the FSC certification mechanism, HCVF is one of the Principles and Criteria that must be met to attain certification. In this study, we identify the existence of HCVF in Perum Perhutani KPH Kendal to support the implementation of FSC certification. A qualitative method was applied, using observation and secondary data from Perum Perhutani KPH Kendal. Data analysis showed that, through ecolabel certification, Perum Perhutani KPH Kendal has identified an HCVF area covering 2,715.5 hectares, consisting of HCV 1 to HCV 6. The Secondary Natural Forests (HAS) of Subah and Kaliwungu, which serve as buffer zones for the Ulolanang and Pagerwunung Nature Reserves, are classified as HCV 1.1; the conservation area for the leopard (Panthera pardus melas) and pangolin (Manis javanica) as HCV 1.2; the conservation area for the lutung (Trachypithecus auratus), an endemic species listed in CITES Appendix I and classified as Critically Endangered, as HCV 1.3; Goa Kiskendo, a habitat for bat species, as HCV 1.4; regions of interest for deer (Cervus timorensis) and kepodang (Oriolus chinensis) as HCV 2.3; the Germplasm Protection Region (KPPN), an area with high biodiversity, as HCV 3; and river border areas and water springs as HCV 4. The utilization of firewood and of grass for cattle fodder is classified as HCV 5, and 14 cultural sites as HCV 6. Monitoring and evaluation of the HCVF data showed that the level of diversity of flora and fauna increased over 2011-2015.

Background
Indonesia's forests are the third largest tropical forests in the world after those of Brazil and the Congo, covering a forest area of 1,860,359.67 km2, and Indonesia ranks second for biodiversity after Brazil. Sustainable forest management in tropical countries using mandatory command-and-control approaches is seen by green consumers as having been unsuccessful, as the condition of tropical forests, including Indonesia's, continues to decline. Forest degradation in Indonesia has occurred over a very large area. According to the analysis of Forest Watch Indonesia in 2011, Indonesia's forest area was depleted at a rate of 1.8 million ha/year in the period 1985-1997, about 2.84 million ha/year in 1997-2000, and approximately 1.51 million ha/year during 2000-2009 [1]. Perum Perhutani, a state-owned company that manages forest areas in Java and Madura, has taken responsibility for creating sustainable forests. Ecolabel certification is a forest management instrument that aims to maintain the sustainability of forest resources and their functions. The implementation of ecolabel certification in Perum Perhutani KPH Kendal uses the Forest Stewardship Council (FSC) scheme. This study aims to determine the existence of High Conservation Value Forest (HCVF) in Perum Perhutani KPH Kendal and the forms of management undertaken to maintain environmental protection. HCVF is an area that has one or more High Conservation Values (HCVs). An HCV is something that has high conservation value at the local, regional or global level, encompassing ecological, environmental, social and cultural values [2].

HCVF (High Conservation Value Forest) identification results
HCVF was introduced by the FSC in 1999, based on FSC Principle 9.
The concept of HCVF is intended to identify the high conservation values existing in an area (forest) and to establish a management and monitoring plan to maintain and/or enhance those values. HCVF is an area that has one or more High Conservation Values (HCVs). An HCV is something that has high conservation value at the local, regional or global level, encompassing ecological, environmental, social and cultural values. In the HCVF concept, high conservation values are classified into six HCV categories. Forest areas can be classified as having high conservation value if they possess one or more of the following characteristics [2]:

HCV1: Forest areas containing globally, regionally or locally significant concentrations of biodiversity values (e.g. endemic species, endangered species, refugia).
HCV2: Forest areas forming globally, regionally or locally significant large landscape-level forest, contained within or containing the management unit, where most or all naturally occurring species exist in natural patterns of distribution and abundance.
HCV3: Forest areas that are in or contain rare, threatened or endangered ecosystems.
HCV4: Forest areas that act as natural regulators in critical situations (e.g. watershed protection, erosion control).
HCV5: Forest areas fundamental to meeting the basic needs of local communities (e.g. subsistence, health).
HCV6: Forest areas critical to the traditional cultural identity of local communities (areas of cultural, ecological, economic or religious significance identified with local communities).

According to the HCV identification, the Kendal forest area of 2,715.5 ha consists of HCV 1, HCV 2, HCV 3, HCV 4, HCV 5 and HCV 6.

HCV 5
HCV 5 relates to the fulfillment of basic human needs. Perum Perhutani KPH Kendal contributes to meeting these basic needs through activities such as the intercropping (tumpangsari) system, community utilization of firewood from the forest, the fulfillment of forage needs, and the provision of clean water from springs.

HCV 6
Perum Perhutani KPH Kendal identified 14 cultural sites used by the community for religious and cultural purposes. The cultural sites located in KPH Kendal include cultural tombs. These sites are used by local communities and by people from outside the area to perform rituals such as earth alms ceremonies, religious tourism, and certain ascetic rituals. Detailed identification of the HCVF areas can be seen in Table 1.

Management of HCVF areas
In order to maintain and enhance the existence and function of the HCV areas, Perum Perhutani KPH Kendal undertakes management activities such as:
- Maintenance of HCVF boundaries.
- Counseling/socialization regarding illegal activities in HCVF areas, covering the prohibition of illegal cutting, forest fires, wildlife hunting and illegal cultivation, as well as HCVF area management, directed at stakeholders and communities living near the forest.

In the areas identified as HCV 1 to 4 above, good HCV condition is indicated by increasing levels of diversity of flora and fauna in several of these areas.

Tumpangsari (agroforestry), forage animal feed (HMT) and rencek/branches (HCV 5)
The monitoring and evaluation results for HCVF KPH Kendal in 2015 showed that communities near the forest can fulfill their daily needs from intercropping, forage animal feed (HMT) and rencek.
Annual data show an increase in income, as can be seen in the following table:
Factors Influencing the Creativity of Chinese Upper-Secondary-School Students Participating in Programming Education

Purpose: This study explored whether instructional characteristics, learner characteristics, family socioeconomic status, and gender influence creativity in the context of programming education in China. Methods: A total of 851 upper-secondary-school students in Beijing, China, were surveyed using the Creativity Scale, the Programming Learning Scale, the Programming Teaching Scale and the Family Socioeconomic Status Questionnaire. SPSS (version 22) was used for correlation analysis, t-tests and regression analysis. Results: (1) Teachers' programming teaching method and management; students' programming learning approach, attitude, and engagement; gender; and family economic capital were all significantly associated with creativity. (2) There were significant differences between males and females in terms of creativity, programming learning approach and programming learning attitude. (3) Learners' attitudes, engagement, and approach, together with family economic capital, were strong predictors of creativity, with programming learning attitude exerting the strongest influence and family economic capital a weaker one. Conclusion: The main factors that influence creativity in the context of programming education are programming teaching method, programming teaching management, programming learning approach, programming learning attitude, programming learning engagement and family economic capital. Among these, learner factors (attitude, engagement, and approach) and family economic capital are the key factors influencing creativity. These findings provide a basis for improving the creativity of Chinese programming learners and encourage teachers to consider learner factors and gender differences as they design and manage their instruction. Furthermore, the influence of family economic capital on learners' creativity cannot be ignored.

INTRODUCTION
As an important aspect of learning, creativity is an indicator of student development. McWilliam (2009) identified creativity as a key learning outcome of our time and the core business of education. In its 2018 publication "Learning Framework for 2030", the OECD (2018) likewise described creativity as a necessary skill for learners and recommended it as an educational focus. Similarly, Zhang et al. (2021) argued that creativity is an important component of any educational programming. As Shu et al. (2020) stated, creativity is essential for every culture and society, and students need it to solve new problems in the twenty-first century (de Vries, 2021; Kozhevnikov et al., 2021). Increasing evidence confirms the importance of programming education in creativity development. The American New Media Alliance pointed out that programming will gradually become a key element promoting basic education (Sun and Li, 2019), as it enables learners to create new strategies to solve problems and to test innovative solutions in all disciplines (Saritepeci, 2019). In other words, programming education takes place in a technology-enhanced environment (Hung and Sitthiworachart, 2019), which provides the most direct pathway for the development of thinking skills (Fu et al., 2021). Furthermore, experts agree that fun programming methods can develop creativity (Tengler et al., 2020).
Similarly, Noh and Lee (2020), through an 11-week programming course experiment, demonstrated that programming itself is a creative activity that can make students more creative. Although few scholars have explored which factors are associated with the development of creativity in the context of programming education, studies have shown that it is related to certain characteristics of learners. For example, Campos Cancino and Moreno Minguez (2020) argued that family factors are the basis for learners' cognitive and emotional development, and so influence the development of learners' creativity. Similarly, Yang et al. (2020) used data analysis to find that the socioeconomic status of college students' families predicted creativity. Liang et al. (2021) also found that respondents' self-rated creativity was positively related to family socioeconomic status in an investigation among Chinese adolescents aged 9-14. In addition, Zhang et al. (2020) argued that learners' gender also affects creativity; specifically, boys outperform girls in terms of the originality of creativity, but girls outperform boys in terms of abstraction. Noh and Lee's (2020) study also demonstrated gender differences in learner creativity. Moreover, the degree to which programming education promotes learners' creativity is related to the effectiveness of programming instruction. Research has shown that there are at least two major factors affecting the effectiveness of programming instruction: teachers' teaching and students' learning. In the case of programming instruction, teachers' teaching methods and management can affect the effectiveness of instruction (Kiss and Arki, 2016; Bi and Shi, 2019; Wu and Hao, 2019). For example, Tang et al. (2017) demonstrated through experiments that using a flipped-class approach in programming courses can improve students' cognitive and competence levels. Again, Du and Liang (2011) demonstrated through confirmatory factor analysis that the teacher's role in managing the classroom was a factor in the effectiveness of teaching and learning. In terms of students' programming learning, existing research has found that learners' attitudes toward learning programming (Durak, 2018b), learning approach (Tan and Lee, 2017), and learning engagement (Tian and Wu, 2018) also influence learning outcomes. Therefore, a more comprehensive perspective is needed to examine the factors influencing students' creativity development, covering three aspects: programming learners' family factors, teachers' programming teaching factors, and programming learners' individual factors. On this basis, we propose the research hypotheses examined below.

LITERATURE REVIEW

Creativity
Anything new arises from creativity (Noh and Lee, 2020). Creativity is the ability to create new products or ideas that are valuable and useful (Woodman et al., 1993; Shalley et al., 2016; Qiang et al., 2020; Hou et al., 2021; Zhang et al., 2021). Generally, this ability is reflected in behaviors such as inventing, designing, creating, and planning (Guilford, 1950); it is characterized by fluency, flexibility, novelty, and refinement (Kupers et al., 2018; Kim et al., 2019; Rao et al., 2021) and is the result of the interaction between capacity, process and environment (Gu et al., 2021).

Family Socioeconomic Status
Family socioeconomic status comprises an individual's or family's property income, the educational level of the parents, and parental occupation (Wiederkehr et al., 2015; Yang et al., 2020).
Family property income is defined as the capital used to purchase material goods such as household goods, books, mobile phones, and computers. Parental occupation refers to the resources acquired through parents' social interactions and relationships, including their work or occupation (Entwisle and Astone, 1994). Some studies have found that family socioeconomic status affects learners' creativity. For instance, Zhang et al. (2018) studied a sample of 955 students and found that family socioeconomic status is significantly related to creativity, with students of low family socioeconomic status facing limitations in creativity when solving problems. Similarly, Lebuda and Csikszentmihalyi (2020), using a grounded-theory methodology, found that family factors affect the development of creativity. Likewise, Yang et al. (2020) showed a significant positive association between family socioeconomic status and creativity in a survey of students at ten universities. Jankowska and Karwowski (2019) also related children's level of creativity to family socioeconomic status.

Programming Education-Instruction
In programming education, instructional approach refers to the purposeful and organized way in which teachers guide students in acquiring knowledge and skills in computer programming, including aspects such as curriculum content, learning environment, teaching strategies, curriculum design, and the evaluation of student achievement. Schooling is seen as a major site for creativity development (Kong et al., 2013; Gu et al., 2021), and some experts have found a relationship between teaching and creativity. For example, Lin et al.'s (2017) study with 104 students showed that exploratory education can have a significant impact on creativity development. Likewise, Huang et al. (2021) argued that adaptive teaching models were effective in enhancing students' creativity. In addition, a survey of 872 teachers and 944 students by Burayeva et al. (2020) showed that effective management strategies were beneficial in stimulating students' creative potential. Similarly, problem-based teaching (Sidek et al., 2020), project-based teaching (Wu and Wu, 2020), and active teaching methods (Vokic and Aleksic, 2020) all help to stimulate students' creative processes and foster their creativity.

Programming Education-Learning
Learners acquire programming knowledge and skills through a process that involves their learning approach, learning attitude and learning engagement. Researchers have found that learner engagement, as the initial condition for learning, plays an important role in learning computer programming (Durak, 2018a), and that negative attitudes toward learning how to program can affect engagement in programming courses (Durak, 2020). For instance, Zuo et al. (2019) found that learners' perceptions can influence their creativity. Flipped learning strategies and self-directed learning styles can enhance learners' creative performance (Hsia et al., 2021; Shafait et al., 2021). Again, studies have found a strong correlation between individual affective attitudes and creativity (Silva and Coelho, 2019; Hernandez-Jorge et al., 2020). Furthermore, an empirical study conducted by Yang (2021) on 466 practitioners showed a positive correlation between personal work engagement and creativity. Similarly, Sun's (2020) survey of 652 university students showed that student engagement moderated the relationship between social media use and student creativity.
Gender
Although there are many studies on gender differences in creativity, scholars have not reached agreement regarding the influence of gender on creativity. Taylor and Barbot (2021) found no significant difference between the scores of males and females on creative drawing tasks. Similarly, Koronis et al.'s (2019) findings revealed no correlation between gender and creativity. In contrast, Zhang et al. (2020) suggested that males performed better in some aspects of creativity.

Participants
Participants were drawn from four public schools in Beijing, China, the region that first initiated programming education. A combination of convenience and random sampling was used for this study. First, the researchers chose one school each in the Xicheng, Haidian, Chaoyang, and Shunyi Districts of Beijing, China. The four schools had been teaching computer programming for a long time and served the same grade levels. Second, a random sample of upper-secondary-school students from each school was selected to complete the questionnaire, which took about 15 minutes. A total of 851 students (aged 16-18) took part in the study and completed the questionnaire. Of the respondents, 405 (47.59%) were male and 446 (52.41%) were female. In order to ensure authenticity and reliability, this study used paper questionnaires in the classroom environment with the permission of the programming teachers and participants. Before the survey, all participants knew the purpose of the study and participated voluntarily. Moreover, the researchers explained that participants could leave the classroom at any time without giving a reason, and that all information would be kept confidential. All participants agreed to the use of their anonymized data.

Measures
The instrument used in this study contains five sections: demographic information, the Creativity Scale, the Programming Learning Scale, the Programming Teaching Scale, and the Family Socioeconomic Status Questionnaire. The demographic information section collected data on gender and age. All three scales and the questionnaire were originally developed in English and were translated into Chinese for use in this study. Given that Brislin (1970) demonstrated that checking the quality of back-translations is valid, we adopted his method and back-translated the instrument: one researcher translated the measures from English to Chinese, a second researcher translated the Chinese version back to English, and a third researcher compared the three versions (original, translation, and back-translation) to determine the equivalence among them.

Creativity Scale
The Creativity Scale is based on the Adaptor and Innovator Scale (Kirton, 1976), currently the most widely used measure of creativity. The scale contains 32 items across three dimensions: originality, organization, and appropriate respect for authority and rules. Sample items were "I usually have original ideas about problems" and "I often break the rules when doing things." Each item is scored on a 5-point Likert scale: 1 means completely disagree and 5 means strongly agree. Scores range between 32 and 160, and the higher the score, the more the individual's creativity tends toward innovation. The Cronbach's alpha from the original study was reported as 0.88. Before collecting the formal data, we tested 160 students, obtaining a Cronbach's alpha of 0.969.
In addition, exploratory and confirmatory factor analysis revealed factor loadings between 0.5 and 0.87, indicating good reliability and validity. In this study, the alpha was 0.979.

Programming Learning Scale
Following the OECD (2009), we compiled the Programming Learning Scale based on the characteristics of programming education. The scale consists of 22 items divided into three dimensions: programming learning approach, programming learning attitude and programming learning engagement. Scores range from 22 to 110: the higher the score, the more effectively the learner learns computer programming. Before beginning formal data collection for this study, we tested the scale with 160 students; the Cronbach's alpha was 0.917, and the factor loadings were between 0.55 and 0.88. In this study, the alpha was 0.879, indicating good reliability.

Programming Teaching Scale
Similarly, the Programming Teaching Scale was adapted from the OECD (2009). On the basis of the original questionnaire, we added features specific to programming education. The scale consists of 22 items, covering programming teaching methods and programming teaching management. Participants evaluated the statements using a 5-point Likert scale, where 1 means completely disagree and 5 means strongly agree. The score ranges between 21 and 105: the higher the score, the better the programming teaching level. In the initial test with 160 students, the reliability of the scale was 0.937, and the factor loadings were between 0.57 and 0.85. After formal testing, the Cronbach's alpha was 0.939.

Family Socioeconomic Status Questionnaire
Based on the OECD (2009) survey of students' family backgrounds and information technology conditions, we adapted the questionnaire to form the Family Socioeconomic Status Questionnaire. In particular, family cultural capital refers to the educational level of the parents, which we scored from elementary school to graduate school. Family social capital refers to the occupation of the parents. Parents' occupations were scored according to the occupational calculation method for family socioeconomic status, that is, ranked from top to bottom: government or organizational cadres or civil servants, business managers, ordinary employees, and so on. Family economic capital was determined by the number of properties owned by the family: one point was awarded for owning an item and no points for not owning it. The scores were then normalized for family social capital, family cultural capital and family economic capital.

Data Analysis
In this study, all statistical analysis was performed using IBM SPSS Statistics Version 22. First, all data were entered electronically using Office 2019. After that, correlation analysis, independent-samples t-tests, and linear regression analysis were carried out. The statistical significance level for all tests was set at p < 0.05. Lastly, the results were organized into tables.
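To make the scale-scoring steps concrete, the sketch below reproduces two of them in Python on simulated data: Cronbach's alpha for a 32-item Likert scale, and a normalization of the three family-capital scores. This is only an illustration of the procedures described above, not the authors' SPSS workflow, and the min-max normalization is an assumption, since the paper does not state which normalization formula was used.

```python
# Illustrative sketch only: the authors used SPSS; this reproduces the same
# reliability and normalization steps in Python with simulated Likert data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated pilot sample: 160 students x 32 creativity items (5-point Likert).
items = pd.DataFrame(rng.integers(1, 6, size=(160, 32)),
                     columns=[f"item{i + 1}" for i in range(32)])

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"alpha = {cronbach_alpha(items):.3f}")

# Min-max normalization of the three family-capital scores (an assumption:
# the paper only says the scores "were normalized").
ses = pd.DataFrame({"cultural": rng.integers(1, 6, 160),
                    "social": rng.integers(1, 6, 160),
                    "economic": rng.integers(0, 11, 160)})
ses_norm = (ses - ses.min()) / (ses.max() - ses.min())
print(ses_norm.describe().loc[["min", "max"]])  # all columns now span 0..1
```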
Research Process
To confirm or refute the hypotheses, data analysis was conducted in five main steps. First, we verified the correlation between creativity and the hypothetical factors to determine whether there was a relationship between the variables. Next, we used independent-samples t-tests to determine whether there were gender differences in creativity and the influencing factors. Then, we divided the scores for creativity into high- and low-scoring groups and used independent-samples t-tests to determine the difference in factor scores between these two groups, thus testing whether the mean differences across creativity levels could be generalized to the population. After that, we similarly divided the scores for the six hypothetical factors into high- and low-scoring groups to test whether there was a difference in creativity between the two groups for each influencing factor. Lastly, single-factor regression analysis and stepwise regression analysis were carried out, and the effective predictors and regression equation for creativity were obtained.

Descriptive Analysis
The statistical information of the participants is shown in Table 1. In terms of gender, 405 participants were male and 446 were female. In terms of age, participants were concentrated in the 16-18 range: 151 were 18 years old, 240 were 17 years old, and 480 were 16 years old. Table 2 shows the mean scores and standard deviations of participants' creativity scores and the hypothetical influencing factors. The mean total score for creativity was 3.423 (SD = 0.938), which indicates that the creativity of the participants inclined toward innovation.

Table 3 presents the relationships between creativity and its influencing factors in this study. The results of Pearson correlation analysis show that there is a significant relationship between learners' creativity and the following factors: learners' gender (r = -0.091, p = 0.008), family economic capital (r = 0.144, p < 0.001), programming learning approach (r = 0.330, p < 0.001), programming learning attitude (r = 0.687, p < 0.001), programming learning engagement (r = 0.447, p < 0.001), programming teaching management (r = 0.172, p < 0.001) and programming teaching method (r = -0.084, p = 0.014). Among them, family economic capital, programming learning approach, programming learning attitude, programming learning engagement and programming teaching management are positively correlated with creativity, while programming teaching method is negatively correlated with creativity. Family cultural capital (r = -0.028, p = 0.419) and family social capital (r = 0.057, p = 0.096) were not found to be correlated with creativity.

Gender-Related Differences in Scores for Creativity and Influencing Factors
Through the correlation analysis of creativity and the influencing factors, we found that gender was significantly correlated with creativity among Chinese teenagers learning computer programming. We divided the scores into two groups according to gender: 405 males and 446 females. We then used independent-samples t-tests to test the differences in creativity and the influencing factors between the two groups. The critical p-value for all tests in this study was 0.05. Table 4 shows a significant difference in scores for creativity between male and female learners (t = 2.616, p = 0.009). We also used independent-samples t-tests to determine whether there was a significant difference between females and males regarding the factors influencing creativity. The results show an extremely significant difference in programming learning approach (t = 4.268, p < 0.001) and programming learning attitude (t = 5.618, p < 0.001). No gender difference was found for the other factors.
In terms of family cultural capital (t = 0.799, p = 0.425), family social capital (t = 0.249, p = 0.804), family economic capital (t = 0.500, p = 0.617), programming learning engagement (t = -1.597, p = 0.111), programming teaching method (t = -1.627, p = 0.104) and programming teaching management (t = -1.580, p = 0.114), no gender difference was found.

Table 5 shows the differences in factor scores between the groups with high and low scores for creativity. First, we divided the participants into high- and low-scoring groups for creativity; that is, the 27% with the highest creativity scores formed the high-scoring group and the 27% with the lowest scores formed the low-scoring group. Each group comprised 230 data sets. Second, we used independent-samples t-tests to determine whether there were differences in the influencing factors. The results showed that the p-values of family economic capital (p < 0.001), programming learning approach (p < 0.001), programming learning attitude (p < 0.001), programming learning engagement (p < 0.001), programming teaching method (p = 0.001) and programming teaching management (p < 0.001) were all less than or equal to 0.01, indicating significant differences between the two groups. Accordingly, these factors are closely associated with creativity in this context. Furthermore, the difference between the high- and low-scoring groups was extremely significant for programming learning approach, programming learning attitude, and programming learning engagement: the mean gaps were 1.043, 1.741, and 0.900, respectively. Since there were no significant differences in family cultural capital (p = 0.657) and family social capital (p = 0.076) between the high- and low-scoring groups (α = 0.05), they were not considered as influencing factors for creativity.

Differences in Creativity Between Groups With Low and High Scores for Hypothetical Factors
We also verified whether differences in the scores for the hypothetical factors led to differences in the creativity of Chinese teenagers learning computer programming. To achieve this, the scores for each hypothetical influencing factor were divided into groups: the highest-scoring 27% of the total sample formed the high-scoring group and the lowest-scoring 27% formed the low-scoring group, with 230 data sets each. The results, presented in Table 6, indicate differences in the creativity of the high- and low-scoring groups for family economic capital, programming learning approach, programming learning attitude, programming learning engagement, programming teaching management and family cultural capital. Among them, the difference in creativity associated with family cultural capital was relatively small (p = 0.042), and no difference was found for programming teaching method (p = 0.075). Thus, family economic capital, programming learning approach, programming learning attitude, programming learning engagement, programming teaching management and family cultural capital can be regarded as important conditions for developing the creativity of Chinese teenagers involved in programming education.
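The grouping and testing logic behind Tables 4 to 6 can be illustrated as follows. This is a minimal sketch on simulated scores, not the study data; the variable names, group sizes and distributions are assumptions made for the example.

```python
# Illustrative sketch only (not the authors' SPSS output): an independent-samples
# t-test by gender, and the 27% extreme-groups split used in Tables 5 and 6.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)

# Simulated data standing in for the survey: creativity item-mean scores (1-5).
df = pd.DataFrame({
    "gender": np.repeat(["male", "female"], [405, 446]),
    "creativity": rng.normal(3.42, 0.94, 851).clip(1, 5),
})

# Independent-samples t-test for gender differences in creativity.
male = df.loc[df.gender == "male", "creativity"]
female = df.loc[df.gender == "female", "creativity"]
t, p = stats.ttest_ind(male, female)
print(f"t = {t:.3f}, p = {p:.3f}")

# Extreme-groups split: top and bottom 27% of creativity scores.
n = int(round(0.27 * len(df)))          # about 230 cases per group
ranked = df.sort_values("creativity")
low, high = ranked.head(n), ranked.tail(n)
print(len(low), len(high))              # the two groups compared by t-test
```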
Analysis of the Influence of the Influencing Factors on the Creativity of Chinese Teenagers Participating in Programming Education
The results presented above suggest that student gender, family economic capital, family cultural capital, programming learning approach, programming learning attitude, programming learning engagement, programming teaching method and programming teaching management are the important factors influencing the creativity of Chinese teenagers participating in programming education. Using these eight factors as independent variables and creativity as the dependent variable, we conducted single-factor regression analyses, the results of which are presented in Table 7. Similar to Liu et al.'s (2021) research methodology, the analysis produced the following results. First, the regression equations relating family economic capital, programming learning approach, programming learning attitude, programming learning engagement, programming teaching method and programming teaching management to creativity are significant. Second, although family cultural capital is an influencing factor for creativity, the regression fit is poor (R2 = 0.001) and the equation is not significant (p = 0.419 > 0.05); consequently, the effect size of family cultural capital is low. Finally, the significance of gender falls somewhere between these two situations (R2 = 0.008, p = 0.008).

Multiple stepwise linear regression analysis was then performed using the same eight factors as independent variables and creativity as the dependent variable; the results are shown in Table 8. Only four independent variables were retained: programming learning attitude, programming learning engagement, programming learning approach, and family economic capital. Except for family economic capital (p = 0.003), the p-values of the other three independent variables were all less than 0.001. The variance inflation factors for these four variables were 1.868, 1.352, 1.551, and 1.031, respectively, suggesting that multicollinearity was not present. It can be concluded that, in the context of programming education for Chinese teenagers, programming learning attitude, programming learning engagement, programming learning approach and family economic capital are important influencing factors and effective predictors of learners' creativity. The regression equation takes the form Y = b0 + b1X1 + b2X2 + b3X3 + b4X4, where Y represents the creativity of a Chinese teenager learning computer programming, X1 represents their attitude toward learning how to program, X2 their learning engagement, X3 their learning approach, and X4 their family's economic capital; the fitted coefficients are those reported in Table 8. The R2 of the regression model is 0.488; that is, the four independent variables jointly explain 48.8% of the variance in the dependent variable. The equation is significant.
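For readers who want to reproduce this kind of model outside SPSS, the following is a minimal sketch of a four-predictor linear regression with variance-inflation-factor checks, using statsmodels on simulated data. The coefficients produced here are illustrative only and are not the values reported in Table 8.

```python
# Illustrative sketch only: fitting a four-predictor model and checking VIFs
# with statsmodels on simulated data; the true coefficients are in Table 8.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
n = 851

# Simulated predictors: attitude (X1), engagement (X2), approach (X3),
# normalized family economic capital (X4).
X = pd.DataFrame({
    "attitude": rng.normal(3.5, 0.8, n),
    "engagement": rng.normal(3.4, 0.8, n),
    "approach": rng.normal(3.3, 0.8, n),
    "econ_capital": rng.uniform(0, 1, n),
})
# Simulated outcome with arbitrary (assumed) effect sizes plus noise.
y = 0.5 + 0.6 * X["attitude"] + 0.2 * X["engagement"] \
    + 0.1 * X["approach"] + 0.3 * X["econ_capital"] + rng.normal(0, 0.6, n)

Xc = sm.add_constant(X)
model = sm.OLS(y, Xc).fit()
print(model.summary())          # coefficients, p-values, R-squared

# Variance inflation factors for the four predictors (constant excluded);
# values near 1 indicate little multicollinearity.
vifs = {col: variance_inflation_factor(Xc.values, i)
        for i, col in enumerate(Xc.columns) if col != "const"}
print(vifs)
```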
DISCUSSION
Cultivating innovative talent is an important goal of education development, and improving students' creativity has become a focus of efforts to improve education quality. As a complex and innovative curriculum, programming education is expected to support students' creative development, and more and more studies have confirmed that programming education is related to students' creativity. However, whether programming education can promote learners' creativity may be influenced by other factors. It is therefore both urgent and meaningful to explore which factors affect students' creativity development in the context of programming education in China. Based on the existing literature, this study explores how learners' programming learning characteristics, teachers' teaching, family socioeconomic status and students' gender affect students' creativity in programming, in order to address this need. The research was carried out in the real context of Chinese programming education, giving a preliminary overall understanding of the factors influencing creativity in programming learning.

CONCLUSION
The results of this study provide partial support for the hypotheses constructed. The research draws the following conclusions. First, in the context of this research, the individual factors of learners, including learners' attitudes, engagement and approach in programming learning, as well as gender differences, have an important impact on students' creativity. Second, family economic and cultural capital can influence students' creativity, while social capital has no influence on students' creativity development. Third, teachers' programming teaching, including teaching methods and teaching management, has little influence on students' creativity. Moreover, the four independent variables of programming learning attitude, learning engagement, learning approach and family economic capital are the more important factors influencing students' creativity, and from them a regression equation can be built to help predict the creativity tendency of young Chinese programming learners: the higher a programming learner scores on these four aspects, the higher his or her level of creativity. As a result, this study comprehensively considers the three main aspects that may affect students' creativity: learners, teachers and families. The influence of these variables on students' programming creativity was examined, so as to provide insights for programming education and students' creative development.

Individual Learning Characteristics and the Influence of Gender on Creativity in Programming Education
Firstly, in the context of Chinese programming education, the direct factors that affect the creativity development of Chinese teenagers are individual learning characteristics, including programming learning attitude, learning engagement and learning approach. This result is consistent with other studies. For example, regarding learning attitude, Kirton (1976) found that an individual's cognitive attitude is one of the main factors determining different creative tendencies, and Amabile et al. (2005) also believed that positive emotional attitudes were positively correlated with creativity. As for the influence of programming learning engagement on creativity, Denner et al. (2012) found that when middle school students independently create diversified programming games, the innovativeness of the completed games varies with personal involvement and knowledge reserves. In terms of programming learning approach, problem-based, project-based and game-based learning approaches are more suitable for programming learning and conducive to the development of creativity (Tomos et al., 2017; Chis et al., 2018; Gunay et al., 2019), while rote memorization and traditional learning approaches are not well suited to programming learning (Wu et al., 2012; Zheng and Huang, 2019).

Secondly, in the context of Chinese programming education, learner gender affects both creativity and learning how to program. This result is consistent with existing research.
There are significant differences between male and female learners in learning (Peng, 2019); male learners have more active learning opportunities and experiences than female learners (Brophy and Good, 1970). A study of gender differences in creative thinking also found that the areas of the brain associated with semantic cognition, rule learning and decision making were more active in men than in women, and that divergent thinking was more easily activated (Abraham et al., 2014), which suggests that boys have certain advantages in creative activities. At the same time, this conclusion also reflects the characteristics of programming learning: programs themselves are highly abstract and strictly logical, which can easily cause fear and a lack of interest among female learners, and male learners are significantly more interested in programming learning than female learners (Sun and Li, 2019). Many scholars have proposed that programming teaching design and practice should take gender differences into account, so that all students can effectively participate in programming learning (Becker et al., 2019; Wee and Yap, 2021).

The Influence of Family Factors on Creativity in Programming Education
Firstly, in the context of programming education in China, teenagers' family economic capital is an important influence on the development of creativity. As noted in the introduction, although programming teaching in Chinese public schools currently faces many developmental difficulties, programming education companies are developing rapidly, carrying out interest-oriented and specialty-oriented programming education on a for-profit basis; they have established programming curriculum systems spanning early childhood to senior high school. This situation has drawn in a large amount of family economic capital. Some studies have also confirmed that families with high economic capital are more likely to provide a good learning environment and educational resources for their children (Carvalho, 2016). In contrast, children from families with lower economic status are more likely to face greater stress and hardship (Conger et al., 2010). Hence, teenagers with high family economic capital are more likely to take programming education courses and to dedicate more time to them (Zhou and Wang, 2014).

Secondly, the family cultural capital of Chinese teenagers learning programming has an influence on creativity. This result echoes that of Zhu (2013), who suggested that the cultural environment affects innovative behaviors. Studies note that academically successful parents are more likely to expose their children to rich resources and challenging classes (Woo et al., 2021) and to participate in intellectual activities with their children, thereby indirectly supporting children's creative development (Jankowska and Maciej, 2018). Such support is not only positive affirmation in attitude, but also economic, cultural and other forms of support. For example, a study by Liu and Morgan (2016) showed that parents with higher cultural capital are more effective in guiding their children: they can provide their children with more material resources, cultural knowledge, skills and other support, and their children's learning motivation and achievement are relatively high (Chiu and Chow, 2010).
Therefore, parents with higher family cultural capital are more likely to accept programming, to understand and be familiar with it, and thus to provide educational guidance and support in many respects for their children's programming learning and creativity development (Kong, 2017; Kong and Wang, 2021). Finally, contrary to the hypothesis, the family social capital of these Chinese young people had no obvious influence on their creativity. A likely explanation for this finding is that family social capital mostly plays a greater role in supporting children who are about to be employed (Peng, 2019). In contrast, the participants in this study were upper secondary school students in China. The place where they learn programming and develop creativity is the classroom, where they get more support from teachers and parents in terms of knowledge. This is consistent with Lareau's view; he pointed out that students' learning is extremely complicated and that the advantages of social class do not necessarily lead to good educational results (Zhou and Wang, 2015). As a result, when students learn complex programming knowledge, their family's social capital is often not directly related to creative development. The Impact of Teacher Teaching on Creativity in Programming Education Unlike other studies, this study found that current teachers' teaching had little impact on participants' creativity. This needs to be explained in combination with the current development of programming education in China. China has issued a new-generation artificial intelligence development plan, which clearly proposes to vigorously popularize programming education, and programming education has received unprecedented policy support. However, in the actual teaching of public schools in China, programming courses are not regarded as being as important as major subjects like English and math, and programming is not included in high-stakes examinations. In addition, although numerous studies have revealed that programming teaching should build student-centered classrooms (Ramirez et al., 2018), teachers are at present better at knowledge-transfer teaching, and their programming teaching methods, teaching experience, and teaching innovation still need to be improved (Huang and Huang, 2017; Ohashi et al., 2018). All these reasons expose the current plight of programming education in promoting the development of creativity in China, which is in urgent need of a breakthrough. Implications From the perspective of promoting the development of programming education, this study discusses the relationships between creativity and the family factors, programming teaching factors, students' learning factors, and gender of programming learners, which provides necessary theoretical support and a practical basis for programming education and learners' creativity. First of all, and most crucially, teachers need to design creative programming learning activities scientifically according to learners' learning characteristics and rules in order to cultivate learners' creativity, so as to promote understanding-based, inquiry-based, and project-based programming learning. Secondly, when promoting learners' creativity through programming education, attention should also be paid to gender differences. Programming teaching design and practice should take gender differences into account, so that all students can effectively participate in programming learning.
Thus, the gender difference in programming learners' creativity provides an important basis for teachers to design programming learning content and organize teaching activities reasonably. Finally, a future trend will be to cultivate learners' creativity through programming education, moving from out-of-school specialty education supported by family economic and cultural capital to popularized education in schools. Therefore, it is necessary to explore a new programming education ecology of home-school co-education and joint on- and off-campus education, so as to provide fair, continuous, and seamless opportunities and conditions for every student to develop their creativity through programming learning. LIMITATIONS AND DIRECTIONS FOR FUTURE RESEARCH While the present study has yielded findings with implications, we recognize that its design is not without limitations. The first limitation is that, for reasons of time, money, and the convenience of the researchers, the data for this study all came from students in a single city. Although this city is one of the fastest-developing cities for programming education in China, it may not be easy to extend the results to other regions and to students from different backgrounds. The second limitation is that the evaluation of the factors influencing creativity in this study relied on the nine factors analyzed above and did not consider the impact of programming education policies on learners' creativity. Therefore, in future research on programming education in China, we will consider other influencing factors and expand the sampling region. In conclusion, more data and studies on other influencing factors will further validate and complement the findings presented here. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. AUTHOR CONTRIBUTIONS JL and YZ designed the research. XS and MS performed the literature search and data analysis. JL, YZ, XS, and MS wrote, reviewed, and edited the manuscript. XL, JC, ZL, and FX reviewed and edited the manuscript. All authors contributed to the article and approved the submitted version.
2021-12-23T14:29:18.007Z
2021-12-23T00:00:00.000
{ "year": 2021, "sha1": "d90d048a761f69b171ab3d950add5eb2c18bccb1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "d90d048a761f69b171ab3d950add5eb2c18bccb1", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine" ] }
233561519
pes2o/s2orc
v3-fos-license
Associations between Meteorological Factors and Reported Mumps Cases from 1999 to 2020 in Japan The present study investigated associations between epidemiological mumps patterns and meteorological factors in Japan. We used mumps surveillance data and meteorological data from all 47 prefectures of Japan from 1999 to 2020. A time-series analysis incorporating spectral analysis and the least-squares method was adopted. In all power spectral densities for the 47 prefectures, spectral lines were observed at frequency positions corresponding to 1-year and 6-month cycles. Optimum least-squares fitting (LSF) curves calculated with the 1-year and 6-month cycles explained the underlying variation in the mumps data. The LSF curves reproduced the bimodal and unimodal cycles that are clearly observed in northern and southern Japan, respectively. In investigating factors associated with the seasonality of mumps epidemics, we defined the contribution ratios of the 1-year cycle (Q1) and the 6-month cycle (Q2) as the contributions of the amplitudes of the 1-year and 6-month cycles, respectively, to the entire amplitude of the time series data. Q1 and Q2 were significantly correlated with annual mean temperature. The vaccine coverage rate of a measles–mumps–rubella vaccine might not have affected the 1-year and 6-month modes of the time series data. The results of the study suggest an association between mean temperature and mumps epidemics in Japan. Introduction Mumps is an acute respiratory disease caused by a virus belonging to the family Paramyxoviridae. Direct contact with mumps patients is regarded as the most common transmission route, because infectious air droplets can invade the upper respiratory tract mucous membrane [1]. Mumps is generally considered to be less contagious than measles or chickenpox, which may explain why so many children reach adulthood without having been immunized by naturally acquired infection [2]. The incubation period averages 18 days and ranges from 14 to 21 days [3]. Onset of the disease is characterized by inflammation of the parotid gland with precursory fever [1]. Although most infections are mild, severe clinical cases can occur, involving complications such as aseptic meningitis, encephalitis, pancreatitis, and orchitis. Mumps is one of the main causes of sensorineural deafness acquired during childhood, which is difficult to cure. The treatment of mumps and its complications is essentially symptomatic: for fever, analgesics and antipyretics are administered, and for patients with meningitis, rest is advised. Infusion is indicated in cases of dehydration. Vaccines are the only way to effectively prevent mumps [4]. Currently, 122 countries in the world implement the standard two-dose schedule as a routine vaccination [5]. Among developed countries, only Japan has not yet included the mumps vaccine in its routine vaccination schedule; this omission is due to a measles-mumps-rubella (MMR) vaccine being discontinued in April 1993 on account of the occurrence of aseptic meningitis following vaccination, which was attributed to the mumps vaccine component of the MMR vaccine [3]. Mumps vaccine coverage has ranged from 30% to 40% throughout Japan in recent years, which creates a level of herd immunity insufficient to prevent outbreaks, and large mumps epidemics recur at 4- to 5-year intervals [3]. Mumps remains a serious public health issue in Japan. Seasonal distribution of mumps has been detected in various countries, including Japan [6][7][8][9][10][11][12][13][14][15].
A seasonal pattern has been detected in the USA, with significant peaks in April [6]. The seasonal distribution reportedly varies between northern and southern China, with respective peaks in spring and summer [7], and an analysis of the weekly reported cases of mumps in Japan revealed that seasonal peaks were not identical from year to year in the southern part of the country [8]. Such distributions indicate that meteorological factors may influence the transmission of mumps. The effects of meteorological factors on mumps transmission may differ from one country to another, and they may differ within the same country across climatic regions. To investigate the underlying causes of mumps epidemics in specific climatic regions, a systematic study is needed. Such an investigation should quantify the influence of meteorological factors on mumps incidence in different countries in each climatic region. Some studies have also investigated the effects of population density, as a socio-economic factor, on temporal variations in mumps epidemics in developing countries [16,17]. Previous studies investigating associations between epidemic patterns of reported mumps cases and meteorological conditions have each focused on only one region, such as Taiwan [14], or on multiple cities within a single climatic zone, such as Guangxi (southern China) in the subtropical monsoon region [16]. Meanwhile, Japan is divided into 47 prefectures extending from latitude 45° N to 20° N, and its meteorological conditions thus vary widely; the most northern prefecture has a subpolar climate, and the most southern prefecture has a subtropical climate. Furthermore, the 47 prefectures have population densities ranging from 71 to 5986 people/km², a wider range than that (from 96 to 383 people/km²) for the cities in Guangxi (southern China). As part of Japan's nationwide infectious disease reporting and surveillance system, mumps surveillance data have been collected in all 47 of the country's prefectures since mid-1999. We surmised that a subset of the mumps surveillance data may be useful in clarifying the associations of meteorological conditions and population density with mumps epidemics. The present study conducted a time series analysis incorporating the maximum entropy method (MEM) in a spectral analysis and the least-squares method (LSM) [18][19][20]. The effect of vaccinations on mumps epidemics was also investigated. The results obtained may facilitate the more accurate prediction of epidemics and more informed preparation for the effects of climatic changes on the epidemiology of infectious diseases. Materials The surveillance system of infectious diseases in Japan started to collect and publish weekly reported mumps incidence data for the whole of Japan in July 1981. Since April 1999, the incidence data have been published for all 47 prefectures of Japan. The incidence data indicate the number of mumps cases reported weekly per pediatric sentinel clinic. There are approximately 3000 pediatric sentinel clinics nationwide. Sentinel mumps cases were defined by clinical presentation; that is, sudden swelling of the parotid glands on one or both sides lasting longer than 2 days [21].
Mumps Data by Prefecture of Japan from April 1999 to December 2020 To investigate the association between the reported number of mumps cases and meteorological conditions and population density in detail, a time series analysis was conducted for the longest weekly incidence data for the 47 prefectures of Japan currently available, i.e., from April 1999 to December 2020. The present study is the first to perform a time series analysis of the data for this period. The data were obtained from the Infectious Diseases Weekly Report Japan [21]. We selected three representative sites from the 47 prefectures of Japan: (a) Hokkaido Prefecture, the most northern (latitude 43° N); (b) Tokyo Prefecture, the capital city in the east (latitude 35° N); and (c) Okinawa Prefecture, the most southern (latitude 26° N). Hokkaido has a subpolar climate, Okinawa has a subtropical climate, and Tokyo has a temperate climate. In Table 1, the three prefectures are arranged from northern to southern Japan by latitude and longitude. The 47 prefectures of Japan were shown in our preceding study [18]. The effect of vaccination on mumps epidemics was investigated with the longest weekly incidence data of mumps for the whole of Japan currently available, covering the period July 1981 to December 2020. The present study is the first to perform a time series analysis of the data for this period. The data were obtained from the Surveillance of Infectious Disease [22] and the Infectious Diseases Weekly Report Japan [21]. The data are given in Dataset S1. Meteorological Data In the present study, the daily mean temperature (°C), relative humidity (%), rainfall (mm), and wind velocity (m/s) were used, based on a study conducted in Japan's southern prefecture reporting that mean temperature and relative humidity were associated with an increased occurrence of mumps [8]. These data were collected at stations that are part of the Automated Meteorological Data Acquisition System [23], which operates in Japan's 47 prefectural capitals, and were obtained from the Japan Meteorological Agency website [24]. Daily data were obtained for a total of 7671 days from 1999 to 2020 (7671 data points). Using the daily data for mean temperature, relative humidity, and wind velocity from 1999 to 2020 for each prefecture, we calculated a mean value corresponding to the average of the daily data (one data point). For each prefecture, we also calculated the summation of the daily rainfall from 1999 to 2020 (one data point). Time Series Analysis We used a time series analysis consisting of MEM spectral analysis in the frequency domain and the LSM in the time domain [18][19][20]. The MEM is considered to have a high degree of resolution of spectral estimates [25]. Therefore, MEM spectral analysis allows precise spectral estimation even for short data sequences, such as the infectious disease surveillance data used in this study [18][19][20][25][26][27]. MEM Spectral Analysis We assumed that the time series data x(t) (where t is time) were composed of systematic and fluctuating parts [28]: x(t) = systematic part + fluctuating part. (1) To investigate temporal patterns of x(t) in the monthly time series data, we performed MEM spectral analysis [18].
This method of analysis facilitates the elucidation of periodicities in a time series of short data length with a high degree of frequency resolution, in contrast with other methods for analysing infectious disease surveillance data, such as the fast Fourier transform and autoregressive methods, which require time series of long data lengths [29]. MEM spectral analysis produces a power spectral density (PSD). The MEM-PSD, P(f) (where f represents frequency), for a time series with equal sampling interval ∆t can be expressed as $P(f) = P_m \Delta t / \left| 1 + \sum_{k=1}^{m} \gamma_{m,k} \exp(-i 2\pi f k \Delta t) \right|^2$ (2), where P_m is the output power of a prediction-error filter of order m and the γ_{m,k} are the corresponding filter coefficients. LSM The validity of the MEM spectral analysis results was confirmed by calculating the least squares fitting (LSF) curve pertaining to the original time series data x(t) with the MEM-estimated periods. The formula used to generate the LSF curve X(t) was as follows: $X(t) = A_0 + \sum_{n=1}^{N} A_n \cos\{2\pi f_n (t + \theta_n)\}$ (3). Equation (3) is fitted by the LSM to x(t) with unknown parameters f_n, A_0, and A_n (n = 1, 2, 3, ..., N), where f_n (= 1/T_n; T_n is the period) is the frequency of the n-th component, A_0 is a constant that indicates the average value of the time-series data, A_n is the amplitude of the n-th component, θ_n is the phase of the n-th component, and N is the total number of components. The LSM using Equation (3) must be nonlinear. Linearization of this nonlinearity is required to obtain unique optimum values of these parameters. In the present analysis, linearization was achieved using the MEM-estimated periodic modes (f_n). The value of f_n can be determined from the positions of the peaks in the MEM-PSD. The optimum values of the parameters A_0, A_n, and θ_n (n = 1, 2, 3, ..., N) in Equation (3), except for N, were then exactly determined from the optimum LSF curve (Equation (3)) calculated with f_n. The degree to which the optimum LSF curve (Equation (3)) reproduces x(t) (Equation (1)) was evaluated via Spearman's correlation (ρ) analysis performed using SPSS (Statistical Package for the Social Sciences) version 17.0J software (SPSS, Japan). A p value of ≤0.05 was considered statistically significant. Contribution Ratio Based on the results of the MEM spectral analysis, we assign the periodic modes f_n in Equation (3) that construct the seasonal variations of the mumps data. First, the power of each periodic mode is evaluated by the square of the amplitude, A_n², of the n-th mode constituting the LSF curve X(t) (Equation (3)). Second, we estimate R, corresponding to the power of the residual time series, which is obtained by subtracting the LSF curve X(t) (Equation (3)) from the original time series x(t) (Equation (1)). As a result, the total power Q of the original time series is obtained as $Q = \sum_{n=1}^{N} A_n^2 + R$ (4). Dividing both sides of Equation (4) by Q, we obtain the normalized relation $\sum_{n=1}^{N} (A_n^2/Q) + R/Q = 1$ (5), where A_n²/Q and R/Q respectively correspond to the contributions of A_n² and R to Q. We refer to the first term on the left-hand side of Equation (5) as the "contribution ratio", which means the contribution of A_n² normalized by Q [16][17][18]. If A_n²/Q in the first term becomes large, then the second term R/Q becomes small. The formula for the contribution ratio Q_n is therefore $Q_n = A_n^2/Q$ (6), where A_n indicates the amplitude of the n-th periodic mode constituting the LSF curve X(t) (Equation (3)) pertaining to the original data x(t) (Equation (1)), and Q is the total power of x(t).
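As a rough illustration of this pipeline, the following Python sketch implements a Burg-type maximum entropy AR fit and PSD evaluation (Equation (2)), the linearised least-squares fit of Equation (3) with the MEM-estimated frequencies held fixed, and the contribution ratios of Equation (6). It is a minimal sketch under stated assumptions, not the authors' code: the synthetic weekly series, the filter order, the peak-picking rule, and the reading of R as the mean squared residual are all illustrative choices.

```python
import numpy as np

def burg_psd(x, order, freqs, dt=1.0):
    """Maximum entropy (Burg) spectral estimate, cf. Equation (2): fit an AR
    prediction-error filter of the given order, then evaluate
    P(f) = P_m * dt / |1 + sum_k gamma_{m,k} exp(-i 2*pi*f*k*dt)|^2."""
    x = np.asarray(x, float) - np.mean(x)
    f, b = x[1:].copy(), x[:-1].copy()    # forward / backward prediction errors
    a = np.array([1.0])                   # filter polynomial, a[0] = 1
    p = np.dot(x, x) / len(x)             # prediction-error power P_0
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))  # reflection coeff.
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        p *= 1.0 - k * k                  # update prediction-error power P_m
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))) * dt)
    return p * dt / np.abs(z @ a) ** 2

def lsf_fit(t, x, mode_freqs):
    """Least-squares fit of Equation (3). With the MEM-estimated f_n held
    fixed, the model is linear in (A_0, A_n*cos(phase), A_n*sin(phase)),
    so a single lstsq call yields the optimum A_0, A_n, theta_n."""
    cols = [np.ones_like(t)]
    for fn in mode_freqs:
        cols += [np.cos(2 * np.pi * fn * t), np.sin(2 * np.pi * fn * t)]
    design = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(design, x, rcond=None)
    amps = np.hypot(beta[1::2], beta[2::2])   # amplitudes A_n
    return design @ beta, amps

# Illustrative use on a synthetic weekly series (not the real dataset):
rng = np.random.default_rng(0)
t = np.arange(0, 21, 1 / 52)                  # ~21 years, weekly sampling
x = (2 + np.cos(2 * np.pi * t) + 0.5 * np.cos(4 * np.pi * t)
     + 0.3 * rng.standard_normal(t.size))

freqs = np.linspace(0.05, 3.0, 600)           # cycles per year
psd = burg_psd(x, order=40, freqs=freqs, dt=1 / 52)

# Pick the two largest local maxima of the PSD (here ~1/yr and ~2/yr).
idx = np.where((psd[1:-1] > psd[:-2]) & (psd[1:-1] > psd[2:]))[0] + 1
modes = np.sort(freqs[idx[np.argsort(psd[idx])[-2:]]])

fit, amps = lsf_fit(t, x, modes)
resid = x - fit
Q = np.sum(amps ** 2) + np.mean(resid ** 2)   # Equation (4), with R read as
Q_n = amps ** 2 / Q                           # the mean squared residual
print(dict(zip(np.round(modes, 2), np.round(Q_n, 3))))  # contribution ratios, Eq. (6)
```

Holding the f_n fixed is precisely the linearisation described in the text: Equation (3) becomes linear in the constant and the quadrature coefficients, and each amplitude A_n is recovered as the hypotenuse of its cosine/sine coefficient pair.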
Segment Time Series Analysis The effect of vaccinations on mumps epidemics was investigated by adopting segment time series analysis, which has been widely used in fields such as the medical and biological sciences, as well as in the physical sciences and engineering [30][31][32]. In segment time series analysis, the weekly incidence data of mumps for the whole of Japan in the period July 1981 to December 2020 were divided into 200 segments. The segments each had a time range of 5 years, and their starts differed by intervals of 2 months. The MEM-PSD was then calculated for each segment. The 200 MEM-PSDs thus obtained were arranged in the order of the time sequence to construct a three-dimensional (3D) spectral array.
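The segmentation itself is mechanical. Assuming the burg_psd helper from the previous sketch, a 5-year window stepped in 2-month increments over weekly data could look like the following; the window arithmetic, the nominal 52 samples/year, and the array layout are assumptions of this sketch.

```python
import numpy as np
# assumes burg_psd() from the previous sketch

def segment_spectral_array(x, fs=52.0, win_years=5.0, step_months=2.0,
                           freqs=None, order=30):
    """Slide a 5-year window in 2-month steps over a weekly series x and
    compute one MEM-PSD per segment; stacking the PSDs in time order gives
    the array behind the 3D spectral plot (time x frequency x power)."""
    if freqs is None:
        freqs = np.linspace(0.1, 3.0, 300)      # cycles per year
    win = int(round(win_years * fs))            # samples per 5-year segment
    step = int(round(step_months / 12.0 * fs))  # samples per 2-month step
    rows = [burg_psd(x[s:s + win], order, freqs, dt=1.0 / fs)
            for s in range(0, len(x) - win + 1, step)]
    return freqs, np.vstack(rows)               # shape: (n_segments, n_freqs)
```

Plotted as a surface with segment start time on one axis and frequency on the other, persistent ridges at f = 1.0 and f = 2.0 would correspond to stable 1-year and 6-month modes of the kind reported later in Figure 7.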
Outline of the Analysis Procedure MEM spectral analysis was conducted first, and the long-term period was determined from the PSD for the time series data. Long-term trends in the data were then calculated using the LSF method (Equation (3)) with the MEM-estimated period. The LSF curve corresponding to the long-term trend was removed by subtracting it from the data, and the residual time series data were thus obtained. The MEM-PSDs of the residual time series were then calculated. The seasonality of mumps epidemics was investigated with contribution ratios (Equation (6)) for periodic modes of the residual data. Segment time series analysis was finally conducted. Number of Mumps Cases and Mean Daily Meteorological Data From April 1999 to December 2020, a total of 2,315,511 cases of mumps were reported in Japan. Patients aged 3-6 years reportedly account for approximately 60% of the total number of mumps patients [3]. Descriptive statistics for the weekly meteorological data are shown in Table 2. The overall mean daily temperatures from 1999 to 2020 were 9.3 °C in Hokkaido (latitude 43° N), 16.6 °C in Tokyo (latitude 35° N), and 23.4 °C in Okinawa (latitude 26° N). Table 2. Mean, standard deviation and standard deviation/mean for the daily temperature (a), daily relative humidity (b), daily rainfall (c), summation of daily rainfall (d), and daily wind velocity (e) from 1999 to 2020 in three prefectures of Japan. Temporal Variations in Mumps Incidence Data The three weekly incidence datasets gathered from April 1999 to December 2020 are shown in Figure 1. All incidence data exhibited long-term oscillations of an approximately 3- to 5-year period with shorter-term variations within a 1-year cycle. In Hokkaido (Figure 1a) and Tokyo (Figure 1b), the long-term oscillations were largely modulated by relatively irregular shorter-term variations within the long-term cycles. In Okinawa (Figure 1c), a long-term cycle was evident. Long-Term Periodicities of the Mumps Incidence Data The PSDs, P(f) (f [1/year]: frequency), were calculated for all the time series data shown in Figure 1a-c, and the respective results are shown in Figure 1a'-c' (f ≤ 0.95). In each PSD, the most dominant spectral peak was observed during an approximately 3- to 5-year period, and the longest period appeared as a prominent peak at a frequency position corresponding to a period longer than the length of the original data (20 years and 9 months, from April 1999 to December 2020); for example, a 33-year period for Hokkaido (Figure 1a'). For the spectral peaks observed in the frequency range of the long-term periodic mode (>1 year), the corresponding periods for the three prefectures are listed in Table 3. Using the periods listed in Table 3, the long-term trends in the mumps data for each prefecture were estimated via LSF using Equation (3). The results are shown in Figure 1a-c. The LSF curves for all prefectures reproduced the long-term trends in the original mumps data well. The good fit of the LSF curve to the original data is supported by the high respective ρ values of 0.91, 0.89 and 0.95 for Hokkaido, Tokyo, and Okinawa prefectures. Thus, the LSF curves are regarded as representative of the long-term variations in the original incidence data. Table 3. Long-term periodic mode (>1 year) corresponding to the spectral peaks observed in the low-frequency range (f ≤ 1.1) of the power spectral densities (Figure 1a'-c') for three prefectures in Japan. Short-Term Periodicities of the Mumps Incidence Data The residual data obtained by subtracting the LSF curves from the original data are shown in Figure 2a-c. Using these residual data, periodicities in the mumps data within periods of less than 1 year were investigated. The PSDs for the residual data are shown in Figure 2a'-c'. In each PSD, a prominent spectral peak was observed at f = 1.0 (= f1), corresponding to a 1.0-year period, and a spectral line at f2 (= f1 × 2), corresponding to the 6-month cycle, was observed at f = 2.0. For each PSD (Figure 2a'-c'), the prominent spectral peak at f2 (6 months) is a point of interest because it raises the question of whether the f2 mode has its origin in the harmonics of f1, in the 6-month cycle (bimodal cycle), or in a superposition of both. Figure 2. Seasonality of mumps incidence. (a-c) Residual time series data obtained by subtracting the long-term trends in mumps data from the mumps data for Hokkaido, Tokyo and Okinawa. (a'-c') Power spectral density of the residual time series data for Hokkaido, Tokyo, and Okinawa.
Associations between Mumps Incidence and Meteorological Conditions and Population Density Figure 3 shows plots of the contribution ratios of the 1-year cycle (Q1; panels a-d) and the 6-month cycle (Q2; panels a'-d') against the mean temperature, relative humidity, rainfall, and wind velocity data for all 47 prefectures. Figure 4a,a' show the respective plots of Q1 and Q2 against population density for all 47 prefectures. Spearman's ρ correlation coefficients between the contribution ratios (Q1 and Q2) and the meteorological data and population density were calculated, and the results are shown in Table 4. Unimodal Cycles in the Mumps Incidence Data Q1 was significantly correlated with mean temperature (ρ = 0.331, p < 0.05; Figure 3a) and relative humidity (ρ = -0.381, p < 0.01; Figure 3b), but not with rainfall (ρ = -0.032, p = 0.832; Figure 3c) or wind velocity (ρ = 0.084, p = 0.573; Figure 3d). Q1 increased as the population density increased, although there was some scattering of points (ρ = 0.514, p < 0.01; Figure 4a). These results indicate that the unimodal cycle of reported cases of mumps in Japan is significantly associated with temperature, relative humidity, and population density. These results also indicate that the bimodal cycle of reported cases of mumps in Japan is associated with the mean temperature and rainfall. Peak Months of Mumps Epidemics To investigate the peak months of mumps epidemics, the LSF curves for the residual data (Figure 2a-c) were calculated with the 1-year and 6-month periodic modes. The results are shown in Figure 5a-c. The respective correlations between the residual data and the LSF curves in Figure 5a,b, and c were ρ = 0.36, 0.49, and 0.34. The peaks in the LSF curve for Hokkaido (Figure 5a) were in early summer (June) and winter (December). For Tokyo (Figure 5b), the peaks in the LSF curve were also in early summer (June) and winter (December). For Okinawa (Figure 5c), the peak in the LSF curve was in winter (February).
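The prefecture-level correlation step summarised in Table 4 reduces to a rank correlation across 47 paired values. A minimal sketch, assuming the per-prefecture contribution ratios and meteorological means have already been computed and saved (the file names are hypothetical):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-prefecture inputs (length 47): contribution ratio of the
# 1-year mode and the 1999-2020 mean temperature; real values come from the
# analyses described above.
q1 = np.loadtxt("q1_by_prefecture.txt")
mean_temp = np.loadtxt("mean_temp_by_prefecture.txt")

rho, p = spearmanr(q1, mean_temp)   # the text reports rho = 0.331, p < 0.05
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```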
Effect of Vaccination on Periodic Structures of Mumps Epidemics To quantitatively estimate the effect of mass vaccination on the 1-year and 6-month cycles of mumps epidemics, we analyzed the incidence data of mumps for the whole of Japan during 1981-2020, as shown in Figure 6a. Therein, a decreasing trend in the incidence data was observed at the introduction of the MMR vaccine, which was started in April 1989 and discontinued in April 1993. The average incidence from July 1981 to March 1989, when mumps vaccination was completely voluntary, was approximately 1.33 (per 100,000). The average incidence from April 1989 to March 1993, when the MMR vaccination program was in place, and that from April 1993 to December 2020, when mumps vaccination was completely voluntary again, were 0.73 and 0.72, respectively; that is, 55% and 54% of the average incidence before the MMR vaccine was introduced (July 1981 to March 1989), corresponding to reductions of 45% and 46%.
The time-series analysis of the incidence data (Figure 6a) was conducted with the same procedure used for the prefectural data shown in Figure 1a-c. First, a spectral analysis of the original data (Figure 6a) was performed, and the PSD was obtained (Figure 6b). The long-term periods (>1 year) determined from the PSD (Figure 6b) are listed in Table 5. Next, the long-term trend was calculated as the LSF curve with Equation (3) (Figure 6a). This trend was removed by subtracting the LSF curve from the original data, and the residual data were obtained (Figure 6c). Table 5. Long-term periodic mode (>1 year) corresponding to the spectral peaks observed in the low-frequency range (f ≤ 0.95) of the power spectral density (Figure 6b) for the incidence data of mumps for the whole of Japan from July 1981 to December 2020 (Figure 6a). Period (Year) 65.6, 20.5, 13.4, 6.8, 5.6, 4.9, 3.9, 3. For the residual data in phases I, II and III (Figure 6c), MEM-PSDs were calculated. Semi-log plots of the PSDs are shown in Figure 6d,e, and f for phases I, II, and III, respectively. In each PSD (Figure 6d-f), common prominent peaks were observed at approximately f = 1.0 and f = 2.0, corresponding to the 1-year and 6-month cycles of epidemics, respectively. Q1 values for phases I, II, and III are 0.13, 0.14, and 0.09, respectively. Q2 values for phases I, II, and III are 0.24, 0.12, and 0.12, respectively. To further investigate the effect of vaccination on the periodic structures of mumps epidemics, segment analysis was conducted for the residual data (Figure 6c). All the residual data (Figure 6c) were divided into 200 segments. The segments each had a time range of 5 years, and the beginning of the range was delayed by 2 months. The PSD was then calculated for each segment. The 200 PSDs thus obtained were arranged in the order of the time sequence to construct the 3D spectral array, as shown in Figure 7, in which frequency is represented on the horizontal axis and time on the perpendicular axis running from bottom to top. In Figure 7, spectral peaks at the frequency f = 1.0, corresponding to a 1-year period, and f = 2.0, corresponding to a 6-month period, were unchangingly observed as a fine array over the entire time range. Figure 7. Three-dimensional spectral array for the residual data (Figure 6c). Discussion The present result that the occurrence of mumps was associated with the mean temperature and relative humidity was consistent with the results of previous studies conducted for Japan's southern prefecture [8] and Taiwan [7]. With respect to the mean temperature, in the current study there was a statistically significant relationship between the contribution ratios of the 1-year (Q1) and 6-month (Q2) cycles of reported cases of mumps and the mean temperature (Figure 3a,a').
A similar relationship was observed with regard to reported cases of chickenpox in previous studies [20,33,34], and the observations are concordant with results reported by Shoji et al. [35]. Shoji et al. [35] showed that the incidence of chickenpox increased at temperatures of 5-20 °C (i.e., the temperature range at which the chickenpox virus is activated) and decreased at temperatures lower than 5 °C and higher than 20 °C. In regions of northern Japan, such as Hokkaido (latitude 43° N), where the temperature falls below 5 °C in winter and exceeds 20 °C in summer, the occurrence of chickenpox epidemics was bimodal [33]. In that same study, bimodal cycles of chickenpox incidence were not evident at lower latitudes, and unimodal cycles were evident in the southernmost prefecture, Okinawa (latitude 26° N), where the temperature rarely falls below 5 °C in winter and exceeds 20 °C in summer. This transition of the patterns of chickenpox incidence in Japan was thought to depend on temperature [33]. With respect to mumps incidence, the present study found that the occurrence of epidemics transitions from bimodal cycles in Hokkaido (Figure 5a) to unimodal cycles in Okinawa (Figure 5c), as is the case for chickenpox. It is thus reasonable to hypothesize that the temporal patterns of mumps incidence in Japan (Figure 5) are associated with temperature. This hypothesis is supported by reports that the mumps virus can tolerate environmental conditions remarkably well [36] and is relatively stable at 21 °C, and that the reproduction of the mumps virus decreases when the external temperature is 4 °C and rapidly declines when the external temperature is 37 °C, resulting in a remarkable loss of infectivity [37,38]. Q1 and Q2, respectively, were significantly negatively associated with relative humidity (Figure 3b) and rainfall (Figure 3c'). The reasons behind the influence of relative humidity and rainfall on the transmission of mumps are unclear [15], but one potential explanation is that high relative humidity and large amounts of rainfall render outdoor activities unsuitable for children [11], which may in turn reduce the periodicity of mumps epidemics, resulting in the reduced Q1 associated with relative humidity (Figure 3b) and the reduced Q2 associated with rainfall (Figure 3c'). In Table 2, there is clearly large variance (corresponding to the value of SD/mean) in the daily rainfall data for the three prefectures. The amount of rainfall depends on the amount of water vapor in the atmosphere, which affects relative humidity [39]. The variance in the relative humidity for the three prefectures (Table 2) was relatively small compared with that for rainfall (Table 2). This finding results from relative humidity being constrained by the amount of saturated water vapor, which is dependent on air temperature [39]. It is thus reasonable to infer that the unimodal and bimodal cycles observed in the temporal variations of the reported mumps incidence were dominated by temperature. We found no statistically significant association between wind velocity and Q1 (Figure 3d) or Q2 (Figure 3d'). Meanwhile, researchers have found that the occurrence of mumps cases is positively associated with wind speed, at mean values of 1.8 m/s for Taiwan [14] and 2.2 m/s for Fujian province in southern China [12]. The mean values of the wind velocity of Fujian province (2.2 m/s) and Taiwan (1.8 m/s) are lower than those of 41 and 44, respectively, of the 47 prefectures in Japan.
It is possible that there is a lower threshold effect below a wind speed of 2.2 or 1.8 m/s that is not exceeded by those 41 and 44 prefectures in Japan, respectively. The dominant summer peak relative to the winter peak observed in Tokyo (Figure 5b) may be associated with the observation that the degree of seasonality of mumps was significantly associated with population density (Figure 4a) and with the fact that Tokyo has a much higher population density (5896 people/km²) than Hokkaido (71 people/km²) and Okinawa (605 people/km²). Given that patients aged 3 to 6 years reportedly account for approximately 60% of the total number of mumps cases [3], the present result that the Q1 values of mumps varied with population density (Figure 4a) may be related to environmental and/or biological conditions affecting individuals aged 3-6 years; in that age group, there is a specific type of mumps infection risk. In Tokyo, people, especially children, partake in outdoor activities more frequently in early summer, and this increases the likelihood of contact. This may cause the disease to spread more easily in Tokyo, which has a high population density, resulting in the dominant summer peak (Figure 5b). From 1981 until recently, the vaccination coverage rate has remained low, at approximately 30-40% [3], and the 1-year and 6-month modes are consistently observed as dominant spectral peaks in the PSDs for phases I, II, and III (Figure 6d-f, respectively) and in the 3D spectral array (Figure 7). Thus, the vaccination coverage rate might not have affected the 1-year and 6-month modes of the incidence data for the whole of Japan throughout the time range investigated in this study. When the vaccination coverage exceeds that required to prevent the spread of infection, 75-90% [3], the 1-year cycle and the seasonal peak superposed on the 1-year cycle will diminish, as observed in Finland [40]. Conclusions We confirmed that, in Japan, vaccination has not eliminated the seasonality of the mumps epidemics (Figures 6d-f and 7). The control of mumps requires that the vaccination coverage exceed that required to prevent the spread of infection (75-90%) [3] and, at the same time, the quantitative monitoring of the effect of the vaccination coverage on the 1-year and 6-month modes of the incidence data. The seasonality of the mumps epidemics has a significant correlation with meteorological factors (Figure 3), and we thus need to facilitate more informed preparation for the effects of climatic change on mumps epidemiology. We anticipate that the time series analysis methodology adopted in the present study, including MEM spectral analysis and the LSM, will be useful in future studies investigating the seasonality of various medical conditions as well as mumps. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/epidemiologia2020013/s1, Dataset S1: Time-series data of the weekly incidence data from all 47 prefectures of Japan. Data Availability Statement: The mumps dataset analyzed during the current study is contained in the Supplementary Materials (Dataset S1). The data are also available from refs. [21,22].
2021-05-04T22:05:42.192Z
2021-04-02T00:00:00.000
{ "year": 2021, "sha1": "4672ac00d8864c082ca876de5108e5c41c7cb058", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2673-3986/2/2/13/pdf?version=1617939568", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a963b3be252bdc8335039918b04d140c2f16f3e4", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Environmental Science" ] }
218811207
pes2o/s2orc
v3-fos-license
Bioinformatics Analysis Identifying Key Biomarkers in Bladder Cancer: Our goal was to find new diagnostic and prognostic biomarkers in bladder cancer (BCa), and to predict molecular mechanisms and processes involved in BCa development and progression. Notably, data collection is an inevitable and time-consuming step, and identifying complementary results required considerable literature retrieval. Here, we provide detailed information on the datasets used, the study design, and the data mining. We analyzed differentially expressed genes (DEGs) in the different datasets, and the most important hub genes were retrieved. We report the meta-data information of the population, such as gender, race, tumor stage, and the expression levels of the hub genes. We include comprehensive information about the gene ontology (GO) enrichment analyses and the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses. We also retrieved information about the up- and downregulation of genes. All in all, the presented datasets can be used to evaluate potential biomarkers and to predict the performance of different preclinical biomarkers in BCa. Summary Bladder cancer is one of the most common malignancies [1]. Although new treatment strategies and tools for surgical resection [2], neoadjuvant chemotherapy [3,4], and photodynamic therapy (PDT) [5] have been developed, BCa retains a high rate of recurrence [1]. Up to now, cystoscopy and bioptic histology are still the gold standards for diagnosing bladder cancer (BCa) [6]. No consensus on urinary markers or non-invasive screening strategies has been reached by the European Association of Urology (EAU). Furthermore, nuclear matrix protein 22 (NMP22) is only recommended by the American Urological Association (AUA) under certain conditions [7]. Therefore, it remains a priority to develop reliable, safe, and non-invasive diagnostic/prognostic biomarkers and therapeutic targets for BCa, and considerable efforts are ongoing. Bioinformatics analysis is necessary for the integration of, e.g., huge amounts of transcriptome, microarray, and RNA-sequencing data to disclose alterations in the gene expression, mutational burden, transcriptome, and proteome of cancer compared to non-cancer controls [8]. Of special importance are so-called hub genes, defined as highly connected genes, which can be regarded as representative of a distinct module in the gene network [9]. Furthermore, hub genes potentially play an important role in the progression of cancer. Therefore, they are good biomarker candidates and may even provide new therapeutic targets [8,10]. In the following, we provide supplemental results from our recent investigation [11]. For detailed information on the analytical methods and the software packages, please refer to the Materials and Methods section in the cited paper. Molecular complex detection (MCODE) [15][16][17] analysis identified 11 relevant modules (subnetworks), and cytoHubba (based on Cytoscape software) [18] classified 376 of the 418 overlapping DEGs (see Methods) as hub genes, i.e., the genes most interconnected in the networks/modules (Table S4) [8].
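For orientation, degree-based hub ranking of the kind cytoHubba performs can be sketched in a few lines with networkx; the edge-list file name is hypothetical, and this plain degree count stands in only for cytoHubba's "Degree" method, not its other topology scores.

```python
import networkx as nx

# Assumed input: a STRING-style edge list, one "geneA geneB" pair per line
# (the file name is a placeholder).
g = nx.read_edgelist("string_ppi_edges.txt")

# Rank genes by interaction degree, the quantity scored by cytoHubba's
# "Degree" method; the top of the ranking corresponds to hub genes.
ranked = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
hubs = [(gene, deg) for gene, deg in ranked if deg >= 11]  # cutoff used in the study
print(hubs[:10])
```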
To reduce the hub genes to the most promising, we defined 11 seed genes [15,16] for the 11 most important modules, and the subsequent analysis yielded 14 hub genes on the basis of their correlation to overall survival and degree of interaction (Table S5). The hub genes, ordered by descending interaction degree, were: CDK1 (98), CCNB1 (92), CCNA2 (84), KIF11 (84), CDC20 (83), UBE2C (83), MAD2L1 (81), AURKA (80), KIF20A (80), KIF2C (80), KPNA2 (67), TPM1 (29), CASQ2 (11), and CRYAB (11). Figure 2 depicts the results of the protein-protein interaction (PPI) analysis. We then performed a gene ontology (GO) enrichment analysis [26] and used the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis [27] to identify the pathways potentially related to these protein-coding genes and to predict the roles of these genes in BCa. The GO and KEGG analyses significantly enriched 'Pathways in cancer', 'Viral carcinogenesis', and 'Cell cycle', which relate to carcinogenesis and the progression of cancer. We list all significant results of the GO and KEGG analyses based on the 418 DEGs and a Benjamini-Hochberg value <0.05 in Table S6. The GO and KEGG analysis results based on the 14 hub genes are available and searchable at http://www.mdpi.com/2075-4418/10/2/66/s1, Table S5. Moreover, we also list all significant results of the GO and KEGG analyses of up- (Table S7) and downregulated DEGs (Table S8). In addition, we extracted the clinical meta-data from TCGA-BLCA for the correlation to overall survival (OS) and disease-free survival (DFS) based on the 14 hub genes (Table S5). We also performed other subgroup analyses, such as of the expression levels in groups based on tumor stage, lymph nodal metastasis, race of patients, gender of patients, histological subtype, and molecular subtype. We found that CDK1, CCNB1, CCNA2, KIF11, CDC20, UBE2C, MAD2L1, AURKA, KIF20A, KIF2C, KPNA2, TPM1, CASQ2, and CRYAB showed significantly higher expression in the Caucasian and African American cohorts than in the Asian (ASI) cohort. Except for TPM1, CASQ2, and CRYAB, all the genes were significantly overexpressed in both male and female bladder cancer patients; however, no significant difference was found between males and females. All the genes were significantly more highly expressed in non-papillary tumors than in papillary tumors (Table 2) [28]. In addition, CDK1, CCNB1, CCNA2, KIF11, CDC20, UBE2C, MAD2L1, AURKA, KIF20A, KIF2C, and KPNA2 were significantly upregulated in papillary and non-papillary tumors compared with non-cancerous tissues; in contrast, TPM1, CASQ2, and CRYAB were significantly downregulated in papillary and non-papillary tumors compared with non-cancerous tissues. Intriguingly, except for CRYAB, we found that TPM1 and CASQ2 were most significantly downregulated in 'Luminal Papillary' tumors, while the other genes were most significantly upregulated in the 'Neuronal' and 'Basal squamous' subtypes based on molecular subtyping (Table 2).
Literature Research We retrieved seven bioinformatics studies on BCa biomarkers based on public database analyses [9,[29][30][31][32][33][34], and we compared the biomarkers in the present study with those reported in the retrieved studies. We found that CRYAB and CASQ2 were so far unrecognized as biomarkers in previous studies. On the basis of an Oncomine meta-analysis (https://www.oncomine.org/), we here present the meta-analysis of the expression levels of the hub genes described, but not shown, in our previous study (Figure 3 and Table S9). The genes compared in the meta-analysis were CCNB1, CCNA2, KIF11, CDC20, UBE2C, MAD2L1, AURKA, KIF2C, CASQ2, CRYAB, and KIF20A. Furthermore, except for CRYAB and CASQ2, which have been shown in our previous paper, we constructed the expression body maps of the other 12 hub genes reported in our previous study using GEPIA (http://gepia.cancer-pku.cn, accessed on 11 November 2019). Body maps are an impressive way to visualize the differences in gene expression between normal and tumor tissues (Figure 4). Ultimately, our research results were roughly in line with the majority of the retrieved studies. Methods The workflow of the current study is depicted in Figure 1, which has been published before [11]. For more detailed information on the datasets, materials, and methods, please refer to this article. Data Source Identification and Data Mining The quality control of the microarray data was conducted with relative log expression (RLE) box plots in RStudio (version 1.1.463). Criteria were defined for identifying DEGs by comparing the expression levels between the non-cancerous tissues and cancer samples, where |log FC (fold change)| > 1 and a p-value < 0.05 were considered statistically significant [15,40].
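A minimal sketch of the DEG filter and the cross-dataset overlap step described here and in the next subsection, assuming limma/GEO2R-style result tables with "gene", "logFC", and "P.Value" columns (the file and column names are assumptions):

```python
from collections import Counter
import pandas as pd

def deg_set(path):
    """Apply the stated DEG criterion, |logFC| > 1 and p < 0.05, to one
    limma/GEO2R-style result table (column names are assumptions)."""
    tbl = pd.read_csv(path)
    degs = tbl[(tbl["logFC"].abs() > 1) & (tbl["P.Value"] < 0.05)]
    return set(degs["gene"])

# Placeholder file names, one exported table per GEO dataset.
files = ["gse_a_limma.csv", "gse_b_limma.csv", "gse_c_limma.csv"]
deg_sets = [deg_set(f) for f in files]

# Overlap step: keep DEGs that appear in at least two different datasets.
counts = Counter(g for s in deg_sets for g in s)
shared = sorted(g for g, c in counts.items() if c >= 2)
print(len(shared), "DEGs found in >= 2 datasets")
```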
Acquisition of the Hub Genes DEGs had to be expressed in at least two different GEO datasets. The overlap between DEGs in the different datasets was determined with FUNRICH software (version 3.1.3), and 418 DEGs were identified. Based on the interaction degrees of the 418 DEGs extracted from the STRING database [14], cytoHubba analysis [18,41] reported 376 hub genes, of which 135 fulfilled the criterion of degree ≥11. However, to find the most important hub genes, we focused on the top 10 hub genes, all showing a degree ≥80. Additionally, we included another four hub genes, KPNA2, TPM1, CASQ2, and CRYAB, which not only significantly correlated with overall survival based on the results from the Human Protein Atlas, but also showed a degree ≥11 [15,40]. DAVID and FUNRICH were both used to annotate, visualize, and integrate the discoveries, and to extract the crucial biological information. We also used DAVID to analyze the up- and downregulated genes separately. A Benjamini-Hochberg FDR <0.05 was considered significant in the DAVID and FUNRICH analyses. In addition, we performed GO and KEGG analyses to identify the critical biological processes (BP), cellular components (CC), molecular functions (MF), and essential pathways potentially related to the initiation and development of BCa; p < 0.05 was considered statistically significant. Clinical information was extracted from TCGA-BLCA using R software and, subsequently, the expression levels of the 14 hub genes in subgroups were analyzed based on tumor stage, lymph nodal metastasis, race of patients, gender of patients, histological subtype, and molecular subtype. To evaluate the prognostic value of the identified DEGs, we performed Kaplan-Meier survival analyses of overall survival (OS) and disease-free survival (DFS); p < 0.05 was considered statistically significant. Literature Retrieval and Oncomine Meta-Analysis The PubMed, EMBASE, Science Direct, and Google Scholar databases were used to search for and identify published results about bioinformatics analyses in BCa, via the Google search engine. The data collection process ended in November 2019. The retrieval criteria were rigorous; filtering for bioinformatics analysis and BCa, only 7 full-text papers were returned [9,[29][30][31][32][33][34]. On the basis of the Oncomine database, 5 studies were relevant [35], and we gathered the information on the 14 hub genes from these 5 previous studies. User Notes The present report describes the character of the research "Identification of key biomarkers in bladder cancer: Evidence from bioinformatics analysis" [11]. Furthermore, the present report provides a convenient way to use extended datasets for biomarker discovery and hypothesis generation. Supplementary Materials: We provide the supplementary Tables S1-S3 listing step by step the identification process of the most promising DEGs. The DEGs listed in Table S3 were used during the following analysis steps. We also indicated the up- or downregulation of those DEGs (↑; ↓) in bladder cancer. Table S4 provides the 376 hub genes defined by cytoHubba. Table S5 summarizes the clinical and gene expression data of the n = 406 TCGA-BLCA patients for the 14 hub genes, defined from their degree of network interaction, and the 11 seed genes, defined from the 11 most important modules. This table is the basis for the gene expression analyses. Tables S6-S8 summarize the results of the GO and KEGG pathway analyses. Table S9 provides the data of the Oncomine meta-analysis. The data tables may be used to construct a complete data set after applying different normalization strategies such as cross-platform normalization or batch-effect removal. This data set can be used as a benchmarking data set for machine learning-based feature selection in data-driven biomarker research. The following supplemental data are available online at http://www.mdpi.com/2306-5729/5/2/38/s1, Table S1: DEGs extracted from the different datasets, Table S2: 726 DEGs extracted from 5 GEO datasets,
User Notes The present report describes the character of the research "Identification of key biomarkers in bladder cancer: Evidence from bioinformatics analysis" [11]. Furthermore, the present report provides a convenient way to use extended datasets for biomarker discovery and hypothesis generation. Supplementary Materials: We provide the supplementary Tables S1-S3 listing step by step the identification process of the most promising DEGs. The DEGs listed in Table S3 were used during the following analysis steps. We also indicated up- or downregulation of those DEGs (↑; ↓) in bladder cancer. Table S4 provides the 376 hub genes defined by cytoHubba. Table S5 summarizes the clinical and gene expression data of the n = 406 TCGA-BLCA patients for the 14 hub genes, defined from their degree of network interaction, and the 11 seed genes, defined from the 11 most important modules. This table is the basis for the gene expression analyses. Tables S6-S8 summarize the results of the GO and KEGG pathway analyses. Table S9 provides the data of the Oncomine meta-analysis. The data tables may be used to construct a complete data set after applying different normalization strategies such as cross-platform normalization or batch-effect removal. This data set can be used as a benchmarking data set for machine learning-based feature selection in data-driven biomarker research. The following supplemental data are available online at http://www.mdpi.com/2306-5729/5/2/38/s1, Table S1: DEGs extracted from different datasets, Table S2: 726 DEGs extracted from 5 GEO datasets,
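The suggested reuse as a benchmarking set presupposes some cross-platform harmonization of the expression tables; one simple option is quantile normalization. A minimal sketch, assuming a genes-by-samples matrix — the report does not prescribe any particular method, so this is only one possibility:

```python
import numpy as np

def quantile_normalize(expr: np.ndarray) -> np.ndarray:
    """Quantile-normalize a genes-x-samples matrix so that every column
    shares the same empirical distribution (ties broken arbitrarily)."""
    ranks = np.argsort(np.argsort(expr, axis=0), axis=0)   # per-column ranks
    mean_quantiles = np.sort(expr, axis=0).mean(axis=1)    # reference distribution
    return mean_quantiles[ranks]

expr = np.array([[5.0, 2.0, 3.0],
                 [2.0, 1.0, 4.0],
                 [3.0, 4.0, 6.0],
                 [4.0, 2.0, 8.0]])
print(quantile_normalize(expr))
```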
2020-04-23T09:03:25.944Z
2020-04-16T00:00:00.000
{ "year": 2020, "sha1": "dd19e84bf4b68bfee6a26d8eeca97b905f7bfe87", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2306-5729/5/2/38/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "cee085ccd25eedd6624e39286ada8314f40eb059", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Computer Science" ] }
15250968
pes2o/s2orc
v3-fos-license
Principal Components Analysis of Atopy-Related Traits in a Random Sample of Children Aim. To study the relationship between atopy-related traits in a random sample of children. Methods. A total of 1007 randomly selected children, 7-17 years of age, from Copenhagen, Denmark were studied. The children were interviewed about symptoms of atopic diseases, and skin test reactivity, serum total IgE, lung function, and airway responsiveness were measured. Principal components analysis was performed in order to examine the relationship between the different traits. Results. Most of the studied traits were significantly correlated. A three-component solution explained about 55% of the variation in the observed traits. The first component loaded most strongly on hay fever, serum total IgE, skin test reactivity and sensitisation to grass, cat and house dust mite allergen; the second factor was most associated with asthma, airway obstruction, and airway hyperresponsiveness, whereas the third factor corresponded most strongly to atopic dermatitis. There was some indication of cross-relations between the three components with respect to serum total IgE. Conclusion. Asthma, hay fever, and atopic dermatitis are characterised by different sets of biomarkers, suggestive of a high degree of heterogeneity within the atopic syndrome. Introduction Asthma, hay fever, and atopic dermatitis often co-occur, and this can be attributed both to shared genetic and environmental risk factors [1]. Furthermore, different biomarkers for atopic disease, such as serum IgE, lung function, airway responsiveness, airway inflammation, atopic sensitisation, and serum eosinophilia, have been shown to correlate, also in part because of a shared aetiological basis [2]. However, the correlations between both the atopic diseases and their intermediate phenotypes are incomplete. Notably, asthma and airway hyperresponsiveness (AHR) are closely linked, but far from all asthmatics exhibit AHR [3]. Furthermore, airway obstruction and low lung function are inconstant features of asthma. Atopic sensitisation is closely associated with hay fever, asthma, and atopic dermatitis, but this association seems to be stronger in children than in adults [4]. Sensitisation to specific allergens is seen both in asthma and hay fever, but sensitisation to indoor allergens like house dust mite is more closely linked to asthma, whereas hay fever is more often characterised by sensitisation to outdoor allergens such as grass pollen [4]. Studying the relationship between atopic diseases and their intermediate phenotypes can help in elucidating novel pathways towards which different treatments can be targeted. We performed a principal components analysis of a series of clinical and intermediate indicators of atopic disease in a general population of children. Population. Two different random samples of children from Copenhagen, Denmark were studied. The first sample was examined in 1986 [5] and the second in 2001 [6]. All subjects were randomly ascertained through the Civil Registration System and had a mean age of 12 years, age range 7-17 years. In 1986 and 2001, a total of 1000 and 1500 subjects, respectively, were identified. However, due to emigration only 983 and 1440 subjects, respectively, were eligible for the studies in 1986 and 2001. A total of 527 (53.6%) participated in 1986, whereas 480 (33.3%) participated in 2001.
Due to the low participation rates, telephone interviews were conducted among 100 and 116 randomly selected families from the group of nonrespondents in 1986 and 2001, respectively. For every household, the interview was first performed with the child, and subsequently a parent was consulted to reach consensus. The subjects interviewed by telephone did not differ significantly from the subjects who were clinically examined with respect to sex, age, and predisposition to atopic disease, but on both occasions there were significantly fewer children with symptoms of allergy among the group of nonrespondents. Only data for subjects who participated in the clinical examination were included in the present analysis. The local Scientific Ethics Committee approved the study, and informed consent was obtained from all participating subjects and their parents. Clinical Interview. All participants were interviewed about atopic diseases. Subjects were considered to have asthma, hay fever, and atopic dermatitis if they responded affirmatively to a series of questions adopted from the American Thoracic Society [7]. For details, see [6,8,9]. Skin Prick Test and Measurement of Serum Total IgE. Skin prick tests (SPTs) were performed using standard dilutions of nine common aeroallergens. The allergens used were birch, grass, mugwort, horse, dog, cat, house dust mite (HDM) (Dermatophagoides pteronyssinus), and mould (Alternaria iridis and Cladosporium herbarum). The concentrations of allergen were 100,000 BU/mL (Phazet system; Pharmacia, Denmark) in 1986 and 10 HEP (Soluprick SQ system; ALK-Abelló, Denmark) in 2001. Reactions were read after 15 minutes. A positive result was defined as a positive reaction to at least one of the allergens, and a reaction was considered positive if the mean wheal diameter was at least 3 mm. The participants were requested to discontinue medications that contained antihistamines at least 3 days before skin testing. Levels of serum total IgE were measured with a paper radio immunosorbent test (PRIST, Pharmacia, Copenhagen, Denmark) in 1986 and with an enzyme-linked immunosorbent assay (ELISA) on Immulite 2500 (DPC, New York, USA) in 2001. Results were expressed as kIU/L. Lung Function and Bronchial Responsiveness Test. The preprovocation values of forced expiratory volume in 1 second (FEV1) and forced vital capacity (FVC) were measured, and the ratio FEV1/FVC was calculated. The methods by Cockcroft et al. and Yan et al. were used for measuring airway responsiveness to inhaled histamine in 1986 and 2001, respectively [10,11]. According to the Cockcroft method, a Wright nebulizer delivered the histamine and the subjects inhaled by normal tidal-volume breathing. Nine concentrations of histamine were used, from 0 (saline) to 8 mg/mL, and the test was terminated when the maximum concentration was reached or when a drop in FEV1 of more than 20% was observed; the concentration causing this drop is the provocative concentration (PC20). AHR was defined as a PC20 of 8 mg/mL or less. According to the method by Yan, each aerosol was inhaled starting with saline and followed by increasing doses of histamine until a cumulative dose of 7.8 µmol had been reached. The test was terminated when the maximum concentration had been reached or when a 20% decline in FEV1 had occurred before the end of the dosing regimen. For all subjects experiencing at least a 20% decline in FEV1, the concentration causing a 20% fall in FEV1 (PD20) was calculated. AHR was defined as a PD20 below 3.9 µmol.
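The cited challenge protocols derive the provocative concentration by log-linear interpolation between the last two histamine doses; the sketch below implements that conventional calculation (the formula is the standard one from the challenge-test literature, not quoted from this study):

```python
import math

def pc20(c1: float, c2: float, r1: float, r2: float) -> float:
    """Provocative concentration causing a 20% fall in FEV1, by log-linear
    interpolation between the last two histamine concentrations
    c1 < c2 (mg/mL) with percent falls r1 < 20 <= r2."""
    log_pc20 = (math.log10(c1)
                + (math.log10(c2) - math.log10(c1)) * (20 - r1) / (r2 - r1))
    return 10 ** log_pc20

# Example: falls of 12% at 2 mg/mL and 26% at 4 mg/mL
print(round(pc20(2.0, 4.0, 12.0, 26.0), 2))  # ~2.97 mg/mL
```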
Statistical Analysis. The following variables were included in the analysis: asthma, hay fever, atopic dermatitis, FEV1/FVC, AHR, serum total IgE, positive skin prick test, and sensitisation to grass, cat, and HDM allergen. Principal components analysis was used to examine the correlational structure of the data. For the optimal solution we used varimax rotation with Kaiser normalisation. Only components with eigenvalues above 1.0 were retained in the solution. The data were analysed with the statistical package SPSS 17.0 (SPSS Inc., Chicago, IL, USA). Results The prevalence of asthma, hay fever, and atopic dermatitis was 7.1, 17.3, and 22.1%, respectively. The overall rate of atopic sensitisation was 21.8%, whereas the prevalence of AHR was 11.6% (Table 1). Significant correlations were observed among most of the traits (test of sphericity, P < .001). Particularly, positive skin prick test and serum total IgE correlated well with all other traits (except FEV1/FVC). Asthma was most strongly correlated with AHR (r = 0.38), whereas hay fever correlated most with positive skin prick test (r = 0.46) and grass allergen (r = 0.43). Cat allergen was significantly correlated with all three atopic diseases (asthma (r = 0.25), hay fever (r = 0.34), and atopic dermatitis (r = 0.18)). FEV1/FVC was most strongly associated with AHR (r = −0.18). A three-component solution explained about 55% of the variation in the observed traits (Figure 1). The first component loaded most strongly on hay fever (r = 0.66), serum total IgE (r = 0.50), skin test reactivity (r = 0.89) and sensitisation to grass (r = 0.69), cat (r = 0.65), and house dust mite (r = 0.71) allergen; the second factor was most associated with asthma (r = 0.67), airway obstruction (r = −0.62), and airway hyperresponsiveness (r = 0.74), whereas the third factor corresponded most strongly to atopic dermatitis (r = 0.90). There was some indication of cross-relations between the three components in relation to serum total IgE (Table 1). Table 1: Correlations between atopy-related traits in a population of 1007 children, 7-17 years of age.
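The extraction-and-rotation procedure described under Statistical Analysis was run in SPSS; a rough Python equivalent is sketched below, with random stand-in data in place of the cohort's ten standardized traits and a textbook varimax implementation:

```python
import numpy as np

def varimax(loadings, tol=1e-6, max_iter=100):
    """Varimax rotation of a p-by-k loading matrix (textbook algorithm)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var_sum = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        if s.sum() < var_sum * (1 + tol):
            break
        var_sum = s.sum()
    return loadings @ rotation

rng = np.random.default_rng(0)
data = rng.normal(size=(1007, 10))       # stand-in for the 10 standardized traits
corr = np.corrcoef(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
keep = eigvals > 1.0                     # Kaiser criterion: eigenvalue above 1.0
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
print(varimax(loadings).round(2))
```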
Discussion We examined the relationship between different atopic indicators in a random sample of ∼1000 children, aged 7-17 years, and identified three major classes of the studied traits relating to (1) hay fever and atopy, (2) asthma, airway responsiveness and airway obstruction, and (3) atopic dermatitis. Furthermore, we found that serum total IgE seemed to explain some cross-relation between these three groupings, indicating that IgE production is an underlying trait common to the different atopic manifestations. Our analysis only retained about 55% of the variability in the studied traits, consistent with a high degree of heterogeneity within the atopic syndrome. So although some categorisation could be made in regard to separating upper airway symptoms from lower airway symptoms and skin symptoms, there is still a high degree of unexplained variation that cannot be sufficiently accounted for by only three latent factors. Our definition of atopic diseases was based on a semi-structured interview, which can be biased by parental recall and subjective interpretation. A more detailed symptom registration and longitudinal data with information on change in quality and severity of symptoms over time within the same individual could have made disease definitions more robust. Furthermore, differences in prevalence rates of atopic diseases between the two cohorts could have influenced the results. Also, inclusion of additional biomarkers, such as sputum and blood eosinophils, exhaled nitric oxide, and inflammatory proteins, would have been favourable but could have induced more missing data. We tested for AHR with histamine, which lacks specificity for detecting airway inflammation. Furthermore, different tests for AHR, skin test reactivity, and IgE were used in the two cohorts. Genotype data and measurements of other confounding variables such as lifestyle factors could have contributed to a more comprehensive understanding of the interrelationships between the atopic diseases. The low participation rate in the study could have led to a skewed selection of subjects; in particular, there was some indication of over-recruitment of symptomatic individuals, which could have had an influence on the distribution of the studied traits. Our results may only be representative for children and adolescents, whereas adults may exhibit a different pattern of correlations between traits, as would populations from other geographical areas. We conclude that asthma, hay fever, and atopic dermatitis, to some extent, are characterised by different sets of biomarkers. However, a large proportion of the variation in the studied traits was not explained by our proposed decomposition, indicating significant heterogeneity within the atopic syndrome.
2018-04-03T02:53:51.410Z
2011-06-15T00:00:00.000
{ "year": 2011, "sha1": "d6c31b86747b480727ac57549676c535e62fbb1c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5402/2011/170989", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d6c31b86747b480727ac57549676c535e62fbb1c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
37038455
pes2o/s2orc
v3-fos-license
ORDER-PARAMETER MODEL FOR SYNERGETIC THEORY-BASED RAILWAY FREIGHT SYSTEM AND EVOLUTION IN CHINA This paper investigates the synergetic nature and complexity of the railway freight system and selects thirteen parameters (railway fixed asset investment, GDP, railway revenue kilometres, etc.) as the system's state variables. Using the method of least squares and the method of external function, an order-parameter model for the synergetic theory-based railway freight system is built, which can support studies of railway freight system evolution. The result shows that railway fixed asset investment is the order-parameter that governs the evolution of the railway freight system: the average random fluctuation parameter ω is 0.7060, which means that the mean fluctuation period of the railway freight system is nine years. The evolution of the railway freight system is a gradual process with abrupt changes from time to time. INTRODUCTION The railway freight system in China is the primary means for the delivery of coal, petroleum, iron, steel, non-metallic ore, metallic ore, grain, and other key materials which are vital to the development of the nation's economy. The railway freight system is a complicated open system, involving economic, transport, and social aspects that are mutually beneficial and restrictive. Research on the targeted control of the order-parameter of the railway freight system will guide the development of the railway freight system in a more efficient and orderly way. Healthy growth of this complex transport system will better serve society and the economy. Synergetics was proposed by the famous physicist Hermann Haken in the 1970s and is now a leading theory for various complex systems [1]. Synergetics is used to discover the self-organization of a complex system [2][3][4][5], to discuss the influence of order-parameters on a system [6][7][8], and to study the rules according to which complex systems evolve from a disordered status to an ordered status [9][10][11][12][13][14][15]. Few studies have been conducted on synergetics in the field of transport, mostly about sustainable transport development [16][17][18], traffic administration [19][20][21][22], multi-modal transport [23][24], and other applications [25][26][27][28]. Most of these studies focus on local issues without understanding the rules of evolution of the transport system. This article uses synergetics theory to construct the order-parameter model of the railway freight system. By locating the order-parameters that influence the system, the evolution of the railway freight system is analyzed.
INTERPRETATION OF RAILWAY FREIGHT SYSTEM SYNERGETIC NATURE The railway freight system is a complex system combining the natural system and the manual system. As time goes on, the synergetic result is achieved by the movements between the sub-systems inside. As an open system, the railway freight system evolves from a disordered status to an ordered status, exchanging substance and energy with the external world. In this large system, thousands of people gather and engage in political, economic, and cultural activities. It is truly a mega-system characterized by complexity. The synergetic theory is applied to research the structure and order of the complicated railway freight system because the railway freight system has the following synergetic characteristics: (1) The railway freight system developed in a self-organized way. It is designed and operated by certain rules; in other words, it runs in an ordered way. China is a typical continental country, characterized by asymmetric resource distribution and industrial distribution. Its economic activities cover a wide area. With the large-scale transportation of products and production elements, the railway freight system is born as an outcome of the synergetic coordination between transportation supply and transportation demand. The synergetic effects of the stream of people, the logistics stream, and the information stream result in the spatial clusters of the railway freight system. The result is that elements of the railway freight system converge and disperse in space. The railway freight system forms by self-organization, and once it comes out, it develops a structure with its own nature and rules. It is in essence an organized organization. It operates by its own rules, which are in fact the order of the railway freight system. (2) The self-organization of the railway system is more complicated than that of the natural system. Self-organization of the natural system is an optimized way of evolution after long-time natural maturation. It is a relatively efficient way for the recycling of natural resources and matter energies. The railway freight system comes out spontaneously and is driven by many factors, such as the environment of the transport market, demands on the transport market, social and economic development, and the competitiveness of railway freight service and management. The development of the railway freight system thus exhibits different structures and complexities from the natural system. The railway freight service involves trains, machines, engineering, power supply, and vehicles. Its growth is closely related to society and the economy, the environment, and resources. The departments and elements in the system's development are mutually cooperative and competitive, which is a synergetic phenomenon. This increases the complexity of the self-organization of the railway freight system. The railway freight system exchanges substances and energies with the external world, a self-organization process dynamically adapting to the outside world.
(3) The railway freight system based on synergetics is an organized continuum. The development of the railway freight system includes three processes. The first is the evolution of the railway freight system from a non-organized status to an organized one, from disorder to order; it is the origin of an organization. The second is the process featuring the railway freight system's in-shot: the system starts running according to certain rules, and the orderliness is improved by the in-shot; this process features complexity. The third process concerns the railway freight system's organizational structure and function: from simplicity to complexity. The railway freight system's synergetics is a self-organized process, with the synergetic process running through the whole development process. The railway freight system is an open and complicated system, which exchanges substances, energies, and information with the external world to keep it running on the right track. Through the spatial movement of cargoes, it evolves. The railway freight system is characterized by non-equilibrium, with the distribution of substances and energies in an unbalanced way within the system. From the perspective of time, the system is in the growth stage, and from the perspective of space, the system exhibits regional differences (relatively developed in coastal areas and lagging in western regions). The elements inside the railway freight system follow a non-linear mechanism: random fluctuation of any element might result in minor changes of the system status, which are then amplified by the non-linear feedback mechanism, which in turn leads to major changes of the system. Fluctuations thus bring abrupt changes to the system as a whole, which results in a more coordinated and orderly status. ORDER-PARAMETER MODEL FOR SYNERGETICS-BASED RAILWAY FREIGHT SYSTEM Selection of railway freight system status variables In the social and economic system, the development of the railway freight system is not a separate process; instead, it is closely related to the growth of the national economy, the adjustment of the industrial structure, and some other primary social features.
To locate the order-parameters of the railway freight system, it is necessary first to find out the major factors that influence the transport of railway cargoes. Based on the synergetic theory, there are a variety of parameters that can be used to evaluate and indicate the system development within the railway freight system. The railway freight system interconnects the national economy system and the transport system, while being under the influence of other systems. Therefore, the article defines thirteen status variables for the railway freight system. For these 13 parameters, data from 1991 to 2009 are indicated in Table 1. Description of order-parameter modelling Suppose a non-equilibrium system characterized by $n$ varying parameters. It can then be indicated with the $n$-dimensional vector $\mathbf{q} = (q_1, q_2, \ldots, q_i, \ldots, q_n)$. Generally, the motion form of $\mathbf{q}$ is expressed by the generalized Langevin equation $\dot{q}_j = k_j(q_1, q_2, \ldots, q_i, \ldots, q_n) + \xi_j(t)$ (1), where $\xi_j(t)$ is the random fluctuation generated by perturbation and $k_j(q_1, q_2, \ldots, q_i, \ldots, q_n)$ is a non-linear function of the varying parameters in the system. With $\xi_j(t)$ left out, equation (1) can be expressed as $\dot{q}_j = f_j(\mathbf{q})$ (2). In equation (2), $f_j(\mathbf{q})$ is a non-linear function, and equation (2) can be expressed as $\dot{q}_j = \sum_k a_{jk} q_k + \tilde{f}_j(q_1, q_2, \ldots, q_n)$ (3). In equation (3), $(\gamma_1, \gamma_2, \ldots, \gamma_n)$ are the relaxation coefficients, with $a_{ii} = -\gamma_i$. According to Haken's synergetics theory, the order-parameter, on the one hand, is the outcome of the sub-systems' collective movements within the system (mutually competitive and synergetic); on the other hand, once it comes into being, it enslaves the sub-systems and dominates the overall evolving process. The order-parameter, as a slow relaxation parameter, takes quite a long time, or even an infinite time, to achieve the new steady relaxation state. In the system's evolving process, it always plays a decisive role. Therefore, $\gamma_i$ can be used to judge the order-parameter: the lower $\gamma_i$ is, the more important the role the status variable plays. The status variable with the minimum value of $\gamma_i$ is the order-parameter. For the order-parameter $u$ of a general non-linear transition system, the system's evolution is dominated by $u$, and all other variables are enslaved by $u$. Construction and solution of the model In the evolvement of a multi-variable system, there is a synergetic effect working between the different parameters within the system, that is, a synergetic and competitive relationship between variables. The synergetic term is $a_{ij} x_j(t)$ (parameter $j$ has a synergetic effect on parameter $i$); the corresponding competitive term describes the case in which parameter $j$ has a competitive effect on parameter $i$. From these terms, together with the average rate of change of $x_i(t)$, the non-linear differential equation (4) of the model is obtained. The parameters $a_i$, $b_i$, and $a_{ij}$ are defined with the least square method. According to the Least Square Method principle, and based on the necessary condition for the extreme value of multi-variable functions, the normal equations are obtained. The matrix $B_i$ can be proven to be a symmetric definite matrix; therefore, there is a unique solution. Inserting the data specified in Table 1 into equation (13) and programming in MATLAB (R2010a), model (5) is obtained. Model (5) simply reflects the average effects of non-linear actions between sub-systems, but does not indicate the random fluctuations of the system in the evolvement process. However, fluctuations are very important for the system's self-organized evolvement.
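The coefficient estimation step reduces to ordinary least squares on the discretized rate equations. A minimal sketch under that reading — the exact regressor set (a linear term, a quadratic term, and the remaining variables) is an assumed form here, not the paper's exact model, and the toy series stand in for Table 1:

```python
import numpy as np

# x: yearly observations of the 13 status variables (1991-2009),
# shape (years, variables); synthetic data stands in for Table 1.
rng = np.random.default_rng(1)
x = np.abs(rng.normal(1.0, 0.2, size=(19, 13))).cumsum(axis=0)

# Average yearly rate of change of each variable, approximated by
# finite differences: dx_i/dt ~ x_i(t+1) - x_i(t).
dx = np.diff(x, axis=0)

i = 0  # estimate the equation for the first status variable
# Assumed regressors: x_i, x_i^2, and the other variables x_j (j != i)
others = np.delete(x[:-1], i, axis=1)
design = np.column_stack([x[:-1, i], x[:-1, i] ** 2, others])

# Ordinary least squares: coeffs = argmin ||design @ c - dx_i||^2
coeffs, *_ = np.linalg.lstsq(design, dx[:, i], rcond=None)
a_i, b_i, a_ij = coeffs[0], coeffs[1], coeffs[2:]
print(a_i, b_i, a_ij.shape)
```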
With the fourth-order Runge-Kutta method, the numerical solution of the non-linear differential equation (14) is obtained. The mean value of the random fluctuation parameter $\omega$ obtained from equation (18) is 0.7060, and the fluctuation period is $2\pi / 0.7060 \approx 9$, which means that railways in China take on average 9 years to enter a new development cycle. (7) Order-parameter of system evolvement. Solving equation (14) yields the relaxation coefficients of the status variables. By sequencing the relaxation coefficients, the following is concluded: the relaxation coefficient of railway fixed asset investment is the minimum value, followed by that of the quantity of railway locomotives; the relaxation coefficient of railway freight volume is the maximum. Result analysis for the model solution According to Haken's synergetic slaving (servo) principle, fast relaxation variables obey the slow relaxation variable, which in turn determines the system evolvement. Railway fixed asset investments are the major order-parameter for railway freight system evolvement, dominating the evolvement and development of the railway freight system. Railway fixed asset investments are primarily used for the construction and upgrading of railway lines and purchases of locomotives and vehicles. The investments are directly transformed into railway fixed assets for the improvement of railway system transport capacity. Therefore, fixed railway asset investments are the key feature that determines the improvement of the railway network structure, expansion of the production capacity, enhancement of transport efficiency, and solution of transport bottlenecks. Meanwhile, railway infrastructure construction will support the development of other related industries, expand domestic demand, and push forward the national economy. As a result, it will promote the circulation of materials. Railway fixed asset investments also help in restructuring regional industries, speeding up the growth of the local economy and the urbanization process. Railway fixed asset investments, as an order-parameter, influence the collective synergetic actions of the status variables of the railway freight system. Investments dominate the whole system evolvement process and determine the result of system evolvement. Therefore, the government is expected to develop proper plans for railway development in line with the economic conditions, geographic differences, industrial arrangement, and resource distribution in China, and in view of railway developments at home and abroad. The second key factor influencing the railway freight system is the quantity of railway locomotives. The locomotives power the railway transport and stand for the technological performance of the railway transport. Heavy-haul and fast transport are the two major trends of railway freight service, and the locomotive standards determine whether China can achieve the two goals of heavy-haul transport and fast delivery. Therefore, locomotive quality is the basic condition for China's railway freight industry to adapt to the development situation. Locomotive quality is closely related to railway fixed asset investments. China is expected to import more advanced locomotive technologies from abroad and at the same time improve its innovation capacity for the integration of advanced locomotive technologies and breakthroughs.
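The numerical integration mentioned above uses the classical fourth-order Runge-Kutta scheme; a generic sketch follows, with a placeholder rate function standing in for the fitted system (14), whose coefficients are not reproduced here:

```python
import numpy as np

def rk4_step(f, t, x, h):
    """One classical fourth-order Runge-Kutta step for dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Placeholder rate function standing in for the fitted system (14)
def f(t, x):
    return -0.5 * x + 0.1 * x ** 2

x, t, h = np.array([1.0]), 0.0, 0.1
for _ in range(100):
    x = rk4_step(f, t, x, h)
    t += h
print(t, x)
```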
According to the result of the model solution, the influences of society's goods turnover, second industry, railway freight turnover, society's freight turnover, total investments in the transport industry, quantity of railway locomotives, GDP, first industry, service mileage, railway freight transport revenue, and railway freight volume on the evolvement of the railway freight system decrease in that order. Society's goods turnover on the whole reflects the social transport demands and the outputs of the various transport means; therefore, it also has a major influence on the system. The second industry provides the basic sources of goods to be delivered by railway freight service and is the primary service target of railway freight. Railway freight turnover is the result of railway freight volume multiplied by the average freight haul distance; it indicates the changes in service mileage and railway freight volume. Society's freight turnover reflects the total social demands, while investments in the transport industry show the railway investments and the transport market climate. The quantity of vehicles influences the railway transport capacity; meanwhile, it is also affected by the railway fixed asset investments. GDP reflects the macro-economic environment. Grains, wood, and cotton from the first industry are key service targets for the railway freight business. Service mileage is directly influenced by the railway fixed asset investments, and it is also a symbol of the railway transport capacity. Railway freight transport revenues show the changes in railway freight volume and freight service price. From this it can be seen that, under the influence of railway fixed asset investments, the status variables of the railway freight system exert influences, to different degrees, on the system evolvement and its direction through synergetic actions. ANALYSIS OF RAILWAY FREIGHT SYSTEM EVOLVEMENT PROCESS 4.1 Analysis of railway fixed asset investment potential function Railway fixed asset investments, as the order-parameter, influence the evolvement of the railway freight system and its direction. A potential function refers to the function of behavioural variables. Generally, there are two ways to construct the variable potential function: one is to construct the description model for variables based on qualitative analysis, and the other is to derive the potential function for variables based on the features of the system variables. This paper applies the second method to construct the potential function of railway fixed asset investments to analyze the evolvement of the railway freight system. First, the features of railway fixed asset investments are analyzed. By fitting the historical data, the evolvement equation is constructed as the cubic polynomial $v(x) = ax^3 + bx^2 + cx + d$; integrating and transforming to standard variables yields the potential function in the standard cusp form $V(z) = z^4 + uz^2 + vz$ (23), and setting $V'(z) = 4z^3 + 2uz + v = 0$ (24), the equilibrium surface $M$ is obtained. The 3D evolvement of the railway freight system is indicated in Figure 1.
Figure 1 shows the 3D evolvement of railway fixed asset investments. As railway investments are the order-parameter of the railway freight system, the curve also indicates the evolvement of the railway freight system. In Figure 1 the curve exhibits smooth folds, which are divided into upper, middle, and lower lobes that get narrower towards the back and finally disappear at point $O'$. $O'$ is the point of origin in the 3D coordinates; $u$ and $v$ represent the control coefficients beyond the system that have influence on the order-parameter of the railway freight system. Intentional control of $u$ and $v$ guides the system in a more efficient and orderly direction. When $u > 0$, the system evolvement has a tendency of continuous smoothness, like curve H'G' in the figure; the bigger $u$, the smoother the curve. When $u < 0$, the system evolvement exhibits an obvious catastrophe, and the railway investments are seen on the upper lobe, middle lobe, and lower lobe along with the changes in $v$. However, as the middle lobe is an unstable status, the final railway investments are observed in a balanced status (upper or lower lobe) after passing through the folded margin, and the tendency is indicated by Q'J'F'H'P'. Mathematically, when $u > 0$, changes in $v$ will only result in smooth changes in $z$, which makes $v$ a regular parameter; when $u < 0$, changes in $v$ will result in discontinuous changes in $z$ inside some part of $M$, which makes $u$ a splitting parameter. In China, railway investments are primarily determined by two factors: government supportive policies for the railway transport industry and the development of the national economy. Therefore, in the potential function of railway fixed asset investments, control factor $u$ represents the development of the national economy: when $u > 0$, the national economy is growing healthily and steadily; when $u < 0$, the national economy is in crisis or predicament. Control coefficient $v$ represents the government supportive policies for the railway transport industry: when $v > 0$, the policy is relatively tight; when $v < 0$, the policy is relatively slack. Project the evolvement process in Figure 1 onto the control plane $C(u, v)$, as shown in Figure 2. The closed angle in Figure 2 is the projection of the fold area in Figure 1. As equation (24) is a cubic expression, it can be learnt from algebra that it has one real root or three real roots. The number of real roots is determined by the Cardano discriminant $\Delta = 8u^3 + 27v^2$: when $\Delta < 0$, there are three different real roots; when $\Delta = 0$, if neither $u$ nor $v$ is 0 there are three real roots of which one is a double root, and if both $u$ and $v$ are 0 the three roots are all 0; when $\Delta > 0$, there is one real root and a pair of conjugate complex roots.
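Under the standard cusp form assumed above, counting the equilibria for given controls $u$ and $v$ is a short computation; the sketch below uses that assumed potential, not coefficients fitted to the investment data:

```python
import numpy as np

def equilibria(u: float, v: float) -> np.ndarray:
    """Real equilibria of the assumed cusp potential V(z) = z^4 + u z^2 + v z,
    i.e. real roots of V'(z) = 4 z^3 + 2 u z + v."""
    roots = np.roots([4.0, 0.0, 2.0 * u, v])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

def discriminant(u: float, v: float) -> float:
    # Bifurcation set of the cusp: 8*u**3 + 27*v**2 = 0
    return 8.0 * u ** 3 + 27.0 * v ** 2

for u, v in [(1.0, 0.5), (-1.0, 0.1), (-1.0, 0.0)]:
    print(u, v, discriminant(u, v), equilibria(u, v))
```

Negative discriminant values land inside the closed-angle area OJF (three equilibria), positive values outside it (one equilibrium), matching the root count discussed above.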
The singularity collection $S$ is a sub-collection of $M$ made up of all degenerate critical points of $V$, which means it has to meet the requirements of equation (24) together with the condition $V''(z) = 12z^2 + 2u = 0$ (25). The projection of $S$ on the control plane $C(u, v)$ is termed the bifurcation set. Eliminating $z$ from equations (24) and (25) gives $8u^3 + 27v^2 = 0$, which is the bifurcation set. According to the Cardano discriminant, $\Delta < 0$ corresponds to the closed-angle area OJF in the control plane $C(u, v)$ in Figure 2. The potential function (23) is then as indicated in Figure 3(a). The potential function has two minimum values, which means two balanced positions. This shows that when the national economy is depressed, in order to expand domestic demand, the government may adopt a slack policy to support the railway industry and expand the investments in railway infrastructure; the railway freight system is then in the upper-lobe balancing status of heavy investments. The government may also adopt a tight or relatively steady policy in the railway industry and upgrade the investments in other infrastructure industries; the system is then in the lower-lobe balancing status of low investments. $\Delta = 0$ corresponds to the curves OF and OJ in the control plane $C(u, v)$ in Figure 2, the bifurcation set. It also corresponds to the fold margins of the curves in Figure 1, where catastrophe happens during the system's evolving process. The potential function (23) is then as indicated in Figure 3(b) and (c). The potential function has two minimal values but only one minimum value. Figure 3(b) means that when the national economy is in a sag, the government may expand the investments in railway infrastructure, so that the railway freight system completes a catastrophe from the low-investment balancing status to the high-investment balancing status. Figure 3(c) means that when the national economy is in a sag, the government may reduce the investments in railway infrastructure, so that the railway freight system completes a catastrophe from the high-investment balancing status to the low-investment balancing status. $\Delta > 0$ corresponds to areas other than the closed-angle area OJF in the control plane $C(u, v)$ in Figure 2.
The potential function then has only one minimum point, which means that when the national economy is growing steadily and healthily, the government will keep its investments in railway projects, thus bringing the system into a steady, balanced status, as indicated in Figure 3. Empirical analysis of railway freight system The above analysis shows that changes in the control coefficients $u$, $v$ in the control space $C(u, v)$ may result in the gradual change or the catastrophe of railway investments. Then railway investments, as the order-parameter, will result in the gradual change or the catastrophe of the overall railway freight system. When $u > 0$, the national economy grows steadily and healthily, the social demands for transport are increasing, and the government's policy for the railway industry turns from relatively tight to slack; a gradual increase in railway investments will gradually push the railway system from the low-investment balancing status to the high-investment balancing status. When $u < 0$, the national economy is in crisis or in predicament, and the government may adopt three strategies. First, to expand domestic demand and step up infrastructure construction, the government's policy for the railway industry turns from relatively tight to relatively slack, so as to increase investment in railway projects; the railway freight system completes a catastrophe from the low-investment balancing status to the high-investment balancing status. Second, the government may adopt a relatively tight policy for the railway industry and invest less, or even nothing, in the railway industry; the railway freight system completes a catastrophe from the high-investment balancing status to the low-investment balancing status. Third, the government may keep the original investment policy, and the railway freight system will keep the high-investment balancing status or the low-investment balancing status. In view of the history of China's railway industry, the investments in the railway industry have been rising along with the development of the national economy. However, under government administration, the investments were kept at a relatively low balancing status. In 2008, the national economy of China slowed down due to the world economic crisis. To expand domestic demand and step up infrastructure construction, and as a result of the steady construction of high-speed railway projects under the middle- and long-term plan for the railway industry, the investments in railways increased significantly. The railway freight system is observed to have made a catastrophe from the low-investment balancing status to the high-investment balancing status. The system's transport capacity is improved for higher efficiency and higher regularity. With the economic crisis resolved, the national economy picked up speed, and the government has adopted a tight investment policy, which put the railway freight system back onto the track of gradual continuity. Therefore, the railway freight system evolvement is a unification of gradual changes and catastrophes.
CONCLUSION (1) Railway fixed asset investments, as the order-parameter of railway freight system evolvement, dominate and control the evolvement and development of the railway freight system. The status variables, ordered in terms of their influences on railway freight system evolvement, are: quantity of railway locomotives, society's goods turnover, second industry, railway freight turnover, society's freight turnover, total investments in the transport industry, quantity of vehicles, GDP, first industry, service mileage, railway freight transport revenue, and railway freight volume. (2) In the random fluctuations, the mean value of parameter $\omega$ is 0.7060, which means the fluctuation period of the railway freight system is about nine years. (3) When $u > 0$, the national economy is growing steadily and healthily, social demands for transport are increasing, and the government's policy for the railway industry turns from relatively tight to slack, which will put the railway system onto a continuous and gradually changing course from the low-investment balancing status to the high-investment balancing status. When $u < 0$, the national economy is in crisis or in predicament, and the government may adopt three strategies: first, expand domestic demand and step up infrastructure construction, with the government's policy for the railway industry turning from relatively tight to relatively slack so as to increase investment in railway projects; the railway freight system completes a catastrophe from the low-investment balancing status to the high-investment balancing status. Second, the government may adopt a relatively tight policy for the railway industry; the railway freight system completes a catastrophe from the high-investment balancing status to the low-investment balancing status. Third, the government may keep the original investment policy, and the railway freight system will keep the high-investment balancing status or the low-investment balancing status.
Figure 1 - 3D evolvement of railway fixed asset investments. Figure 2 - Control plane of railway fixed asset investments. Figure 3 - Curve of potential function of railway fixed asset investments. Table 1 - The state variable of the railway freight transportation system in China (data for the thirteen status variables, 1991-2009).
2017-05-03T12:22:42.127Z
2013-06-19T00:00:00.000
{ "year": 2013, "sha1": "3083d86a9257e1a0f77a64bf250af898c78c1331", "oa_license": "CCBY", "oa_url": "https://traffic.fpz.hr/index.php/PROMTT/article/download/307/1054", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "3083d86a9257e1a0f77a64bf250af898c78c1331", "s2fieldsofstudy": [ "Economics", "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
100466522
pes2o/s2orc
v3-fos-license
6-(4-Aminophenyl)-4,5-dihydro-3(2H)-pyridazinone - An important chemical moiety for development of cardioactive agents: A review The 6-(4-aminophenyl)-4,5-dihydro-3(2H)-pyridazinone moiety is a vital structural part of many cardio-active pyridazinone derivatives which are either in clinical use or have been tested in clinical trials. These include imazodan, CI-930, pimobendan, indolidan, levosimendan, SK&F-93741, Y-590, meribendan, NSP-804, NSP-805, bemoradan, senazodan, amipizone, prinoxodan, SKF 95654, siguazodan and KF 15232. This article briefly reviews relevant literature on various reports on the synthesis and use of this moiety for the development of cardio-active agents. INTRODUCTION Cardiovascular disease is a major public health problem worldwide, even in the United States of America, where it accounts for approximately 30% of all deaths [1]. Cardiovascular disease has also been considered the major cause of death in the Kingdom of Saudi Arabia [2]. Due to the increasing prevalence of cardiovascular disease in children, researchers have recommended the establishment of well-equipped hospitals for the care of children with cardiovascular disease in developing countries as well as in the Kingdom of Saudi Arabia [3]. Studies have also revealed that there is a need for more research in the field of cardiovascular disease in developing countries because of the likelihood of prevalence of cardiovascular disease in all age groups in these countries [4,5]. The current review gives an insight into the potential of the 6-(4-aminophenyl)-4,5-dihydro-3(2H)-pyridazinone moiety for the development of cardio-active agents, and briefly discusses relevant literature related to the synthesis and use of this chemical for the preparation of cardio-active agents. Accordingly, literature references wherein the 6-(4-aminophenyl)-4,5-dihydro-3(2H)-pyridazinone moiety was not synthesized and/or not used for the preparation of cardio-active agents were excluded. Thyes et al [29] prepared 6-aryl-4,5-dihydro-3(2H)-pyridazinones which exhibited aggregation-inhibiting activity on human platelets in vitro and on rat platelets ex vivo, as well as a hypotensive action in rats. The strongest pharmacological effects were found with dihydropyridazinones that have a chloroalkanoyl substituent at R, together with a methyl group in the 5-position (3). The hypotensive actions of these compounds were 40 times higher than that of dihydralazine. These authors further demonstrated that the para-substituted compounds had a strong inhibiting effect on collagen-induced and ADP-induced aggregation of human platelets. It is known that platelet aggregation plays an important role in the pathogenesis of cardiovascular disease [30]. The in vitro human platelet aggregation and the ex vivo rat platelet aggregation-inhibiting activities of 6-aryl-4,5-dihydropyridazinones (4) with R1 = R2 = R4 = Me or H, and R3 = amine-containing groups, were correlated with the van der Waals volume (Vw) of R3 by Gupta et al [31]. Their results suggested that the size of the substituent on the aryl group plays an important role in the inhibition of platelet aggregation in this series of compounds. Based on the correlating equations obtained, it was further suggested that the inhibition of platelet aggregation most likely involves hydrophobic interaction. A moderate correlation existed between the hypotensive activity of these drugs in rats and Vw, indicating that hypotensive activity was also partly affected by the size of the substituent on the aryl group. Although it was assumed that hydrophobic interactions also played some role in the hypotensive action, it was argued, based on the results, that platelet aggregation inhibition and hypotensive activity involved two different receptor sites.
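The correlating equations in Gupta et al amount to regressing potency on a single descriptor; a toy illustration of fitting such a one-descriptor QSAR line by least squares follows (the numbers below are invented for illustration, not data from the cited study):

```python
import numpy as np

# Invented descriptor/potency pairs: van der Waals volume Vw of R3
# (arbitrary units) against log(1/C) anti-aggregatory potency.
vw = np.array([0.35, 0.48, 0.52, 0.61, 0.70, 0.82])
log_inv_c = np.array([3.1, 3.6, 3.8, 4.2, 4.5, 4.9])

# One-descriptor QSAR line, log(1/C) = a*Vw + b, fitted by least squares
a, b = np.polyfit(vw, log_inv_c, 1)
predicted = np.polyval([a, b], vw)
r = np.corrcoef(log_inv_c, predicted)[0, 1]  # observed-vs-fitted correlation
print(f"log(1/C) = {a:.2f} * Vw + {b:.2f}, r = {r:.3f}")
```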
Most members of this series produced dose-related increases in myocardial contractility that were associated with relatively minor increases in heart rate and decreases in systemic arterial blood pressure. Among the synthesized compounds (5), the ones with R = H (CI-914) and R = methyl (CI-930) were more potent than amrinone and milrinone, respectively. It was also postulated that the positive inotropic effect of these compounds was due to the inhibition of cardiac phosphodiesterase fraction III, rather than the stimulation of β-adrenergic receptors. Sircar et al investigated the structure-activity relationships of a series of 4,5-dihydro-6-[4-(1H-imidazol-1-yl)phenyl]-3(2H)-pyridazinones (7) with R = H, Me, CH2Ph, CH2CH2OH, CH2CH2OAc; R1 = H, Me, NH2, CONH2; R2 = H, Me, Et; and R3 = H, Me, SH, SMe, SOMe, Et, for their inhibition of different forms of cyclic nucleotide phosphodiesterase (PDE) isolated from guinea pig ventricular muscle [34]. With few exceptions, these 4,5-dihydropyridazinones were potent inhibitors of cardiac type III phosphodiesterase. The most selective PDE III inhibitor was CI-930 (R = R1 = R3 = H, R2 = Me) with an ED50 of 0.6 µM. Slater et al [35] reported the design and synthesis of a series of combined vasodilator-β-adrenoceptor antagonists based on 6-arylpyridazinones, and evaluated them as vasodilator-β-adrenoceptor antagonists and potential antihypertensive agents. Many of the synthesized compounds showed high levels of intrinsic sympathomimetic activity (ISA) and relatively short durations of action. Di-substitution in the 2,3-positions or in the 4-position of the aryloxy ring produced compounds with low ISA levels and, in some cases, improved duration of action. The 5-methylpyridazinone derivatives displayed more antihypertensive activity than their 5-H homologs. The compound SK&F 95018 was selected for further development. Alfred et al [37] have reported 4,5-dihydro-6-(1H-indol-5-yl)-pyridazin-3(2H)-ones and related compounds with positive inotropic activities. Most of these compounds produced increases in myocardial contractility with little effect on heart rate and blood pressure. The cardiotonic effect of compound (10) was at least 2-fold higher than that of pimobendan following oral administration. It has been suggested that, for optimal cardiotonic activity within this class of indole derivatives, a heterocyclic aromatic ring in position 2, a hydrogen or a Me group in position 3, and a dihydropyridazinone ring system in position 5 of the indole are necessary. A series of tricyclic pyridazinones has been synthesized and their PDE III inhibitory, inotropic, and vasodilator potencies compared with those of their normethyl and their bicyclic 4,5-dihydro-6-phenylpyridazinone analogues by Bakewell et al [38]. The study revealed that the structure-activity relationships of the tricyclic pyridazinones differ from those of bicyclic pyridazinones mainly in respect of the effect produced by introducing a methyl group into the pyridazinone ring. Introduction of a 5-methyl group has been widely reported to lead to compounds of significantly greater potency in the 4,5-dihydro-6-phenylpyridazin-3(2H)-ones.
On the other hand, the tricyclic 4a-methylpyridazinones showed similar levels of inotropic, vasodilator, and PDE III inhibitory potencies to their normethyl analogues. In this series of compounds, the tricyclic 4a-methylpyridazinones (11) with R = cyano, CONH2, NH2, NHAc, or OMe, and n = 1, 2, ..., showed good inotropic, vasodilator, and PDE III inhibitory potencies. The synthesis and platelet aggregation-inhibitory activities of 6-(4-substituted acylamidophenyl)-4,5-dihydro-3(2H)-pyridazinones and 6-(4-substituted acylaminophenyl)-4,5-dihydro-3(2H)-pyridazinones have been described by Liu et al [43,44]. Preliminary pharmacological tests revealed that all the synthesized compounds appreciably inhibited ADP-induced platelet aggregation in rabbits. Liu et al [45] have further reported the synthesis of 6-(4-substituted acylaminophenyl)-4,5-dihydro-3(2H)-pyridazinones and their inhibitory actions on platelet aggregation. These compounds were synthesized based on the structure-activity relationships of the anti-platelet-aggregation activity of dihydropyridazinones. CONCLUSION Cardiovascular disease has become the leading cause of death worldwide and remains the foremost cause of preventable death globally. The need for more research in the field of cardiovascular disease in developing countries is underscored by the prevalence of cardiovascular disease in all age groups of patients in these countries. 6-(4-Aminophenyl)-4,5-dihydro-3(2H)-pyridazinone is an important chemical moiety that is useful for the development of cardio-active agents. The potential of its derivatives as cardio-active agents is evident from the literature as reviewed in this article. It is our belief that the exploitation of 6-(4-aminophenyl)-4,5-dihydro-3(2H)-pyridazinone derivatives can produce more potent cardio-active agents for clinical use in the treatment of cardiovascular disease.
2019-02-06T20:45:55.272Z
2016-08-12T00:00:00.000
{ "year": 2016, "sha1": "b10df86c4b69194fd60fc8a6258034141a8ac33f", "oa_license": "CCBY", "oa_url": "https://www.ajol.info/index.php/tjpr/article/download/141961/131703", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b045bd0c01bd1e5674bdcfeed22d2ba1a336dfc3", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
204942604
pes2o/s2orc
v3-fos-license
Effects of aerobic exercise in the treatment of older adults with chronic musculoskeletal pain: a protocol of a systematic review Background Chronic musculoskeletal pain affects the quality of life of older adults by interfering in their ability to perform activities of daily living. Aerobic exercise programs have been used in the treatment of various health conditions, including musculoskeletal disorders. However, there is still little evidence on the effects of aerobic exercise for the treatment of older adults with chronic musculoskeletal pain. Thus, the objective of this study is to assess the effects of aerobic exercise in improving pain and function of older adults with chronic pain as a consequence of different chronic musculoskeletal conditions. Methods The databases to be used in the search are PubMed, EMBASE, CINAHL, PEDro, and Cochrane Central Register of Controlled Trials (CENTRAL). Randomized controlled trials that used aerobic exercise in the treatment of older adults with chronic musculoskeletal pain will be included. Primary outcomes will be pain and function. We will use the PEDro scale to evaluate the methodological quality and statistical description of each included study, and the strength of the recommendations will be summarized using GRADE. Discussion The results of this systematic review will provide a synthesis of the current evidence on the effects of aerobic exercise in the treatment of older adults with chronic musculoskeletal pain. In addition, this information can help health professionals in decision-making about the use of aerobic exercise in the treatment of older adults with chronic musculoskeletal pain. Ethics and dissemination This systematic review was registered prospectively, and the results will be part of a doctoral thesis to be published in a peer-reviewed international journal and possibly presented at international conferences. Systematic review registration PROSPERO, CRD42019118903. Background Aging is a natural process that includes a dynamic and irreversible decline in physiological function, usually associated with an increase in the manifestation of chronic degenerative diseases [1]. Pain can be characterized as an unpleasant sensory and emotional experience, associated or not with actual or potential tissue damage [2]. With aging, the global prevalence of chronic pain increases, and in 50 to 75% of cases, it may be underdiagnosed or undertreated [3]. Musculoskeletal pain is one of the main types of chronic pain in older adults [4], affecting approximately 50% of community-dwelling older adults [5]. Chronic musculoskeletal pain is one of the main causes of disability in older adults and is associated with difficulties with mobility and daily activities. It affects more women than men and generates high socioeconomic costs [4,6,7]. A systematic review indicated that older adults with chronic musculoskeletal pain are less active and may present with disability combined with poor mobility, frailty, depression, cognitive impairment, falls, and poor quality of sleep [4]. Changes in the pain signal associated with aging include a decrease in the integrity and density of cellular elements in the peripheral nervous system, leading to loss of nociceptive function [8,9]. In the central nervous system, there is a reduction in the neurotransmission pathways, affecting the adequate transmission of the pain signal and its neuromodulation [9,10].
In addition, a systematic review with meta-analysis suggests that older adults tend to have greater intolerance to pain and an increased perception of pain [11]. The treatment of chronic musculoskeletal pain in older adults involves pharmacological and non-pharmacological interventions [12]. Due to the short-and long-term side effects of medication, the non-pharmacological approach has been gaining prominence [9]. Among non-pharmacological interventions, physical exercise is an option, with the objective of preserving the functional independence and quality of life of older adults [9]. Exercise interventions for older adults with chronic musculoskeletal pain should meet the needs of each patient, and should consider their preferences for type and mode of exercise [13]. Regular physical exercise has a protective effect on cardiovascular changes, depressive symptoms, and physical disuse in older adults [14,15]. In addition, it may limit the development and progression of disabling conditions [15], such as chronic musculoskeletal pain. In the USA, the Centers for Disease Control and Prevention recommend that older adults perform strengthening exercises and aerobic activities to reduce the risk of mortality [16]. Aerobic exercise for 30 to 40 min stimulates the production of endorphins, which bind to opioid receptors in the pain control system of the brain and spinal cord to decrease the perception of pain [17]. To date, only one systematic review has been published [18] with the objective of verifying the effects of walking in patients with chronic musculoskeletal pain. Improvements in pain and short-term function were observed; however, this systematic review did not include other modalities of aerobic exercise and the results are not specific for older adults [18]. Thus, no systematic review has verified the effects of different types of aerobic exercise in the treatment of older adults with chronic musculoskeletal pain. Therefore, the objective of this study is to assess the effect of aerobic exercise on pain and function in older adults with chronic pain caused by different musculoskeletal conditions. Study design Systematic review. Inclusion criteria Study design Only published randomized controlled trials assessing the use of aerobic exercise in older adults with chronic pain caused by various musculoskeletal conditions compared to any other type of medical or non-medical intervention or no intervention will be included. Participants We will include studies that assess older adults aged 65 years or more, that is, all the participants of the study should be aged over 65, from both sexes, and with chronic musculoskeletal pain. Chronic musculoskeletal pain will be defined as any muscle, joint, or tendon pain present for a minimum of 3 months [19]. Studies that assess older adults with chronic pain of non-musculoskeletal origin, such as cancer, will be excluded. Types of intervention and comparison The investigated intervention will be aerobic exercise used in the treatment of chronic musculoskeletal pain, such as walking, swimming, and cycling, among others. There will be no restrictions in the included studies regarding which professional prescribed the exercise and whether the exercise was supervised or not. 
Studies included can present the intervention of interest compared to a placebo group, control group with no intervention or minimal intervention (such as waiting list or follow-up booklets), other interventions (medical/pharmacological treatment, physical therapy, yoga, or other exercise modalities such as stabilizing and strengthening exercises) and other types of aerobic exercise. Outcomes Primary outcomes will be pain intensity (e.g., measured by the Pain Numerical Rating Scale) and function (e.g., measured by the Patient-Specific Functional Scale), assessed by means of questionnaires or specific tests. The secondary outcomes included will be quality of life (e.g., measured by the SF-36 Quality of Life Questionnaire), depression (e.g., measured by the Geriatric Depression Scale), sleep quality (e.g., measured by the Pittsburgh Sleep Quality Index), kinesiophobia (e.g., measured by the Tampa Scale for Kinesiophobia), and adverse effects. The outcomes will be classified into three periods: periods close to 4 weeks will be classified as short term, periods close to 6 months will be medium term, and periods close to 1 year will be long term [20]. Search procedures and selection of studies The searches will be performed in the following databases: PubMed, EMBASE, CINAHL, PEDro, and Cochrane Central Register of Controlled Trials (CENTRAL). The search strategy is shown in Additional file 1. Manual searches will also be carried out through the reference list of previous systematic reviews on the topic and of the clinical trials included in this review. Searches will not be restricted by language or date of publication [21,22]. We plan to finish the search on August 30, 2019. Data collection and analysis Selection of studies The studies will be assessed according to the eligibility criteria, and the selection will be divided into two phases. Initially, two independent reviewers will select the titles of the articles, and in the second phase, the reviewers will read the abstracts and full texts. Any disagreement will be resolved by a third reviewer. In case of doubt regarding the eligibility of an article, the authors may be contacted for clarification. Data extraction and management The data will be extracted onto an Excel spreadsheet containing information such as authors' name, place and year of publication, type of chronic musculoskeletal disease, and assessed outcomes. In addition, data on sample characteristics and size, characteristics of interventions performed, instruments used to assess outcomes, results of included studies, and follow-up of the study will also be extracted. The spreadsheet will be pre-tested with two randomized controlled trials similar to those eligible in this review. Two independent reviewers will perform the data extraction, and any disagreements will be resolved by a third reviewer. When data is not available in the manuscripts or if data is unclear, the authors of the studies may be contacted for clarification. All data from questionnaires presented on different scales will be converted to a scale ranging from 0 to 100. Assessment of risk of bias The assessment for risk of bias and statistical description of the studies will be performed by the PEDro scale, which has good validity and reliability levels, and is strongly correlated with the risk of bias scale from the Cochrane Collaboration [23,24]. 
This scale has 11 items: 8 items (items 2-9) refer to methodological quality (random allocation, concealed allocation, baseline similarity, blinding of therapist, blinding of patient, blinding of assessor, appropriate follow-up, and intention-to-treat analysis) and 2 items (10 and 11) refer to the statistical description (between-group statistical comparison, point measures, and measures of variability) [24]. The first item (eligibility criteria) is not considered in the total score because it is related to external validity [24]. The total PEDro score ranges from 0 to 10 points; the higher the score, the better the methodological quality and statistical description of the article [24]. For studies that are not available in the PEDro database, the PEDro scale will be applied by two independent reviewers and a third reviewer will mediate any disagreements. Studies will be considered at low risk of bias if they have a score equal to or higher than 6 points and at high risk of bias with a score lower than 6 points [25]. Measures of treatment effect The effects of treatment for continuous outcomes will be reported by determining the effect size for pain intensity and function. If data are sufficient, meta-analyses will be performed using the random-effects model according to the short-, medium-, and long-term follow-up periods [20] to analyze pain intensity and function through the mean difference and 95% confidence intervals. Sensitivity analysis will be conducted to identify the results of the effectiveness between groups when the studies present a high risk of bias. If possible, subgroup analyses will be performed for the musculoskeletal diseases and for age. Meta-analyses will be performed in Review Manager 5.2. Analysis of heterogeneity To identify heterogeneity in the data from the included studies, the chi-square test will be used. The magnitude of the heterogeneity will be ascertained by calculating I², a measure that ranges from 0 to 100% [24]. An I² above 50% indicates significant heterogeneity and will result in a reduction of one level in the quality of the evidence due to inconsistency [20, 23-25]. Synthesis of data The quality of the evidence will be classified using the GRADE (Grading of Recommendations Assessment, Development, and Evaluation) approach [25]. According to GRADE, evidence quality assessment is performed for each outcome, and the combined available evidence is considered. The quality of evidence is classified into four levels (high, moderate, low, and very low) based on the comprehensive assessment of inconsistency, indirect evidence (not generalizable), inaccuracy, and publication bias. These levels represent confidence in the estimation of the treatment effects presented (Table 1) [26]. The level of evidence and strength of recommendation will be determined by discussion involving all authors. As we expect some degree of heterogeneity, a narrative synthesis of the results will be used as needed. Discussion This systematic review aims to summarize the available evidence from the studies that verified the effects of aerobic exercise in improving chronic musculoskeletal pain in older adults. So far, we are unaware of any similar published systematic review. To obtain a high-quality study, we will follow all the recommendations of the Cochrane Handbook for Systematic Reviews.
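The I² statistic referenced in the heterogeneity analysis above has a simple closed form, I² = max(0, (Q − df)/Q) × 100, where Q is Cochran's Q and df is the number of studies minus one. The sketch below is an illustrative computation only, not part of the protocol; the study effects and variances are hypothetical numbers.

import numpy as np

def i_squared(effects, variances):
    """Cochran's Q and Higgins' I^2 from per-study effects and variances."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)  # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)
    q = np.sum(weights * (effects - pooled) ** 2)       # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# hypothetical mean differences in pain (0-100 scale) and their variances
q, i2 = i_squared([-12.0, -8.5, -15.2, -3.1], [4.0, 6.3, 5.1, 7.8])
print(f"Q = {q:.2f}, I2 = {i2:.1f}%")  # I2 above 50% would downgrade the evidence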
Primary outcomes were chosen taking into account their importance in the assessment of chronic musculoskeletal pain in older adults, so that the results of this review can easily be compared or combined with those of other systematic reviews on the treatment of chronic musculoskeletal pain. The results of this systematic review will inform physical therapists and other health professionals, as well as patients, about the value of an intervention based on aerobic exercise, given that it is an affordable, low-cost intervention commonly used by the general population. In addition, this study can identify gaps in the literature and guide future studies. Ethics and dissemination This study was registered prospectively, and the results will form part of a doctoral thesis and will be published in a peer-reviewed international journal and presented at international conferences. Supplementary information Supplementary information accompanies this paper at https://doi.org/10.1186/s13643-019-1165-7. (Table 1, high-quality level: "It is very unlikely that further research will alter our confidence in the estimated treatment effect.")
2019-10-30T16:12:44.754Z
2019-10-30T00:00:00.000
{ "year": 2019, "sha1": "842596c10084bac421668114e2e47964d115099c", "oa_license": "CCBY", "oa_url": "https://systematicreviewsjournal.biomedcentral.com/track/pdf/10.1186/s13643-019-1165-7", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "842596c10084bac421668114e2e47964d115099c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256370088
pes2o/s2orc
v3-fos-license
Improved up-and-down procedure for acute toxicity measurement with reliable LD50 verified by typical toxic alkaloids and modified Karber method The up-and-down procedure (UDP) has been recommended to replace traditional acute toxicity methods. However, its use has been limited by the long experimental period (20–42 days). To improve UDP, an improved UDP method (iUDP) was developed by shortening the observation time between sequential dosages. The aim of this study was to test the reliability of iUDP in order to provide a dependable method for the acute toxicity measurement of valuable or scarce compounds. The oral median lethal doses (LD50) of nicotine, sinomenine hydrochloride and berberine hydrochloride were measured by both iUDP and the modified Karber method (mKM). The LD50 values of the three alkaloids measured by iUDP with 23 mice were 32.71 ± 7.46, 453.54 ± 104.59, and 2954.93 ± 794.88 mg/kg, respectively. The LD50 values of the three alkaloids measured by mKM with 240 mice were 22.99 ± 3.01, 456.56 ± 53.38, and 2825.53 ± 1212.92 mg/kg, respectively. The average times consumed by the two methods were 22 days and 14 days, respectively. The total grams of the alkaloids used by the two methods were 0.0082 and 0.0673 (nicotine), 0.114 and 1.24 (sinomenine hydrochloride), and 1.9 and 12.7 (berberine hydrochloride). iUDP could replace mKM to detect the acute toxicity of substances with comparable and reliable results, and it is suitable for valuable or scarce substances. Background The median lethal dose (LD50) was first proposed by J. W. Trevan in 1927 [1]. It is used to study acute toxicity and classify toxic substances [2]. The 95% confidence interval (95% CI, μ ± σ) is used to describe the LD50 mean [3,4]. Traditional acute toxicity methods to determine the LD50 and 95% CI include the Bliss method [5,6], the modified Karber method (mKM) [7,8], the arithmetical method of Reed and Muench [9], and the Miller and Tainter method [10]. For one substance, 50~80 mice are needed to obtain the LD50 in 14 days by mKM or other traditional methods (a 14-day observation is carried out on surviving animals) [11,12]. In addition, the calculation of mKM is simple and yields an accurate LD50 value and standard error. However, mKM raises animal welfare concerns and increases economic pressure [2, 13-15]. After the 3Rs principles (Reduction, Replacement, Refinement) were proposed [16,17], the up-and-down procedure (UDP) was advocated [14,18]. In UDP, the dosage for the (N+1)th animal is determined by the poisoning symptoms of the Nth animal after administration. The Nth animal is observed for 48 h; if it dies, the dosage for the (N+1)th animal is reduced; otherwise, the dosage is increased. It is particularly time-consuming to test the acute toxicity of one compound by UDP using 4-15 animals (compounds of different toxicity show different death and survival reversals, which may take 20-42 days, Table 1). A total of 10,259 journal articles about acute toxicity tests from January 2008 to August 2021 were analyzed using SciFinder. We found that UDP was employed in only 246 articles (Fig. 1). The use of other alternatives cannot be ruled out, but the low utilization rate of UDP is nonetheless noticeable. Low precision and a long experimental period are the two major factors that limit the popularity of UDP in acute toxicity studies [19-21]. Recently, several studies have gradually increased animal numbers to improve the usability of UDP [22-25]. In addition, Hiller and Yu used UDP to detect intravenous drug toxicity, increasing the number of mice at each dosage to improve the precision of the results [26,27]. Sarah C.
Finch used UDP to test the acute toxicity of tetrodotoxin and tetrodotoxin-saxitoxin mixtures by different routes (i.p. and p.o.) [28]. However, more animals mean that more substance is consumed, which is unfavorable for valuable or scarce compounds. In this research, reducing the observation time between sequential dosages, rather than increasing the animal number, is applied to improve UDP. Nicotine, sinomenine hydrochloride and berberine hydrochloride, three known toxic compounds, are classic representatives of highly toxic, moderately toxic, and mildly toxic alkaloids, and their oral acute toxicity in mice has been poorly reported [29,30]. This study aimed to evaluate the feasibility and reliability of iUDP by comparing the LD50 of the three alkaloids tested by both iUDP and mKM. Experimental animals A total of 263 ICR female mice (7~8 weeks old, 26~30 g) were used. They were purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd. The mice were housed in individually ventilated cages and had free access to food and water. A 12 h light/dark cycle was used in the room. The room temperature and humidity were 20~22 °C and 50~70%, respectively. Before the start of the study, the animal experiments were approved by the Division of Animal Control and Inspection, Department of Food and Animal Inspection and Control, Instituto para os Assuntos Cívicos e Municipais (IACM), Macao (AL020/DICV/SIS/2018). In the experiment, each mouse was weighed and fasted for 4 h, with free access to water, before administration. For oral administration of nicotine and sinomenine hydrochloride, 0.2 ml was given for every 10 g of mouse body weight, and 0.4 ml of berberine hydrochloride was given for every 10 g of mouse body weight. After administration, the mice were fasted for 1 h with free access to water. A change between survival and death in two consecutive animals is called a reversal. For the main test, the testing stops when one of the following stopping criteria occurs: (a) 3 consecutive animals survive at the highest dosage; (b) 5 reversals occur in any 6 consecutive animals tested; (c) at least 4 animals have followed the first reversal and the specified likelihood ratios exceed the critical value. When the experiment was stopped, all surviving mice were humanely killed and necropsied after a 14-day observation, and the pathological changes of the organs were observed and recorded. Materials Nicotine (purity > 99%, CAS number: 54-11-5) and berberine hydrochloride (purity > 99%, CAS number: 2086-83-1) were obtained from Sigma Chemical Co. The acute toxicity assay of sinomenine hydrochloride in mice by iUDP According to previous literature results, sinomenine hydrochloride is moderately toxic, with a significant dosage-response relationship [30,33]. Therefore, the estimated initial LD50 was 175 mg/kg; sigma was 0.2, the slope was 5, and T was 1. The acute toxicity assay of nicotine in mice by mKM Twenty-four ICR female mice were randomly divided into 4 groups. The dosage ratio was 0.7, and the oral dosages were 14, 20, 28.5, and 40.8 mg/kg. The lowest dosage with 100% mortality (Dm = 40.8 mg/kg) and the highest dosage with 0% mortality (14 mg/kg) were obtained to provide references for subsequent experiments. Fifty ICR female mice were randomly divided into 5 groups. The lowest and highest dosages were selected (16 mg/kg and 39.1 mg/kg, respectively), and 0.8 was chosen as the dosage ratio. After dosing, the symptoms of poisoning and the numbers of surviving and dead mice were recorded. All mice were subjected to gross necropsy.
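The up-and-down dosing rule summarized in the Background (reduce the dose after a death, raise it after a survival) is easy to state procedurally. The following sketch is purely illustrative and is not the authors' protocol: the starting dose, progression factor, and animal tolerances are all hypothetical.

import random

def up_and_down(start_dose, factor=1.5, n_animals=8, seed=1):
    """Simulate a geometric up-and-down dose sequence (illustration only)."""
    random.seed(seed)
    dose, outcomes = start_dose, []
    for _ in range(n_animals):
        tolerance = random.lognormvariate(3.5, 0.3)  # hypothetical animal tolerance
        died = tolerance < dose
        outcomes.append((round(dose, 1), "X" if died else "O"))
        dose = dose / factor if died else dose * factor  # the up-and-down rule
    return outcomes

for dose, mark in up_and_down(30.0):
    print(f"dose {dose:7.1f} mg/kg -> {mark}")  # X = death, O = survival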
The acute toxicity assay of sinomenine hydrochloride in mice by mKM Twenty-four ICR female mice were randomly divided into 4 groups. The dosage ratio was 0.7, and the oral dosages were 350, 500, 665, and 715 mg/kg. The lowest dosage with 100% mortality (Dm = 665 mg/kg) and the highest dosage with 16% mortality (350 mg/kg) were obtained. To obtain the highest dosage with 0% mortality (Dn), 300 mg/kg was added. Fifty ICR female mice were randomly divided into 5 groups. The lowest and highest dosages were selected (300 mg/kg and 665 mg/kg, respectively), and 0.82 was chosen as the dosage ratio. After dosing, the symptoms of poisoning and the numbers of surviving and dead mice were recorded. All mice were subjected to gross necropsy. The acute toxicity assay of berberine hydrochloride in mice by mKM Twenty-four ICR female mice were randomly divided into 4 groups. The dosage ratio was 0.5, and the oral dosages were 1000, 2000, 4000, and 8000 mg/kg. The lowest dosage with 90% mortality (8000 mg/kg) and the highest dosage with 16.7% mortality (1000 mg/kg) were obtained. Then 11,428 mg/kg (100% mortality) and 700 mg/kg (0% mortality) were tested. Fifty ICR female mice were randomly divided into 5 groups. The lowest and highest dosages were selected (703 mg/kg and 11,250 mg/kg, respectively), and 0.5 was chosen as the dosage ratio. After dosing, the symptoms of poisoning and the numbers of surviving and dead mice were recorded. All mice were subjected to gross necropsy. Statistical analyses In iUDP, the dosage and the numbers of all surviving and dead mice were recorded. The computational formulas are as follows: wherein Xi was the dosage level, N was the total number of animals, the A and C values were obtained from Dixon's tables [30] according to the numbers of O and X in the N trials, d was lgDn minus lgD(n+1), SE was the standard error, and SD was the standard deviation of all dosages in the N trials. In mKM, the mortality rate of each group was calculated, and the values were then substituted into the formulas to obtain the LD50 [34]. The computational formulas are as follows: wherein m was lgLD50, D was the dosage of each group, Dmax was the maximum dosage level, DN was the dosage of the Nth group, D(N+1) was the dosage of the (N+1)th group, p was the mortality of each group of animals, d was the standard error (σ), I was lgDN minus lgD(N+1), and n was the number of animals in each group. Data on organ indexes were plotted in GraphPad Prism (7.0) using one-way ANOVA and Dunnett's multiple comparisons test. The data are presented as mean ± SD; *P < 0.05 vs normal, **P < 0.01 vs normal. Results The LD50 and toxicity of nicotine in mice detected by iUDP The result was calculated according to the results in Table 2 and formulas (1) and (2). Therefore, the LD50 for nicotine was 32.71 mg/kg and the 95% CI was [25.25, 40.17]. Compared with normal mice, the lungs of mice administered different dosages of nicotine were enlarged (Table 3), and there was a good dosage-effect relationship of nicotine on lung injury. As seen in Table 3, 32 mg/kg of nicotine increased lung weight in mice (P = 0.007), and 50 mg/kg of nicotine significantly increased heart and lung weight (P = 0.009 and P = 0.010). The LD50 and toxicity of sinomenine hydrochloride in mice detected by iUDP The result was calculated according to the results in Table 4 and formulas (1) and (2). Compared with normal mice, sinomenine hydrochloride had no effect on the organ indexes (Table 5). No visible alterations were found in the organs and tissues of mice administered low dosages of sinomenine hydrochloride.
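As an aside, the modified Karber computation outlined in the statistical analyses above has a well-known textbook form, lg LD50 = lg Dmax − I(Σp − 0.5), with a normal-approximation confidence interval. The sketch below implements that common form; it is an assumption about, not a transcription of, the paper's formulas (3)-(6), and the death counts in the example are hypothetical (the dose series mirrors the nicotine mKM groups).

import math

def karber_ld50(doses, deaths, group_size):
    """Modified Karber LD50; doses form a geometric series (mg/kg)."""
    p = [d / group_size for d in deaths]         # per-group mortality
    logs = [math.log10(x) for x in doses]
    i = logs[1] - logs[0]                        # constant lg-dose interval
    m = logs[-1] - i * (sum(p) - 0.5)            # lg LD50
    s = i * math.sqrt(sum(pk * (1 - pk) for pk in p) / (group_size - 1))
    half = 1.96 * s                              # approximate 95% CI half-width
    return 10 ** m, (10 ** (m - half), 10 ** (m + half))

ld50, ci = karber_ld50([16, 20, 25, 31.25, 39.1], [0, 2, 5, 8, 10], 10)
print(f"LD50 ~ {ld50:.1f} mg/kg, 95% CI ({ci[0]:.1f}, {ci[1]:.1f})")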
700 mg/kg of sinomenine hydrochloride significantly increased heart, spleen and kidney weight in mice in comparison with normal mice (P = 0.010, P = 0.001, and P = 0.007). The LD50 and toxicity of berberine hydrochloride in mice detected by iUDP The result was calculated according to the results in Table 6 and formulas (1) and (2). Compared with normal mice, 5000 mg/kg of berberine hydrochloride increased spleen weight in mice (P = 0.049, Table 7). No visible alterations were found in the organs and tissues of mice administered berberine hydrochloride. The LD50 and toxicity of nicotine in mice detected by mKM The result was calculated according to Table 8 and formulas (3)-(6). Compared with normal mice, 25 and 31.25 mg/kg of nicotine increased lung weight in mice (P = 0.024 and P = 0.009, respectively), and 39.10 mg/kg of nicotine significantly increased lung weight (P = 0.005, Table 9). The LD50 and toxicity of sinomenine hydrochloride in mice detected by mKM The result was calculated according to Table 10 and formulas (3)-(6). Compared with normal mice, the hearts and kidneys of mice administered 665 mg/kg of sinomenine hydrochloride were enlarged (P = 0.035 and P = 0.003, respectively, Table 11). The LD50 and toxicity of berberine hydrochloride in mice detected by mKM The result was calculated according to Table 12 and formulas (3)-(6). Compared with normal mice, the livers, spleens and lungs of mice administered 11,250 mg/kg of berberine hydrochloride were enlarged (P = 0.002, P = 0.009, and P = 0.01, respectively, Table 13). Discussion We have improved UDP for the acute toxicity testing of substances. The improved UDP (iUDP) has several advantages: it shortens the experimental period, which improves the usability of UDP, and it is very friendly to valuable or scarce substances. New natural products or monomers from Traditional Chinese Medicine or herbal medicine often have a low yield or high cost; to confirm the safety of such compounds, iUDP is a viable option. However, it cannot be ignored that the oral LD50 is affected by many factors, such as gender, age and fasting time [2]. Gender differences play an important role in the dose-effect response [35,36]. Females are more sensitive to compounds than males [37]. It is recommended to use females for general acute toxicity studies [33]. Age, which is often poorly reported, affects the physiological state and sensitivity to substances [38]. Mice of four to eight weeks (18~30 g) are often used in toxicity tests [39-42]. It has been indicated that ICR, KM, and BALB/c mice (26~30 g) at 8~10 weeks of age are equivalent to human adulthood [43]. To increase scientific validity and reduce experimental variability, adult rodents are used in acute toxicity experiments [44]. In addition, the fasting status is often overlooked. It was reported that overnight fasting affected hormone levels and the sensitivity of animals to drugs [45]. In this study, a 4-h fast is recommended for mice. According to the toxicity categories in the Classification Criteria for Acute Toxicity (Table 14) [46] and the LD50 results (Table 15), nicotine, sinomenine hydrochloride and berberine hydrochloride were classified into Category II (toxic), IV (mildly toxic) and V (low toxicity), respectively. Consequently, we believe that compounds with the same or similar toxicity as these three alkaloids can be tested by iUDP.
However, iUDP is not suitable for the acute toxicity testing of completely non-toxic compounds, a limitation that results from shortening the observation interval to 24 h. In the experiment, surviving mice returned to normal 2~18 h after administration (Tables 2, 4, 6). Nicotine and sinomenine hydrochloride produce a fast poisoning reaction, which was relieved within 4-6 h, but unknown chemicals may take a longer time to show their toxic reaction, as was the case for berberine hydrochloride (deaths occurred 8-18 h after administration). To improve the repeatability of iUDP, the state of each animal should be kept as consistent as possible to reduce individual differences between animals [2,47,48]. It is best to fix the fasting start and end times for each mouse. In this article, the mice were fasted daily from 9:00 to 13:00, and the weight loss of each mouse was between 0.9 and 2.0 g. In addition, the reliability and accuracy of iUDP can be improved by choosing an appropriate initial dosage and slope. The initial dosage should be estimated from all known toxicity information [49]. The slope of the dosage-response curve is a key regulator of the sequential dosage. A larger slope would yield a tighter 95% CI, but may increase the number of animals required; a smaller slope would reduce the accuracy of the 95% CI. If the slope setting is not suitable, the entire experiment faces the risk of failure. Conclusion In light of the experimental results, it may be concluded that iUDP is reliable for detecting the acute toxicity of unknown substances. Compared with traditional acute toxicity methods, iUDP is more animal-friendly and economical, and is therefore suitable for valuable or scarce substances. Availability of data and materials All data generated or analyzed during this study are included in this published article. Declarations Ethics approval and consent to participate The animal experiments were approved by the Division of Animal Control and Inspection, Department of Food and Animal Inspection and Control, Instituto para os Assuntos Cívicos e Municipais (IACM), Macao (AL020/DICV/SIS/2018). For animal welfare reasons, all the animals were treated and
2023-01-30T15:19:56.139Z
2022-01-04T00:00:00.000
{ "year": 2022, "sha1": "9c3662bf531df6df03d194a17397c15c2fa0ab1d", "oa_license": "CCBY", "oa_url": "https://bmcpharmacoltoxicol.biomedcentral.com/track/pdf/10.1186/s40360-021-00541-7", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "9c3662bf531df6df03d194a17397c15c2fa0ab1d", "s2fieldsofstudy": [ "Medicine", "Chemistry", "Environmental Science" ], "extfieldsofstudy": [] }
19499248
pes2o/s2orc
v3-fos-license
Facilitating case studies in massage therapy clinical education. The integration of evidence into reflective health care practice has been on the rise in recent years and is a phenomenon that has affected all health care professions, including massage therapy. Clinical case studies are a research design that follows one patient or subject, making the studies ideal for use in clinical practice. They are valuable for communicating information from clinical practice to the broader community. Case studies have face validity that may be more valuable to individual practitioners than homogeneous randomized controlled trials, as the practitioner may recognize a complex patient in the case report. At Humber College, Student Massage Therapists (SMTs) create, conduct, and communicate results of a clinical case study prior to graduation. This article describes the process and experience. 1 designs, (1) are a research design that follows one patient or subject-a feature that makes case studies ideal for clinical practice. Despite the inability to generalize the results of a case study to the larger population, they are valuable for communicating information from clinical practice to the broader community, including other health care professionals and researchers. Case studies have face validity that may be more valuable to individual practitioners than homogeneous randomized controlled trials, as the practitioner may recognize a complex patient in the case report. (2) On the hierarchy of research, case studies are often found along with qualitative research at the base of the pyramid. Case studies also provide clinicians with a standardized process by which to communicate adverse effects and side effects of, novel approaches to, and both positive and negative unexpected outcomes of, treatment. This is of particular importance in massage therapy as there is a paucity of evidence for many aspects of practice, including safety, efficacy, effectiveness, and cost-effectiveness. This statement is not to suggest that massage therapy is not a useful or cost-effective treatment, rather than more research needs to be conducted on outstanding topics of inquiry to better describe massage therapy practice in general. The integration of evidence into reflective health care practice has been on the rise in recent years and is a phenomenon that has affected all health care professions, including massage therapy. (4)(5)(6)(7) Evidenceinformed practice (EIP) is the use of the best available evidence, patient values, and practitioner expertise to inform the clinical decision-making process. (8) In order for a practitioner to engage in evidenceinformed practice they must have research literacy skills or the ability to find, understand, analyze, and apply evidence to practice. (9) Research in massage therapy continues to grow as attitudes around evidence seem to change. Anecdotal evidence points to a greater demand for research in massage therapy to support or refute current practices and reinforce the safety of the practice. This may be, in part, as the result of changing expectations that massage therapists will have skills in research literacy. In 2005, the regulatory authority for massage therapy in Ontario, Canada included research literacy into the entry-to-practice competencies. (10) The 2012 Inter-jurisdictional Competency Document, which will replace the 2005 competency document in Ontario, goes further to require an academic knowledge of evidence-informed practice. 
(11) While it is this author's opinion that evidence-informed practice should be required at least at a simulation level and where possible at a clinical level, this inclusion does demonstrate a change in attitude of the regulators and the profession of massage therapy in Canada. The faculty in the Massage Therapy Program at Humber College believe the skills of designing, conducting, and communicating the results of a case study are important tools for therapists to have in order to communicate with the larger massage therapy, health care, and research communities. As such, steps were taken to include case study creation and implementation into the curriculum. The case study proposal is broken down into the introduction, methods, ethical considerations, and references. In the introduction, students present the background and context for their clinical case study topic. The research question and hypothesis are also a part of this section. In the methods section, students describe the population, inclusion and exclusion criteria, intervention, data collection, and data analysis. The ethical consideration section includes an information sheet for participants and informed consent form. Students are given REB-approved templates for these sections in which they insert the details of their studies. Over the semester, students submitted their proposals in stages. After each section submission, students received feedback on their proposals that was incorporated in future submissions. Although this involves a considerable amount of time on the part of the instructor, it is valuable to the progression of the students and their proposals. After receiving feedback on the introduction and methods, the students submitted the whole proposal. Using the feedback with which they are provided, students also get one final resubmission. By the end of this process, most submissions are ready for use in the following semester. Conducting a Clinical Case study SMTs conduct the clinical case study in their oncampus internship (student massage therapy clinic) in the fifth semester. Using the inclusion/exclusion criteria from the proposals, subjects are recruited for the studies between the terms. Once subjects are identified, they attend the student clinic for their first appointment where the SMTs enroll the subject using the information sheet and consent form. Once subjects are enrolled, each study progresses on its own pre-established schedule. Students have already had to ensure that the study fits the available treatment time in the semester. In other words, the study can be no longer than 12 weeks, including enrolling the patient, due to the length of the semester. For students who were given a subject later in the term, they must apply for an amendment to the length of the study. During the study, SMTs were supported by their clinical instructors and a research supervisor. For this first attempt, the author was the research supervisor for all 17 projects during the term. This support was intended to help the SMTs collect data according to the schedule. Both data collection and data organization were also supported through learning modules. Learning modules are assignments given within clinical education at Humber College. The modules outline activities related to proscribed learning objectives that must be accomplished. If a student was not able to find a subject, or if the data were not able to be accurately collected, they were required to complete an alternate project. 
They could choose between a retrospective case study Methods At Humber College, Student Massage Therapists (SMTs) create, conduct, and communicate results of a clinical case study prior to graduation. It is anticipated that, through this experience, SMTs develop useful skills in research literacy and research capacity that they can use once they enter practice. The learning outcomes associated with this project include, but are not limited to: (12) ). 4. Discuss the value of conducting research for the massage therapy profession. 5. Apply knowledge and skills related to the research process to create a clinical research study proposal. 6. Conduct a prospective case study and gather the related data. 7. Synthesize the background information, methods, results, and conclusions of a clinical case study into a case report for publication and presentation (oral and poster). 8. Defend a case report in front of a panel of researchers and massage therapy practitioners. It is also hoped that the SMTs develop an appreciation for the research process and the usefulness of this type of knowledge. Creating a Clinical Case study The ability to develop a research study is the first of the competencies related to research capacity. In the Humber program, SMTs create a proposal for a clinical case study in the fourth semester of the sixsemester program. They are encouraged to choose a topic in which they are interested. There are some limitations placed upon those topics by the Humber College Research Ethics Board (REB), which does not allow students to conduct research on vulnerable persons. SMTs must take into consideration this limitation set by the REB, along with the notion of feasibility when choosing their topic for study. In other words, topics should involve populations of patients that will likely be able to be accessed in the area around Humber College. Students are required to search for relevant literature to inform themselves as to the research that has already been conducted in their topic areas. With that said, the focus of this proposal is to experience the research process, not necessarily contribute new knowledge. papers. The clinical case studies spanned the populations of patients with scars (2), and one project each for multiple sclerosis, low back pain during pregnancy, muscle cramps, cerebral palsy, fibromyalgia, narcolepsy, delayed onset muscle soreness, sciatic nerve pain, and well-being. The discussion papers tackled issues such as the usefulness of proper clinical documentation of treatments, and reviews of the literature for massage therapy in the management of delayed onset muscle soreness, body image, HIV/ AIDS, and well-being in a postmastectomy patient. The sixth paper dealt with the proper use of photography as a way of recording outcomes in clinical practice or research. student perspective In their speeches at the end of the Student Massage Therapy Research Night, the student representatives mentioned some of the challenges and rewards of being engaged in a research project. The challenges included performing an accurate and extensive literature search with a limited amount of time and ability to analyze the research articles, incomplete data collection due to patient absenteeism or practitioner forgetfulness, and nervousness about presenting results to experts. However, the students also reported that they felt more confident and knowledgeable in their massage therapy practice. 
They felt better equipped to incorporate research studies into practice in order to communicate better with patients and provide a more effective treatment. Most of all, they felt a sense of pride in their accomplishments. recommended Changes As a result of this experience, there are some recommendations for instructors, students or clinicians who wish to try this model in their own clinics. First, someone must take the time between semesters to review and revise the information sheets and consent forms. The students are provided with considerable feedback and sometimes the changes are not made throughout their documents. In order to fulfill the REB requirements, the materials need to be reviewed. Second, where possible, multiple research supervisors should be used. There is a considerable amount of time that is spent with the students to answer questions and determine whether the study is progressing as it should. Finally, while the discussion papers were good, another alternate assignment closer in similarity to the case study should be considered. In the next year of this project, students without a case study will create a critically appraised topic (CAT)-a tool that helps health practitioners learn about current research that is relevant to practice. (15) This evidence can then be applied to clinical decision-making. created from an existing patient record in the student clinic, or a discussion paper related to one of the difficult aspects of their study. For example, a student used photography to capture posture. Unfortunately, she did not have enough consistency in her approach to complete her case study. For her alternate project, she explored procedures to follow to ensure photographs taken in clinical practice or research would be useful, by looking at the related literature for suggestions. Communication of a Clinical Case study Once the data were collected and organized, SMTs were required to write up the results for publication and presentation, both poster and oral, in a capstone course in the sixth, and final, semester of their program. The first task was to analyze the data collected (with support by the research supervisor who taught the course). Once the data were analyzed, it became clear which projects could continue and which projects did not have sufficient data. In the cases where a project had insufficient data, the students chose to do a discussion paper on an aspect of their topic. Once the project type was confirmed (case study or discussion paper), students wrote papers for publication. They used the author guidelines from the International Journal of Therapeutic Massage and Bodywork (13) to organize and format their papers. Examples of previously published case studies were given to inform and inspire the students. Students had the opportunity to submit their publications twice, once to receive feedback and a second final submission. Using their papers as a base, students then created posters for the Student Massage Therapy Research Night. Students could use a PowerPoint poster template or a software called PosterGenius (14) to create the posters. The size of the poster was 3' by 2'. A draft of the poster presentations was submitted for feedback and revision prior to the Research Night. Students hosted the Research Night in March 2012 and presented the completed posters. Family, friends, faculty, and Humber administration were invited to attend. 
After opening remarks from the President and Dean of Research, attendees were invited to view the posters and ask students questions about their projects. The evening concluded with closing remarks from two student representatives. The final assignment for the course was a 15-minute oral presentation to a panel of experts. The panel consisted of the research supervisor, a massage therapy researcher, and a registered massage therapist. Following the presentation, five minutes were allotted for questions. Results & Discussion In all, 17 projects were completed. Of the 17, 11 were clinical case studies and six were discussion papers. Future studies should evaluate whether or not the opportunity to design and carry out a study changes attitudes toward research literacy and capacity. It would also be interesting to investigate whether research knowledge increases as a result of the process. Similarly, research studies should explore whether students and graduates develop an appreciation for research by conducting their own study. A final area of investigation would be the question of confidence: Do students have more confidence in their clinical practice or professional role following this project? Conclusion Creating, conducting, and communicating clinical case studies in the massage therapy program at Humber College has been challenging but rewarding. The process is complex and requires time and effort. Although there is much research to be done on the educational value of this experience, student reports indicate that this process improves confidence in their clinical and research abilities. Future studies are needed that investigate the impact of conducting clinical case studies on student and graduate attitudes and abilities. Acknowledgments The author wishes to acknowledge Humber College's Graduating Class of 2012 in Massage Therapy, who were the first to participate in this process.
2016-05-16T15:22:47.094Z
2013-03-19T00:00:00.000
{ "year": 2013, "sha1": "21ff15e4f8436069b9feae1faf753ae1c82f96c7", "oa_license": "CCBYNCND", "oa_url": "http://www.ijtmb.org/index.php/ijtmb/article/download/204/255", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "21ff15e4f8436069b9feae1faf753ae1c82f96c7", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
203115142
pes2o/s2orc
v3-fos-license
Stochastic Extended Korteweg-De Vries Equation In the paper, we consider a stochastic Korteweg-de Vries - type equation. We give sufficient conditions for the existence and uniqueness of the local mild solution to the equation with additive noise. We discuss the possibility of the globalization of the mild solution, as well. Introduction Nonlinear wave equations have attracted enormous attention in many fields, e.g. physics (hydrodynamics, plasma physics, optics), technology (electric circuits, light impulse propagation) and biology (neuroscience models, protein and DNA motion). Usually, such equations are obtained as a kind of approximation and/or simplification of the set of several more fundamental equations governing the system with their boundary and initial conditions. Approximations are usually based on the perturbative approach in which some small parameters, related to particular properties of the considered system, appear. Then the relevant quantities are expanded in power series of these small parameters. The limitation to terms of the first or second order allows deriving approximate nonlinear wave equations describing the evolution of a given system. In several fields the lowest (first) order equation takes the form of the Korteweg-de Vries equation (commonly denoted as KdV) [1] ∂u/∂t + ∂u/∂x + (3/2)α u ∂u/∂x + (1/6)β ∂³u/∂x³ = 0. (1.1) It was derived first for surface gravity waves on shallow water but later found in many other systems, see, e.g., [2,3,4,5]. Although the KdV equation displays dominant features of weakly dispersive nonlinear waves, it is a valid approximation only for constant water depth and waves with small amplitudes. For waves with larger amplitudes, the perturbative approach to the Euler equations should be applied up to second order in the small parameters. Then linear terms with fifth order derivatives and new nonlinear terms appear in the final nonlinear wave equation. This equation was derived by Marchant and Smyth and called the extended KdV in [6]. For short we call this equation KdV2, stressing the second order perturbation expansion. Contrary to KdV, this equation is non-integrable. Despite this fact, we found three kinds of analytic solutions to KdV2, namely single soliton solutions, periodic cnoidal solutions, and periodic superposition solutions, see, e.g., [7,8,9]. Nonlinear dispersive waves have attracted considerable attention of mathematicians. Among many examples of mathematical description of those problems, we point out the books of Linares and Ponce [10] and Tao [11]. Surface water waves are subjected to some unpredictable influences of the environment, like winds, bottom fluctuations, etc. These unknown factors can be accounted for by introducing a forcing term of stochastic nature into the wave equation. In the current paper, we study a stochastic version of KdV2. We supply sufficient conditions for the existence and uniqueness of a local mild solution to the Korteweg-de Vries type equation of the form (2.1) below. We follow and generalize the approach of de Bouard and Debussche [12] and Kenig, Ponce and Vega [13,14]. We obtained the existence and uniqueness results on a random interval. The generalization of these results to any time interval with the approach due to de Bouard and Debussche [12] is not possible, since they use some properties of the classical KdV equation and its invariants. In our case, for the extended KdV equation, there exists only one (the lowest) exact invariant; the other ones are only adiabatic (approximate) [15].
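As a purely illustrative aside, and not part of the paper's analysis, the deterministic KdV dynamics discussed above can be integrated numerically with a standard Fourier pseudospectral split-step scheme. The sketch below uses the textbook normalization u_t + 6uu_x + u_xxx = 0 and a sech² soliton initial condition; both choices are our assumptions for illustration, not the scaled form used in the paper.

import numpy as np

n, length = 256, 50.0
x = np.linspace(0, length, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)   # spectral wavenumbers
c = 2.0                                           # soliton speed
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - 10.0)) ** 2

dt, steps = 1e-4, 20000
lin = np.exp(1j * k ** 3 * dt)                    # exact dispersive propagator
v = np.fft.fft(u)
for _ in range(steps):
    v = lin * v                                   # linear step: u_t = -u_xxx
    u = np.real(np.fft.ifft(v))
    v = v - dt * 3j * k * np.fft.fft(u * u)       # nonlinear step: u_t = -3 (u^2)_x
u = np.real(np.fft.ifft(v))
print("soliton peak after t = 2.0:", u.max())     # should stay close to c/2 = 1.0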
In Section 3 we discuss the possibility for some globalization of obtained mild solution to stochastic extended KdV equation studied. We use the near-identity transformation (NIT for short) Kodama [16], Dullin et al. [17] to transform original non-integrable extended KdV equation into the asymptotically equivalent equation which has Hamiltonian form and therefore is integrable. The term asymptotic equivalence means that solutions of both equations coincide when physically relevant coefficients of the equations tend to zero (for details, see Section 3). Existence and uniqueness In this section, we prove the existence and uniqueness of mild solution on a random interval to the stochastic extended KdV-type equation of the form Motivation for studying the equation (2.1) is given in Section 3. In (2.1), W is a cylindrical Wiener process defined on the stochastic basis (Ω, F , (F t ) t≥0 , P) with values on L 2 (R) adapted to the filtration (F t ) t≥0 . The operator Φ belongs to L 0 2 , where L 0 2 := L 0 2 (L 2 (R); H σ (R)) is the space of Hilbert-Schmidt operators acting from L 2 (R) into H σ (R) and H σ (R) is the Sobolev space (see, e.g., Adams [18]), σ > 0. The equation (2.1) is supplemented with an initial condition , is a unitary group generated by the linear part of the KdV equation (1.1). To simplify notation we will use the following abbreviation for stochastic convolution Definition 2.2. For a given set A by n A we shall denote the biggest subset of A defined as n A := u ∈ A : ∂ kn u ∂ x kn ∈ A, k ∈ N . In the paper we shall use the following notation Proof. Proof comes from Proposition 3.5 in de Bouard and Debussche [12]. Now, we can formulate first result. (2.5) , for any T > 0 and all σ , such that 3 4 < σ < 1. Proof. For reader's convenience the proof of Theorem 2.4 is postponed to the section 4. Now, we are able to formulate the existence and uniqueness result. Proof. As we have already written, in the proof we follow the method used in de Bouard and Debussche [12]. We introduce the mapping T defined as follows We want to obtain the following condition From Theorem 3.2 and Proposition 3.5 in de Bouard and Debussche [12] and because u, From Theorem 3.2, Proposition 3.5 de Bouard and Debussche [12], Lemma 2.3 and Theorem 2.4 above, and equations (2.9)-(2.11) we obtain that the mapping T maps the set 2 X σ (T ) into itself if u 0 ∈ 2 H σ (R) and Φ ∈ 2 L 2 L 2 (R, H σ (R)) . We want to find a ball B in 2 X σ (T ) centered at point 0 and radius 2R such that the mapping T B is contraction. More precisely, we want to have the following conditions First, let us note that for any (2.14) From (2.14) and Proposition 3.5. de Bouard and Debussche [12] we obtain the following estimate Here and below we write for shortening which is nondecreasing with respect to T to our estimate. We obtain Now, we shall find R fulfilling condition (2.13)(i). Assume that |u| X σ (T ) < 2R. Then we have From the second inequality we obtain Hence, in order to obtain (2.13) (i), the following inequalities must hold Let us note that the second condition in (2.15) will hold too, if κ4RC(σ , T )T . Then Finally we have Since |u| X σ (T ) ≤ 2R and |v| X σ (T ) ≤ 2R, we have , what is satisfied for any κ > 1. So, we have to choose R 0 and T such that (2.18) Remark 2.7. In order to do this it is enough to take M := sup{M u : u ∈ X σ (T ), |u| X σ (T ) ≤ 4R}. Hence, the mapping T maps the ball B in 2 X σ (T ) centered at 0 with radius 2R into itself and, restricted to this ball, the mapping T is contraction. 
By the Banach contraction theorem, the mapping T has a fixed point in the set ²Xσ(T), which is a unique solution to the equation (2.1) with the initial condition (2.2). Near-identity transformation for KdV2 The famous Korteweg-de Vries equation [1] was first obtained in consideration of the shallow water wave problem with the ideal fluid model. It is assumed that the fluid is inviscid and its motion is irrotational. Then the set of hydrodynamic (Euler's) equations with appropriate boundary conditions at the flat bottom and the unknown surface is obtained. A scaling transformation to dimensionless variables introduces small parameters that allow us to apply the perturbation approach. The first order perturbation approach leads to the KdV equation (3.1) (below written in a fixed reference frame). The more exact, second order perturbation approach gives the extended KdV equation [6], called by us KdV2, which has the form (3.2). In both equations (3.1) and (3.2) there appear parameters α, β, which should be small. The parameter α := A/h is the ratio of the wave amplitude A to the water depth h and determines the nonlinear terms. The parameter β := (h/l)², where l is an average wavelength, describes the dispersion properties. When α ≈ β ≪ 1 we have a classical shallow water problem. However, our recent paper [7] showed that exact solutions of KdV2 (3.2) occur when β is much less than α. Therefore, for further considerations we can safely neglect in (3.2) the last term, with the fifth derivative. Transformation to a moving reference frame x' = x − t and t' = t yields the KdV2 equation in the form (3.3). In the next steps we drop the primes at x' and t', having in mind that (3.3) represents KdV2 in a moving frame. Kodama [16] showed that several nonlinear partial differential equations are asymptotically equivalent. This term means that solutions to these equations converge to the same solution when the parameters α, β → 0. Kodama and several other authors [17,19,20] have shown that asymptotically equivalent equations are related to each other by a near-identity transformation (NIT). Let us introduce the Near Identity Transformation (NIT for short) in the form used in Dullin et al. [17], η' = η ± αaη² ± βbη_xx + ⋯ (3.4). [In the sequel we set the sign +. Then the inverse transformation, up to O(α²), is η = η' − αaη'² − βbη'_xx + ⋯.] NIT preserves the structure of the equation (3.3), at most altering some coefficients. Insertion of (3.4) into (3.3) gives, up to 2nd order in α, β, equation (3.5). Since terms with derivatives with respect to t appear with coefficients α and β, we can replace them by appropriate expressions (3.6) and (3.7) obtained from (3.2) limited to first order (that is, from KdV). Then terms (3.6) and (3.7) cause the changes (3.8). Insertion of (3.8) into (3.5) yields (3.9). Comparison of (3.9) with (3.3) shows that only two coefficients are altered: that at the term containing α², where −3/8 → −3/8 + (3/2)a, and that at the term with αβ η_x η_2x, where 23/24 → 23/24 + a − 3b. Equation (3.9) is asymptotically equivalent to (3.3). NIT gives us some freedom in choosing the coefficients a, b. They can be chosen such that the most nonlinear term (with 3rd order nonlinearity) is canceled and the final equation is integrable. The first goal is obtained if −3/8 + (3/2)a = 0. Integrability is achieved when the coefficient in front of the term with η_x η_2x is twice the coefficient in front of the term with ηη_3x.
So, we can choose b such that 23/24 + a − 3b equals twice the coefficient of the ηη_3x term. Then, applying to (3.3) the NIT (3.4) with the parameters a = 1/4 and b = 1/8, we obtain an asymptotically equivalent integrable equation in the form (3.10). We will show that for (3.10) there exists a Hamiltonian form (3.11), where the Hamiltonian H = ∫_{−∞}^{∞} H dx has the density (3.12). Since H = H(η, η_x), the functional derivative is given by δH/δη = ∂H/∂η − ∂_x(∂H/∂η_x). Insertion of (3.12) into (3.11) gives an equation which coincides with (3.10). It is worth noticing that application of the inverse NIT to (3.10) brings back the equation (3.3) (up to second order in α, β). The existence of the Hamiltonian implies that there exist invariants of the equation (3.10). This is the first step towards obtaining a global mild solution according to the approach due to de Bouard and Debussche [12]. Proof of Theorem 2.4 To make the paper self-contained, we recall the following results. Let A = L^q_ω(L²_t) or A = L^q(Ω), with 1 < q < ∞, and let u be an A-valued function of x ∈ R. Assume that, for some p with 1 < p < ∞ and some σ > 0, the stated condition holds. Proof. With the usual notation, we have (4.1). Let us substitute in Theorem 4.2 v_0 = D^{σ+5/2}Φe_i and α = 1. Then we obtain (4.2). Since Theorem 4.2 holds for all x ∈ R, insertion of (4.2) into (4.1) gives (4.3). Basing on the proof of Proposition 3.3 in de Bouard and Debussche [12], there exists a constant C such that the corresponding estimate holds, and then we obtain the stated bound. Moreover, basing on the proof of Proposition 3.3 in de Bouard and Debussche [12], we have (4.4); then from (4.3) and (4.4) we obtain the required estimate, where H is the Hilbert transform, which finishes the proof.
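For the reader's convenience, the coefficient bookkeeping behind the choice a = 1/4, b = 1/8 can be written out explicitly. A sketch follows; the value 5/12 for the ηη_3x coefficient is taken from the Marchant-Smyth form of KdV2 and should be treated as an assumption here, since the display equations above are referenced only by number.

% Coefficient bookkeeping for the near-identity transformation (3.4).
% Cancelling the cubic nonlinear term requires
\[
  -\tfrac{3}{8} + \tfrac{3}{2}a = 0 \quad\Longrightarrow\quad a = \tfrac{1}{4},
\]
% and integrability (the coefficient of \eta_x\eta_{2x} equal to twice the
% coefficient of \eta\eta_{3x}, assumed to be 5/12 as in Marchant--Smyth) requires
\[
  \tfrac{23}{24} + a - 3b = 2\cdot\tfrac{5}{12}
  \quad\Longrightarrow\quad
  3b = \tfrac{23}{24} + \tfrac{6}{24} - \tfrac{20}{24} = \tfrac{3}{8}
  \quad\Longrightarrow\quad b = \tfrac{1}{8}.
\]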
2019-09-17T03:02:42.224Z
2019-08-30T00:00:00.000
{ "year": 2019, "sha1": "324e2bfc081845399bf1f116d05ffdff7a95894c", "oa_license": "CCBYNC", "oa_url": "https://dergipark.org.tr/tr/download/article-file/803984", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "b2d3719d253a127f75cacd559381fd4dae661e0c", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
235712837
pes2o/s2orc
v3-fos-license
Spatiotemporal analysis of COVID-19 outbreaks in Wuhan, China Few studies have revealed the spatial transmission characteristics of COVID-19 in Wuhan, China. We aimed to analyze the spatiotemporal spread of COVID-19 in Wuhan and its influencing factors. Information on 32,682 COVID-19 cases reported through March 18 was extracted from the national infectious disease surveillance system. Geographic information system methods were applied to analyze the transmission of COVID-19 and its influencing factors in different periods. We found decreases in the effective reproduction number (Rt) and COVID-19-related indicators after a series of effective public health measures were taken, including traffic restrictions, centralized quarantine and a strict stay-at-home policy. The distribution of the COVID-19 case numbers in Wuhan showed obvious global aggregation and local aggregation. In addition, the street-level analysis suggested that population density and the number of hospitals were associated with the COVID-19 case number. The epidemic situation showed obvious global and local spatial aggregations. High population density together with a larger number of hospitals may account for the aggregations. The epidemic in Wuhan was brought under control in a short time after strong quarantine measures and restrictions on the movement of residents were implemented. We conducted a spatiotemporal analysis of COVID-19 transmission and its potential driving factors in Wuhan as of Mar. 18, 2020, using GIS methods. Materials and methods Data source. The data source was well described in a previous publication 9 . In simple terms, information on COVID-19 cases as of March 18 was extracted from the national infectious disease surveillance system, which collected age, sex, residential address (specific to street level), date of illness onset (the self-reported date of symptoms such as fever, cough, or other respiratory symptoms), and date of confirmed diagnosis (the laboratory confirmation date of SARS-CoV-2 in the bio-samples or the date on which the clinician determined the case to be a clinically diagnosed case). The population data (including population size, population density and the ratio of the elderly population) were obtained from the statistical yearbooks issued by Wuhan in 2018. The numbers of public facilities (traffic stations, shopping centers and hospitals) were obtained from Google Maps. Population density was the number of permanent residents per square kilometer; the ratio of the elderly population was the proportion of the population over 60 years who live permanently in the areas; traffic stations included both bus stations and subway stations; shopping centers referred to combinations of retail stores and service facilities in a single building or area that provide comprehensive services to consumers; hospitals with more than 20 beds were included. Ethics approval and consent to participate. Data collection and analysis were determined by the national infectious disease surveillance system; thus, written informed consent or ethics committee/institutional review board approval was not applicable. All subjects were well informed by the physicians and agreed to report their data to the national infectious disease surveillance system at the time of their medical attention. The system keeps patient information confidential, and all personally identifiable information, such as ID and name, was removed before analyzing the data.
Specifically, the addresses of the subjects in this study were only detailed to street level to protect their privacy. Case definitions. Diagnosis of confirmed COVID-19 was conducted according to the diagnostic criteria recommended by the National Health Commission of China10. A confirmed case was defined as a patient with corresponding clinical symptoms and a contact history who had a positive test for SARS-CoV-2 by real-time reverse-transcription polymerase chain reaction (RT-PCR) assay or high-throughput sequencing of nasal and pharyngeal swab specimens. Statistical analysis. To better reflect the epidemic of COVID-19, the effective reproduction number (Rt) was calculated using the method described in a previous publication11. The serial interval (mean: 7.5 days, SD: 3.4 days) derived from a report of the first 425 cases in Wuhan12 was applied to estimate Rt and its 95% confidence intervals via a 10-day moving average. According to the changes in Rt over time, the outbreak was classified into three periods. Period 1: the time before Jan. 24, the pre-cognitive period, when no strong intervention was imposed and the epidemic spread naturally. Period 2: Jan. 24-Feb. 7, the control period, when the spread of COVID-19 was gradually brought under control but the number of cases was still growing (Rt more than 1). Period 3: Feb. 8-Mar. 18, the transmission fading period (Rt less than 1), when all shops were required to close and residents were required to stay at home. Cumulative cases, average daily new cases, doubling time, and the interval from disease onset to diagnosis in the different periods were calculated. The doubling time of COVID-19 in each street was calculated according to the equation introduced by Weon13. More specific calculation methods for the doubling time and other definitions of COVID-19 indicators are described in the methods section of the supplementary material. In order to explore the spatial characteristics of COVID-19 spread, we visualized the distribution trend of the number of onset cases in each street by constructing a cubic polynomial for each period on a 3D grid plot. In addition, Moran's I was calculated to reflect the global and local spatial autocorrelation of the distribution of onset COVID-19 case numbers in the different periods. The Monte-Carlo method with 999 simulations was used to test the significance of Moran's I. A cluster map of local indicators of spatial association (LISA) was drawn to show the degree and significance of local spatial clustering of cases in one street and its adjacent streets. The modes of local case spatial clustering were divided into five kinds: (1) high-high (area with high case numbers surrounded by areas with high case numbers), (2) low-low (area with low case numbers surrounded by areas with low case numbers), (3) low-high (area with low case numbers surrounded by areas with high case numbers), (4) high-low (area with high case numbers surrounded by areas with low case numbers), (5) not significant (no significant clustering found). The calculation method of Moran's I was described in detail in a previous publication14. To quantify the contribution of the population density and public facilities in each street to the number of COVID-19 onset cases, a spatial lag model (SLM) was applied to conduct spatial correlation analysis15. Given the possibility that mediators lie between the possible risk factors and the outcome, we tested this possibility with a mediation model (supplementary material).
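To make the Rt estimation step above concrete, here is a minimal sketch of the standard serial-interval-based instantaneous reproduction number (cases on day t divided by the expected infectiousness contributed by earlier cases), discretizing a gamma serial interval with the stated mean of 7.5 days and SD of 3.4 days and applying a 10-day moving average. The exact estimator of reference 11 may differ in detail; the function names and the smoothing choice here are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import gamma

def serial_interval_pmf(mean=7.5, sd=3.4, max_days=30):
    # Discretized gamma serial-interval distribution (illustrative assumption).
    shape = (mean / sd) ** 2
    scale = sd ** 2 / mean
    cdf = gamma.cdf(np.arange(max_days + 1), a=shape, scale=scale)
    pmf = np.diff(cdf)
    return pmf / pmf.sum()

def estimate_rt(daily_cases, window=10):
    # Rt(t) = I_t / sum_s I_{t-s} w_s, then a 10-day moving average as in the text.
    w = serial_interval_pmf()
    inc = np.asarray(daily_cases, dtype=float)
    rt = np.full(len(inc), np.nan)
    for t in range(1, len(inc)):
        lam = sum(inc[t - s] * w[s - 1] for s in range(1, min(t, len(w)) + 1))
        if lam > 0:
            rt[t] = inc[t] / lam
    smooth = np.convolve(np.nan_to_num(rt), np.ones(window) / window, mode="same")
    return rt, smooth
```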
All analyses were performed with R software (version 3.6.2), ArcGIS 10.2, and GeoDa 1.14.0.0. All figures were created via ArcGIS or GeoDa. Two-sided tests were considered statistically significant when the P value was less than 0.05. Reporting regulations. Experiments on humans were not included in this study, so it was reported according to the conventions of general epidemiological studies. Results Transmission of COVID-19 in 3 time periods. By March 18, a total of 32,682 cases had been identified from the national infectious disease surveillance system (Table S1). Estimates of the effective reproduction number Rt through the whole epidemic period are shown in Fig. 1. Rt varied in period 1 with a peak of 3.86 on Jan. 23, and declined in periods 2 and 3. Rt fell below 1.0 on Feb. 8, 2020 and further decreased to below 0.1 on Mar. 15, 2020. A basic epidemiological analysis of the differences among the periods is shown in Table 1. The numbers of onset cases in the three periods were 6,981, 18,381, and 7,320, respectively. The average daily new cases in the three periods were 166.2, 1,225.4, and 209.1, respectively. Cumulative prevalence (per thousand) rose from 0.6 in period 1 to 2.9 in period 3. The average daily attack rate (per million) in the three periods was 0.003, 0.019, and 0.003, respectively. The median doubling time increased from 3.6 days in period 1 to 103.9 days in period 3, while the median interval from disease onset to diagnosis decreased from 20.0 days in period 1 to 3.0 days in period 3. The spatiotemporal distribution of COVID-19 cases in Wuhan. A total of 179 streets in Wuhan city were included in the present analysis, and COVID-19 cases were reported from 177 of them. Global spatial trends in the whole epidemic and the three periods are visualized in Fig. 2. The trend lines suggested that COVID-19 cases aggregated in the central urban area in all periods, but this overall trend of aggregation decreased markedly in period 3. Global spatial autocorrelations in the whole epidemic and the different periods were examined by Moran's I (Fig. 3). In all Moran scatter plots, the points mainly aggregated in the first, second, and third quadrants, suggesting that the spatial distribution of COVID-19 onset cases in all periods was mainly composed of three patterns: high-high, low-high, and low-low. Moran's I was greater than 0 in all periods, but decreased from 0.31 in period 1 to 0.12 in period 3. Significance tests of Moran's I performed by the Monte-Carlo method with 999 simulations indicated that significant (pseudo p value < 0.05) global autocorrelation existed in all periods (Figure S1). In order to obtain a more detailed view of the spatial distribution of COVID-19 onset cases in the different periods, a LISA cluster map was employed to graphically demonstrate the local autocorrelation of COVID-19 onset cases at street level (Fig. 4). From the perspective of the whole epidemic, the main modes of onset case clustering from the central urban area to the marginal urban area were high-high, high-low or low-high, and low-low, successively.
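To make the global Moran's I values and the 999-simulation significance tests reported above concrete, the sketch below implements the statistic and its Monte-Carlo pseudo p-value. The authors used GeoDa, so the contiguity weights matrix W and the function names here are illustrative assumptions rather than their actual workflow.

```python
import numpy as np

def morans_i(x, W):
    # Global Moran's I: (n / sum(W)) * (z' W z) / (z' z), with z = x - mean(x).
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

def moran_pseudo_p(x, W, n_sim=999, seed=0):
    # Pseudo p-value from Monte-Carlo permutations, mirroring GeoDa's test.
    rng = np.random.default_rng(seed)
    observed = morans_i(x, W)
    sims = np.array([morans_i(rng.permutation(x), W) for _ in range(n_sim)])
    return observed, (np.sum(sims >= observed) + 1) / (n_sim + 1)

# Toy usage: four streets on a line with binary contiguity weights.
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
cases = np.array([120, 100, 10, 8])
print(moran_pseudo_p(cases, W))  # positive I with a small pseudo p suggests clustering
```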
As shown in Table 2, the number of streets that did not present significant clustering increased from 18 in period 1 to 54 in period 3. Closer inspection of Table 2 showed that this trend was due to the decrease in high-high and low-low aggregation. Analysis of spatial differentiation drivers. To explore the driving factors of the spatial differentiation of COVID-19 cases, we performed a tertile analysis of the streets according to the population density or the number of public facilities in each street (Table S2). The results suggested that all COVID-19 indicators (including the cumulative number of cases, average prevalence, doubling time, and daily new cases) increased monotonically across tertiles of population density (all P trend < 0.05). The number of daily new cases in the three periods, as well as the average prevalence and the cumulative cases of COVID-19 (all P trend < 0.05), increased significantly with the number of hospitals. We did not observe any monotonic trend between the number of shopping centers and the COVID-19-related indicators (except the number of average daily new cases), or between the number of traffic stations and the COVID-19 indicators. To further validate these potential associations, spatial lag models were constructed to detect the association of the number of COVID-19 onset cases with population density, the ratio of the elderly population, and the number of public facilities at street level. As shown in Table 3, population density (coefficient: 0.001) and the number of hospitals (coefficient: 27.236) were significantly associated with the number of onset cases at street level (both P < 0.05) throughout the whole epidemic, whereas the ratio of the elderly population and the numbers of the other public facilities were not. When stratified into three periods, significant associations of onset cases with population density (coefficient: 0.001 in periods 1 and 2) and the number of hospitals (coefficient: 5.660 in period 1, 14.694 in period 2) were observed in periods 1 and 2. In addition, the number of traffic stations was positively associated with onset cases, with a coefficient of 4.416, in period 2. Strikingly, no significant association between population density and onset cases was found in period 3. Nonetheless, the number of hospitals was still positively associated with elevated onset cases in period 3, but the coefficient was lower than that in period 2 (6.928 vs 14.694). In further mediation analysis, a significant mediation effect of the number of hospitals on the association between population density and the COVID-19 case number over the whole epidemic was observed. The mediation proportion was 29.7% (Figure S2). Discussion The present study found that the transmission of COVID-19 in Wuhan experienced three periods, of outbreak, control, and decline, over time, and presented spatial clustering in the central urban area. In addition, population density and the number of hospitals were both positively associated with COVID-19 indicators at street level. In the early stage, Rt reached a peak on Jan. 23. However, the government intervened with a series of public health measures after the discovery of conclusive evidence that COVID-19 could be passed from person to person16. The present study divided the epidemic of COVID-19 in Wuhan into three periods. In period 1, when no strong intervention was implemented, the doubling time of COVID-19 cases was 3.6 days, which was shorter than the 7.5 (5.3-19) days calculated by model simulations in an earlier study12.
Such a difference may be due to the limited detection capacity in the early stage of the outbreak, resulting in some cases not being confirmed in a timely manner and the transmission not being properly assessed. In period 2, indicators of transmission, including onset cases and average daily new cases, indicated that the epidemic was still rising, but changes in doubling time and Rt both suggested that the epidemic was under control to some degree. On one hand, as the incubation period of COVID-19 can be up to 14 days17,18, changes in the indicators may lag behind the impact of intervention measures. On the other hand, mild and suspected cases were required to isolate at home in that period, which still carried a great risk of transmission, especially in areas with high population density. In period 3, the doubling time increased to more than 10 times that of the previous period. In fact, almost all of the identified potential infectors were isolated in period 3, and the strict stay-at-home policy for all residents cut off transmission to a great extent. Therefore, strict measures to isolate and limit population movements, rather than just restricting public transportation and public gatherings, are needed to control an outbreak of COVID-19 in a short time. The present study found that the epidemic situation showed obvious aggregation in the central urban areas, where the first case was found. In all three periods, significant spatial autocorrelations of COVID-19 onset case numbers in Wuhan were found, especially in periods 1 and 2. The transmission of COVID-19 in the first two periods tended to spread from high-incidence areas to low-incidence areas. The size of the aggregation decreased in the later stage of the epidemic (after the implementation of strict population movement control measures, period 3). Such a change in spatial distribution characteristics suggested that the maximum restriction of human movement during the outbreak may have a significant effect, especially in high-incidence areas. Our study also found that the population density as well as the number of hospitals in the streets were associated with COVID-19 indicators. In addition, the number of hospitals may play an important mediating role. Studies have proposed that hospitals may become sources of infection during public health emergencies19. Several studies20,21 investigating nosocomial infection concluded that the incidence of COVID-19 due to nosocomial infection is not low.
An investigation of 662 inpatients with COVID-19 at an NHS Trust in South London suggested that 45 (6.8%) inpatients were likely infected while seeking medical attention20. An analysis of 138 COVID-19 cases conducted by a hospital in Wuhan showed that the ratio of nosocomial infection was up to 41.3%21. In fact, large numbers of residents with similar or suspected symptoms of COVID-19 flocked to hospitals to seek treatment, which not only led to the directional movement of cases, but also increased the risk of cross-infection. However, a number of public health interventions were implemented by the Wuhan government from Jan. 23 to Feb. 18, including the shutdown of public gathering places, restrictions on inner-city traffic, and a strict stay-at-home policy for all residents. These effective interventions may explain why we did not observe an association of traffic stations with an increased number of average daily new cases: restricting traffic eliminated the impact of the number of stations on the COVID-19 indicators. It is surprising that no association was observed between the ratio of the elderly population and the number of onset cases, even though multiple studies3-6 and our results jointly confirmed the susceptibility of the elderly to COVID-19. This may be because areas with a high ratio of elderly population had lower population density, and some of them are located in remote areas22. The lower population density and lower population mobility resulted in a lower probability of infection among the residents of these areas. The application of GIS methods to infectious diseases may provide additional epidemiological clues for the COVID-19 outbreak. For example, Rui Huang et al.23 made a prediction of the spatial-temporal distribution of COVID-19 in China at the early stage of the epidemic by constructing a GIS model. In addition, Mohsen Shariati et al.24 used hot spot analysis coupled with Anselin local Moran's I to determine the high-risk districts of COVID-19 over the world. The present study performed a spatiotemporal analysis of COVID-19 transmission in Wuhan, China for the first time. Further investigations are needed to identify more spatial characteristics of the COVID-19 epidemic. This has important public health implications, especially in terms of providing a basis for public health measures. There are some limitations in this study. First, the retrospective observational study design precludes causal inference. Second, because the data were extracted from the national infectious disease surveillance system, other extraneous factors, such as incubation period, medical treatment strategies, and vital status, were not available. Therefore, control for confounding may not be sufficient. Third, the street characteristics data and the COVID-19 case data were not from the same data source. This may lead to the possibility of bias in the results.
Conclusion The epidemic of COVID-19 in Wuhan showed obvious aggregation. High population density and a high number of hospitals may be risk factors for the transmission of COVID-19 in Wuhan. The spatiotemporal analysis of COVID-19 transmission in Wuhan suggests that maximum restriction of human movement and strict isolation should be taken into consideration in order to control an outbreak in a short time. Data availability The datasets used and/or analyzed in the current study are available from the corresponding author on reasonable request. Contact information for the data access committee: hbcdc_limingyan@163.com (e-mail).
2021-07-03T06:17:05.109Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "d1afcd759d882fc5ccf98ec598b0399e5c81bbf4", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-93020-2.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7e7036b6f6a40cc87d99a382b0ba7d7dfc386310", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3031855
pes2o/s2orc
v3-fos-license
The in vitro and in vivo anti-tumor effects and molecular mechanisms of suberoylanilide hydroxamic acid (SAHA) and MG132 on the aggressive phenotypes of gastric cancer cells Here, we found that SAHA and MG132 synergistically inhibited proliferation, glycolysis, and mitochondrial oxidation, and induced cell cycle arrest and apoptosis in MGC-803 and MKN28 cells. SAHA increased cell migration and invasion at a low concentration. SAHA induced the overexpression of acetyl-histones 3 and 4, which were recruited to the p21, p27, Cyclin D1, c-myc, and nanog promoters to transcriptionally up-regulate the former two and down-regulate the latter three. The expression of acetyl-histones 3 and 4 increased during gastric carcinogenesis and positively correlated with cancer differentiation. SAHA and MG132 exposure suppressed tumor growth by inhibiting proliferation and inducing apoptosis in nude mice, increased serum ALT and AST levels, and decreased the hemoglobin level and the white blood cell and neutrophil counts. These data indicate that SAHA and MG132 in vivo and in vitro synergistically induced cytotoxicity and apoptosis and suppressed the proliferation, growth, migration, and invasion of gastric cancer cells. Therefore, they might potentially be employed as chemotherapeutic agents if the hepatic injury and the killing effects on peripheral blood cells are avoided or ameliorated. INTRODUCTION Histone deacetylases (HDACs) function together with histone acetyltransferases (HATs) to accurately control gene expression by altering nucleosome conformation and the stability of several large transcription factor complexes. HDAC inhibitors are a promising class of anticancer epigenetic drugs that suppress growth and induce differentiation and apoptosis in cancer cells in vitro and in vivo [1,2]. Suberoylanilide hydroxamic acid (SAHA, vorinostat) is a synthetic hydroxamic acid that inhibits class I and II HDACs via the coordination of its hydroxamic acid group with a zinc atom at the bottom of the catalytic cavity, and finally acetylates the histones within transcription factor complexes [3,4]. SAHA has been approved by the US Food and Drug Administration (FDA) and has limited application in solid tumors [5]. Reportedly, SAHA acts directly on the promoter region of the thioredoxin (TRx) binding protein-2 (TBP-2) gene and up-regulates TBP-2 expression. TBP-2 protein interacts with TRx protein, which inactivates such biological functions as scavenging reactive oxygen species (ROS) and activating ribonucleotide reductases [6,7]. You et al. [8] demonstrated that SAHA inhibited the growth of HeLa cells and induced their apoptosis, which was accompanied by PARP cleavage, caspase-3 activation, loss of mitochondrial membrane potential, and ROS production. Ding et al. [9] found that SAHA triggered MET and Akt phosphorylation in an HGF-independent manner, and siRNA silencing of MET enhanced SAHA-induced apoptosis of PC3 and A549 cells. Liu et al. [10] reported that SAHA inhibited the growth, reduced the migration, and induced cell-cycle arrest, apoptosis, and autophagy of paclitaxel-resistant ovarian cancer OC3/P cells. Gastric cancer continues to be one of the deadliest cancers in the world, and the identification of new target drugs is therefore of significant importance [11]. Yoo et al. [12] demonstrated that a three-weekly SAHA-cisplatin regimen was feasible and recommended for further development in advanced gastric cancer. Zhou et al.
[13] found that SAHA enhanced the anti-tumor activity of oxaliplatin in vitro and in vivo by reversing oxaliplatin-induced Src activation and increasing γH2AX expression and the cleavage of Caspase-3 and PARP in gastric cancer cells. Huang et al. [14] reported that RUNX3 was up-regulated by SAHA and increased SAHA chemosensitivity in gastric cancer cells. Here, we observed the effects of SAHA and/or MG132 (a proteasome inhibitor) on the phenotypes of gastric cancer cells and their synergistic effects, and subsequently clarified the related molecular mechanisms. To clarify the clinicopathological significance of acetyl-histones 3 and 4, their expression was determined in gastric cancer and non-neoplastic mucosa (NNM) by western blot or immunohistochemistry, and compared with the clinicopathological parameters of the gastric cancers. Finally, their inhibitory effect on tumor growth was determined in a tumor-bearing nude mouse model. The effects of SAHA and MG132 on the phenotypes of gastric cancer cells Exposure to SAHA and MG132 suppressed the proliferation of MGC-803 and MKN28 cells in both concentration- and time-dependent manners, with a synergistic effect (Figure 1A, p<0.05). According to PI staining, SAHA treatment induced G1 arrest, while MG132 induced G2/M arrest in MGC-803 and MKN28 cells (Figure 1B). SAHA could reciprocally weaken the effects of MG132 on the cell cycle. As shown in Figure 2A, treatment with either SAHA or MG132 induced the apoptosis of MGC-803 and MKN28 cells in a concentration-dependent and synergistic manner according to Annexin-V and PI staining. The same held for senescence, as evidenced by β-galactosidase staining (Figure 2B). SAHA and MG132 synergistically suppressed the glycolysis and mitochondrial respiration of MKN28 cells (Figure 2C, p<0.05). Wound healing and matrigel transwell invasion assays indicated that SAHA increased cell migration and invasion at a low concentration. MG132 suppressed the ability of gastric cancer cells to migrate and invade, and ameliorated the effects of SAHA (0.6 μM) on the migration and invasion of gastric cancer cells (Figure 3A-3C). As shown in Figure 3D, 2.0 μM SAHA suppressed lamellipodia formation in gastric cancer cells, while MG132 did not. The association of acetyl-histone 3 and 4 expression with the tumorigenesis and clinicopathological parameters of gastric cancer Immunohistochemically, acetyl-histones 3 and 4 were distributed in the nuclei of gastric epithelial cells, adenoma, and cancer (Figure 5A-5F). Statistically, acetyl-histone 3 immunoreactivity was stronger in gastric adenoma and cancer than in gastritis (Figure 5G, p<0.01). As shown in Figure 5I, acetyl-histone 4 protein showed a higher expression level in gastric cancer than in gastric adenoma (p<0.01) and gastritis (p<0.001). Acetyl-histone 4 positivity was also stronger in gastric adenoma than in gastritis (Figure 5I, p<0.001). In addition, both acetyl-histone 3 and 4 proteins were more strongly expressed in intestinal- than in diffuse-type carcinomas (Figure 5H and 5J, p<0.001). As shown in Figure 5K, a higher expression of acetyl-histones 3 and 4 was detectable in gastric cancer than in the paired mucosa, as evidenced by Western blot (p<0.05). The inhibitory effects of SAHA and MG132 treatment on the tumor growth of gastric cancer cells in nude mice MGC-803 and MKN28 cells were subcutaneously transplanted into immune-deficient mice. The tumor volumes of the xenografts became smaller than those of the controls after treatment with SAHA and/or MG132, as determined by volume calculation and weighing, respectively (Figure 6A-6C, p<0.05).
SAHA or MG132 exposure increased the serum levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) in the nude mouse model, but only SAHA decreased the AST/ALT ratio (Figure 6D, p<0.05). Furthermore, MG132 remarkably reduced the hemoglobin (HGB) level and the numbers of white blood cells (WBC) and neutrophil granulocytes (GRA), while SAHA did not (Figure 6E, p<0.05). Both reagents showed synergistic effects on the above-mentioned six values (Figure 6A-6E). Exposure to SAHA and/or MG132 did not alter the morphology of the bone marrow according to Wright-Giemsa staining, but reduced proliferation and induced apoptosis in comparison with the control, as shown by Ki-67 immunostaining and TUNEL assay, respectively (Figure 6F). After treatment with SAHA, MGC-803 and MKN28 cells showed a higher expression of acetyl-histones 4 and 3 in the xenograft cancer cells than the control, whereas MG132 had no such effect (Figure 6F). DISCUSSION The anti-tumor activity of SAHA has been reported in vitro in leukemia, mantle cell lymphoma, chondrosarcoma, hepatoma, and pancreatic, breast, prostate, and colon cancers [7][8][9]. The cytotoxicity of SAHA was independent of cellular chemoresistance and P-glycoprotein expression [7,15]. The SAHA/parthenolide combination induced GSH depletion, a fall in Δψm, release of cytochrome c, Caspase-3 activation, and apoptosis by targeting the Akt/mTOR/Nrf2 pathway [16]. Xu et al. [17] reported that SAHA exerted significant inhibitory effects on the survival, proliferation, migration, and vasculogenic mimicry of pancreatic cancer cells. The combination of 5-aza-2'-deoxycytidine, cisplatin, or paclitaxel with SAHA inhibited ovarian cancer growth and induced apoptosis, G2/M phase arrest, and autophagy [18,19]. Our experimental evidence has shown that SAHA and MG132 reduced cell viability in gastric cancer cells in both dose- and time-dependent manners. In addition, both reagents suppressed glycolysis and mitochondrial function and induced apoptosis, cell cycle arrest, and senescence in gastric cancer cells. It is worth noting that SAHA enhanced cell migration and invasion at a low concentration. Taken together, these findings suggest that SAHA and/or MG132 may inhibit the aggressive phenotypes of gastric cancer cells, but it is essential to maintain a higher serum concentration of SAHA if it is not combined with MG132. SAHA is approved for the treatment of cutaneous T-cell lymphoma by the US FDA [20]. Phase I and pharmacodynamic studies showed that the combination of SAHA with pelvic palliative radiotherapy, or with capecitabine and cisplatin, was safe and effective for gastric cancer [21,22]. It has been documented that the combination of SAHA and Cabozantinib resulted in a synergistic induction of cell apoptosis and growth suppression in prostate and lung cancers [23]. Oxaliplatin and SAHA were found to suppress the survival and growth of gastric [13] and hepatocellular [24] cancer cells by inducing apoptosis or DNA damage.
In nude mice, we demonstrated that SAHA and/or MG132 significantly suppressed the tumor growth of gastric cancer cells by decreasing proliferation and increasing apoptosis, in line with other evidence in prostate [25] and pancreatic [26] cancers. Reportedly, SAHA-mediated inhibition of cell cycle progression and induction of apoptosis were dependent on the cell microenvironment and subsequently caused the tumor growth inhibition of colorectal cancer cells [27]. In addition, we found that the administration of SAHA and MG132 damaged hepatic function, but only MG132 had cytotoxic effects on peripheral blood cells. Taken together, we suggest that SAHA and MG132 might potentially be employed as chemotherapeutic agents for gastric cancer if the hepatic injury and the killing effects on peripheral blood cells are prevented. Here, we found that SAHA promoted the acetylation of histones 3 and 4 in gastric cancer cells, which were recruited to the promoters of p21, p27, c-myc, Cyclin D1, and nanog, leading to the up-regulated transcription of the former two and the down-regulated expression of the latter three. Both p21cip1/waf1 and p27Kip1 can interact with the cyclin-CDK complex and induce G1 arrest [28]. Therefore, SAHA might transcriptionally up-regulate the expression of p21 and p27 at both the protein and mRNA levels to cause G1 arrest of gastric cancer cells; SAHA-induced Cyclin D1 hypoexpression was likely also responsible for this arrest. It has been reported that SAHA exposure up-regulated the expression of Cyclin D1 in colon cancer cells [29] and mantle cell lymphoma cells [30]. SAHA was demonstrated to reverse chemoresistance in head and neck cancer cells by targeting cancer stem cells via the down-regulation of nanog [31], in line with our result. Wang et al. [32] found that K-ras conferred SAHA resistance by up-regulating c-myc expression. Although SAHA decreased the levels of c-myc in pancreatic cancer cells [26], we found that acetyl-histones bound to the promoter of c-myc and suppressed its transcription, finally reversing the aggressive phenotypes of gastric cancer cells. Additionally, our results showed no difference in the expression of VEGF and MMP-2 in gastric cancer cells treated with SAHA and/or MG132, suggesting that their regulatory effects on migration and invasion were independent of both molecules. SAHA increased LC-3B expression but did not alter the expression level of Beclin 1, suggesting that the inducing effect of SAHA on autophagy was independent of Beclin 1 and did not belong to the classic autophagy pathway. In the xenograft model and cell experiments, we found that SAHA up-regulated histone acetylation, supporting the view that histones 3 and 4 are potent targets of SAHA. According to the immunohistochemistry and Western blot data, we found for the first time that the expression of both acetyl-histones 3 and 4 was significantly higher in gastric cancer than in adenoma and gastritis, indicating that their overexpression may be a reactive response and might increase SAHA sensitivity during gastric carcinogenesis. Reportedly, the expression of histones 3 and 4 may be a potential marker for monitoring the efficacy of SAHA, as described in peripheral blood mononuclear cells [33]. In contrast to previous reports about ovarian cancer [7] and renal cell carcinoma [34], both proteins were positively linked to the degree of differentiation of gastric cancer, indicating that they might underlie the molecular mechanisms of the differentiation of gastric cancer.
In conclusion, SAHA and MG132 have synergistic effects in vivo and in vitro, inducing cytotoxicity and reversing the aggressive phenotypes of gastric cancer cells by inducing apoptosis and suppressing proliferation, growth, migration, and invasion. SAHA at a low concentration might promote the migration and invasion of gastric cancer cells. SAHA may increase the expression of acetyl-histones 3 and 4 and thereby up-regulate or down-regulate the mRNA expression of downstream genes, including Cyclin D1, p21, p27, c-myc, and nanog. Histone acetylation may be positively linked to the tumorigenesis and differentiation of gastric cancer. Therefore, SAHA and/or MG132 could potentially be employed as chemotherapeutic drugs in clinical practice. Cell culture The gastric cancer cell lines MGC-803 (poorly-differentiated adenocarcinoma) and MKN28 (well-differentiated adenocarcinoma) were purchased from the ATCC (Manassas, VA, USA). The cells were maintained in RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS), 100 U/mL penicillin, and 100 μg/mL streptomycin in a humidified atmosphere of 5% CO2 at 37°C. All cells were harvested by centrifugation, rinsed with phosphate-buffered saline (PBS), and subjected to total protein and RNA extraction. We exposed the cells to SAHA and MG132 for the following experiments. Proliferation assay Cell Counting Kit-8 (CCK-8; Dojindo, Kumamoto, Japan) was employed to determine the number of viable cells. In brief, 2.5 × 10^3 cells/well were seeded into 96-well plates and allowed to adhere. At specified time points, 10 μL of CCK-8 solution was added to each well and the plates were incubated for a further 3 h. The number of viable cells was assessed by measuring the absorbance at 450 nm. Cell cycle analysis The cells were detached by trypsinization, collected, washed twice with PBS, and fixed in 10 mL ice-cold ethanol for at least 2 h. The cells were washed twice with PBS again and incubated with 1 mL RNase (0.25 mg/mL) at 37°C for 1 h. The cells were pelleted, resuspended in propidium iodide (PI, 50 μg/mL), and incubated in the dark for 30 min. Cell cycle analysis was performed by analyzing the PI staining by flow cytometry. Apoptosis assay Flow cytometry was performed following FITC-labeled Annexin V and PI staining (KeyGEN Biotech) to detect phosphatidylserine externalization as an endpoint indicator of apoptosis. In brief, cells were washed with PBS, resuspended in 1× Binding Buffer, and then incubated with 5 μL FITC-Annexin V and 5 μL PI. Samples were gently vortexed and incubated for 15 min in the dark, and then 400 μL of 1× Binding Buffer was added to each tube. Flow cytometry was performed within 1 h. Wound healing assay 1.0 × 10^6 cells were seeded in 6-well culture plates and scratched with a pipette tip upon reaching 80% confluence. Cells were washed three times with PBS and cultured in FBS-free medium. Cells were photographed and the scratch area was measured using ImageJ software. Cell invasion assay 2.5 × 10^5 cells were resuspended in serum-free RPMI-1640 and seeded into the top chamber of matrigel-coated transwell inserts. The lower compartment of the chamber contained 10% FBS as a chemoattractant. After incubation for 24 h, the cells on the upper surface of the membrane were wiped away, and the cells on the lower surface of the membrane were washed with PBS, fixed in methanol, and stained with Giemsa dye to quantify the extent of invasion.
Immunofluorescence Cells were grown on glass coverslips, fixed with PBS containing 4% formaldehyde for 10 min, and permeabilized with 0.2% Triton X-100 in PBS for 10 min. After washing with PBS, the cells were incubated overnight at 4°C with Alexa Fluor 594 Phalloidin (Invitrogen) to visualize the lamellipodia. Nuclei were stained with 1 μg/mL DAPI (Sigma-Aldrich) for 15 min at 37°C. The coverslips were then mounted with SlowFade Gold Antifade Reagent (Invitrogen) and observed under a confocal laser microscope (Olympus, Tokyo, Japan). β-galactosidase staining A β-galactosidase staining kit (Beyotime, China) was used to assess senescence. Cells (5 × 10^5) were seeded in 6-well dishes and incubated for 2 days. All cells were washed twice with PBS and fixed with 4% paraformaldehyde for 15 min at room temperature. The cells were then incubated overnight at 37°C with the working solution containing 0.05 mg/mL X-gal. Finally, the cells were examined under an inverted light microscope. Metabolism assays Oxygen consumption rates and extracellular acidification rates were measured in XF media (non-buffered RPMI 1640 containing either 10 mM or 25 mM glucose or galactose, 2 mM L-glutamine, and 1 mM sodium pyruvate) under basal conditions and in response to the mitochondrial inhibitors 1 mM oligomycin and/or 100 nM rotenone + 1 mM antimycin A (Sigma) on XF-24 or XF-96 Extracellular Flux Analyzers (Seahorse Bioscience). ATP measurements were performed with the ATP determination kit (Invitrogen), and glucose concentrations were measured with a glucose assay kit (Eton Bioscience Inc.). Selection of patient samples Samples of gastric cancer (n=447), adenoma (n=47), and gastritis (n=72) were collected from patients undergoing surgical resection between January 2003 and December 2011 at our hospital. The average age of the patients at surgery was 51.6 years (range 20-81 years). Sixteen cases of fresh gastric cancer and matched mucosa were also sampled at our hospital. None of the patients had undergone chemotherapy, radiotherapy, or adjuvant treatment prior to surgery. Informed written consent was obtained from all participants, and the study was approved by our University Ethics Committee. Pathology and tissue microarray (TMA) analysis Tumor histology was determined according to Lauren's classification system [35]. TMAs were established as reported previously [35]. Consecutive 4 μm sections were cut from the recipient block and transferred to poly-lysine-coated glass slides. Western blot analysis After protein extraction, denatured proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis on acrylamide gels and transferred to Hybond membranes. The membranes were blocked overnight in 5% skim milk in TBST. For immunoblotting, the membranes were incubated with the primary antibody (Table 3), rinsed with TBST, and incubated with IgG antibodies conjugated to horseradish peroxidase (HRP; Dako). Bands were visualized on X-ray film using ECL-Plus detection reagents. Densitometric quantification of acetyl-histone 3 and 4 protein expression in the gastric samples was performed using ImageJ software, with GAPDH as a control. Chromatin immunoprecipitation (ChIP) ChIP assays were performed using the Magna ChIP G kit (Upstate) according to the manufacturer's instructions. The primer sequences targeted the c-myc, Cyclin D1, p21, p27, and nanog genes (Table 1). PCR amplification was performed in 20 μL mixtures, and the amplicons were separated on agarose gels.
Real-time reverse transcriptase-polymerase chain reaction (real-time RT-PCR) Total RNA was extracted from the gastric cancer cell lines using Trizol (Takara). Real-time RT-PCR was performed from 2 μg of total RNA using AMV reverse transcriptase and random primers (Takara). PCR primers were designed according to the sequences in GenBank and are listed in Table 2. Amplification of cDNA was performed using the SYBR Premix Ex Taq II kit (Takara), with GAPDH as an internal control. Xenograft models Locally bred female Balb/c nude (nu/nu) mice were used for implantation at the age of 6-8 weeks. They were maintained under specific pathogen-free conditions and sacrificed at the end of the treatment period. For each tumor, measurements were made using calipers, and tumor volumes were calculated as follows: width^2 × length × 0.52. The tumors were subsequently fixed in 4% paraformaldehyde and then embedded in paraffin for the preparation of blocks. Measurement of serum enzymes, blood, and bone marrow cells The peripheral blood of the nude mice was collected from the abdominal vein, kept in a disposable venous blood sample collection vessel, and centrifuged at 4000 rpm for 5 min. Afterwards, the supernatant was analyzed for alanine aminotransferase (ALT), aspartate aminotransferase (AST), and the AST/ALT ratio with an automatic biochemical analyzer (Hitachi 7600). Other peripheral blood samples were kept in BD Vacutainers containing EDTA-K2. Blood cell indexes, such as the total white blood cell count, neutrophil count, and hemoglobin, were measured with an automated five-part differential hematology analyzer (Sysmex XS-500i). Finally, following the separation of the femurs, bone marrow cells were harvested from the epiphyses of the femurs and microscope slides were prepared. Giemsa-Wright staining was used to observe the bone marrow cell morphology and the proportions of the cells under a biological microscope. Immunohistochemistry Consecutive sections of the tissue samples were deparaffinized with xylene, rehydrated with alcohol, and subjected to immunohistochemical staining with intermittent microwave irradiation as previously described [36]. The rabbit anti-acetyl-histone 3 (Lys 9/14), anti-acetyl-histone 4 (Lys 8), and anti-Ki-67 antibodies and anti-rabbit antibodies conjugated to HRP were purchased from Santa Cruz Biotechnology and Dako, respectively. Negative controls were prepared by omitting the primary antibody. Terminal digoxigenin-labeled dUTP nick-end labeling (TUNEL) Cell apoptosis was assessed using TUNEL, a method based on the specific binding of TdT to the 3'-OH ends of DNA. For this purpose, the ApopTag Plus Peroxidase In Situ Apoptosis Detection Kit (Chemicon) was employed according to the manufacturer's recommendations. Omission of the working-strength TdT enzyme served as a negative control. Statistical analysis Statistical evaluation was performed using Spearman's rank correlation coefficient to analyze ranked data and the Mann-Whitney U test to compare different groups. A p-value < 0.05 was considered statistically significant. SPSS 10.0 software was employed to analyze all data.
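As a worked example of two quantification steps described above, the sketch below computes the caliper-based tumor volume (width^2 × length × 0.52) and a relative mRNA level. The 2^-ΔΔCt formula with GAPDH as the internal control is the standard readout for SYBR-based real-time RT-PCR, but the paper does not spell out its quantification formula, so that function, like all names and numbers here, is an illustrative assumption.

```python
def tumor_volume(width_mm: float, length_mm: float) -> float:
    # Ellipsoid approximation for caliper measurements: V = width^2 * length * 0.52 (mm^3).
    return width_mm ** 2 * length_mm * 0.52

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    # Relative mRNA level by the 2^-ddCt method, GAPDH as internal control (assumed method).
    d_ct_treated = ct_target - ct_gapdh
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Example: a 6 x 9 mm xenograft and a hypothetical pair of Ct readings.
print(tumor_volume(6.0, 9.0))                       # ~168.5 mm^3
print(relative_expression(24.1, 18.0, 22.3, 18.1))  # fold change vs control
```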
2018-04-03T02:21:23.225Z
2016-07-18T00:00:00.000
{ "year": 2016, "sha1": "8192c7ed249165005f1c93f2cd18de9569a0d8db", "oa_license": "CCBY", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=10643&path[]=34683", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8192c7ed249165005f1c93f2cd18de9569a0d8db", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
1902939
pes2o/s2orc
v3-fos-license
Sarcoidosis and multiple sclerosis: systemic toxicity associated with the use of interferon-beta therapy A 35-year-old Caucasian male with no previous illnesses was admitted to the hospital complaining of hyposthenia of the left side of the body and the slow, progressive onset of paresthesias extending from the big toe to the lateral cervical and retroauricular region. On physical examination, no cranial nerve involvement was found; there was slight pronation on the Barré test on the left, a left prevalence of osteotendinous reflexes with extension of the reflexogenic area, and tactile-sensory hypoaesthesia of the left side of the body. The Magnetic Resonance (MR) image was consistent with a demyelinating process, and the compatible evoked potential findings and the presence of IgG in the cerebrospinal fluid all supported a diagnosis of multiple sclerosis (MS). The clinical and instrumental evaluation of the chest and abdomen at onset and at subsequent visits did not reveal any abnormality suggestive of a differential diagnosis. Continuous therapy with systemic corticosteroids did not control the disease, and the patient complained of a further episode of sensory reactivation six months later. The MR and clinical McDonald criteria for MS were met (multiple clinical attacks and MR showing typical lesion dissemination in space). Interferon-1β (IFN-1β) therapy was thus initiated (22 mcg sc tiw), resulting in clinical remission of the symptoms. This therapy was suspended 2 months later as the patient and his partner wanted to have a baby and were concerned about the possible effects taking this medication might have on a pregnancy. Four months later, an episode of optic neuritis was classified as a further MS reactivation, and treatment with IFN-1β 22 mcg tiw was therefore resumed. Six years later, mild dyspnoea developed gradually, with dry cough and transient pain in the right upper abdomen exacerbated by deep breathing. There was no fever, weight loss, or night sweats; blood tests revealed increased serum aminotransferase levels, hypoalbuminemia, and hypergammaglobulinemia; and urine analysis revealed marked hypercalciuria. Abdominal ultrasonography showed hepatosplenomegaly with multiple hypodense lesions. A chest X-ray showed bilateral hilar and right paratracheal adenopathy and a diffuse reticulonodular disease pattern in the lungs. A CT scan evaluation and a total-body 18F-FDG-PET confirmed the systemic spread of the disease, with the enlargement of most of the abdominal and mediastinal lymph node stations, the involvement of the entire spleen (figure 1) and liver, and of the upper zones of the lung parenchyma, with a micronodular pattern. A bronchoscopy was performed to obtain bronchial and transbronchial biopsies, which demonstrated the presence of the non-caseating granulomas typical of sarcoidosis (figure 2). The patient was thought to have a systemic toxicity due to IFN-1β, which was therefore suspended.
Due to the persistence of the disease after three months of follow-up, treatment with prednisone and hydroxychloroquine was initiated. Hematologic laboratory exams improved, and a marked improvement in the ultrasonography findings was noted after 1 month. The described treatment led to the complete resolution of the granulomatous disease; sarcoidosis has not recurred over the last 24 months, although neurologic, clinical, and MR assessments have consistently shown an unaltered MS scenario. Discussion Sarcoidosis is a disease of unknown cause whose diagnosis is made based on histological and radiological findings and when its typical clinical manifestations are present, although these may vary and may even be absent in half of cases at diagnosis [1]. Several pathophysiological mechanisms have been proposed to explain the lung damage produced by IFN, focusing on its known immuno-modulatory activity [2]. The spectrum of lung tissue damage associated with the use of IFNs is very broad. The association of sarcoidosis with IFN-α treatment for HCV infection is well described [3]. The possibility that IFN-1β causes sarcoidosis was described long ago. To our knowledge, two cases of pulmonary sarcoidosis have been described in the course of IFN-β therapy, one following interferon therapy for advanced renal cell carcinoma and the other during interferon-beta 1 therapy for multiple myeloma [4,5]. Symptoms of IFN-1β-induced sarcoidosis were subtle at diagnosis in those cases of limited pulmonary sarcoidosis, as they were in our case of chronic multisystemic disease. The available literature reports the average duration of IFN therapy at the time of the diagnosis of sarcoidosis as 34 weeks (range 2-168 weeks); the cases described are mostly related to the use of recombinant IFN-α2b at a dose of 3 MIU subcutaneously (sc) tiw [6]. In our patient, instead, sarcoidosis appeared 288 weeks after the initiation of a dose of 22 mcg of IFN-1β sc tiw. Historically, the medical literature has addressed the misdiagnosis of MS by referring primarily to other inflammatory processes, with neurosarcoidosis (NS) being the most commonly encountered disease initially diagnosed as MS. Classic isolated neurologic syndromes typically occurring as the first manifestations of MS are also common in NS, as are cerebrospinal fluid abnormalities like oligoclonal bands and elevated IgG concentration. Further, the McDonald MR criteria for lesion dissemination in space are of little help in distinguishing between MS and NS [7]. Neurologists tend to doubt the coexistence of MS and systemic sarcoidosis, changing their diagnosis to NS whenever a biopsy proving systemic sarcoidosis is available. In our case, we emphasise the possibility that another confounding element may be represented by the treatment itself, which can cause sarcoidosis as a chronic multisystemic complication of a therapy for MS.
We performed three months of follow-up after the diagnosis of systemic sarcoidosis and the withdrawal of IFN-1β therapy without a spontaneous remission of the disease. This, along with the longer latency observed in our case, goes against the clinical logic that the sarcoidosis was pharmacologically induced, as pulmonary toxicities usually disappear spontaneously after the withdrawal of the responsible drug. On the other hand, the complete regression of the disease with steroids, without recurrences, led us to conclude that what we observed was not an idiopathic process but a disease secondary to drug toxicity. Repeated clinical and MR assessments showing an unaltered MS scenario confirmed our hypothesis of chronic multisystemic sarcoidosis as a complication of interferon treatment for MS. In conclusion, the case described here demonstrates that chronic multisystemic sarcoidosis can be a complication of interferon-based treatment for MS and that collaboration between pulmonologists and neurologists is thus crucial. Moreover, it is well known that NS is often indistinguishable from MS at presentation. Ours is the first report of a multi-organ sarcoidosis that we concluded was associated with interferon-beta treatment. As the symptoms of sarcoidosis are subtle at diagnosis and there is a wide variety of possible manifestations, with extrapulmonary involvement more likely when the disease is drug-induced, this clinical possibility must be taken into account. ABSTRACT: Sarcoidosis and multiple sclerosis: systemic toxicity associated with the use of interferon-beta therapy. C. Carbonelli, S. Montepietra, A. Caruso, A. Cavazza, C. Feo, F. Menzella, L. Motti, L. Zucchi. Sarcoidosis is a multi-systemic inflammatory disease of unknown origin characterized by the presence of non-caseating epithelioid cell granulomas in multiple organs. Diagnosis is made on the basis of a compatible clinical-radiological scenario and the histological demonstration of the typical granulomas in the affected tissues. Interferons are immuno-modulators that have been used in a wide range of diseases, including hepatitis C virus infection, multiple sclerosis, multiple myeloma, and other types of tumours, including leukemia, lymphomas, Kaposi's sarcoma, and melanoma. Interferon-α-induced sarcoidosis has been reported repeatedly, and there are two reports in the literature of cases of pulmonary sarcoidosis arising during interferon-1b therapy: one for advanced renal cell carcinoma and the other for multiple myeloma. A 35-year-old man on chronic immuno-modulant interferon-1b-based therapy for multiple sclerosis presented to the Neurology Unit with mild dyspnoea, dry cough, and transient pain in the right upper abdomen. The lungs, spleen, liver, and almost all lymph node stations of the abdomen and mediastinum were clearly involved on ultrasound examination, chest X-ray, and computed tomography. A transbronchial biopsy showed non-caseating granulomas on histopathologic evaluation of the lungs. To the best of our knowledge, this is the first report of a chronic multisystemic sarcoidosis associated with interferon-beta treatment. Monaldi Arch Chest Dis 2012; 77: 1, 29-31.
2018-04-03T03:18:51.125Z
2012-03-01T00:00:00.000
{ "year": 2012, "sha1": "e89d17df070c0a27ed942102930fb0b0eb2bbc45", "oa_license": "CCBYNC", "oa_url": "https://monaldi-archives.org/index.php/macd/article/download/165/153", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e89d17df070c0a27ed942102930fb0b0eb2bbc45", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119705606
pes2o/s2orc
v3-fos-license
Stochastic Homogenisation of Free-Discontinuity Problems In this paper we study the stochastic homogenisation of free-discontinuity functionals. Assuming stationarity for the random volume and surface integrands, we prove the existence of a homogenised random free-discontinuity functional, which is deterministic in the ergodic case. Moreover, by establishing a connection between the deterministic convergence of the functionals at any fixed realisation and the pointwise Subadditive Ergodic Theorem by Akcoglu and Krengel, we characterise the limit volume and surface integrands in terms of asymptotic cell formulas. Introduction In this article we prove a stochastic homogenisation result for sequences of free-discontinuity functionals of the form

E_ε(ω)(u, A) = ∫_A f(ω, x/ε, ∇u) dx + ∫_{S_u ∩ A} g(ω, x/ε, u⁺ − u⁻, ν_u) dH^{n−1}(x),   (1.1)

where f and g are random integrands, ω is the random parameter, and ε > 0 is a small scale parameter. The functionals E_ε are defined in the space SBV(A, R^m) of special R^m-valued functions of bounded variation on the open set A ⊂ R^n. This space was introduced by De Giorgi and Ambrosio in [22] to deal with deterministic problems (e.g., in fracture mechanics, image segmentation, or the study of liquid crystals) where the variable u can have discontinuities on a hypersurface which is not known a priori, hence the name free-discontinuity functionals [21]. In (1.1), S_u denotes the discontinuity set of u, u⁺ and u⁻ are the "traces" of u on both sides of S_u, ν_u denotes the (generalised) normal to S_u, and ∇u denotes the approximate differential of u. Our main result is that, in the macroscopic limit ε → 0, the functionals E_ε homogenise to a stochastic free-discontinuity functional of the same form, under the assumption that f and g are stationary with respect to ω, and that each of the realisations f(ω, ·, ·) and g(ω, ·, ·, ·) satisfies the hypotheses considered in the deterministic case studied in [16] (see Section 3 for details). Moreover, we show that under the additional assumption of ergodicity of f and g the homogenised limit of E_ε is deterministic. Therefore, our qualitative homogenisation result extends to the SBV-setting the classical qualitative results by Papanicolaou and Varadhan [30,31], Kozlov [27], and Dal Maso and Modica [17,18], which were formulated in the more regular Sobolev setting. 1.1. A brief literature review. The study of variational limits of random free-discontinuity functionals is very much in its infancy. To date, the only available results are limited to the special case of discrete energies of spin systems [2,14], where the authors consider purely surface integrals, and u is defined on a discrete lattice and takes values in {±1}. In the case of volume functionals in Sobolev spaces, classical qualitative results are provided by the work of Papanicolaou and Varadhan [30,31] and Kozlov [27] in the linear case, and by Dal Maso and Modica [17,18] in the nonlinear setting. The need to develop efficient methods to determine the homogenised coefficients and to estimate the error in the homogenisation approximation has recently motivated an intense effort to build a quantitative theory of stochastic homogenisation in the regular Sobolev case. The first results in this direction are due to Gloria and Otto in the discrete setting [25,26]. In the continuous setting, quantitative estimates for the convergence results are given by Armstrong and Smart [8], who also study the regularity of the minimisers, and by Armstrong, Kuusi, and Mourrat [5,6].
We also mention [7], where Armstrong and Mourrat give Lipschitz regularity for the solutions of elliptic equations with random coefficients, by directly studying certain functionals that are minimised by the solutions. The mathematical theory of deterministic homogenisation of free-discontinuity problems is well established. When f and g are periodic in the spatial variable, the limit behaviour of E_ε can be determined by classical homogenisation theory. In this case, under mild assumptions on f and g, the deterministic functionals E_ε behave macroscopically like a homogeneous free-discontinuity functional. If, in addition, the integrands f and g satisfy some standard growth and coercivity conditions, the limit behaviour of E_ε is given by the simple superposition of the limit behaviours of its volume and surface parts (see [13]). This is, however, not always the case if f and g satisfy "degenerate" coercivity conditions. Indeed, while in [10,15,24] the two terms in E_ε do not interact, in [9,20,11,32,33] they do interact and produce rather complex limit effects. The study of the deterministic homogenisation of free-discontinuity functionals without any periodicity condition, and under general assumptions ensuring that the volume and surface terms do not "mix" in the limit, has been recently carried out in [16]. 1.2. Stationary random integrands. Before giving the precise statement of our results, we need to recall some definitions. The random environment is modelled by a probability space (Ω, T, P) endowed with a group τ = (τ_z)_{z∈Z^n} of T-measurable P-preserving transformations on Ω. That is, the action of τ on Ω satisfies P(τ(E)) = P(E) for every E ∈ T. We say that f : Ω × R^n × R^{m×n} → [0, +∞) and g : Ω × R^n × (R^m \ {0}) × S^{n−1} → [0, +∞) are stationary random volume and surface integrands if they satisfy the assumptions introduced in the deterministic work [16] (see Section 3 for the complete list of assumptions) for every realisation, and the following stationarity condition with respect to τ:

f(ω, x + z, ξ) = f(τ_z(ω), x, ξ)   and   g(ω, x + z, ζ, ν) = g(τ_z(ω), x, ζ, ν)

for every ω ∈ Ω, z ∈ Z^n, x ∈ R^n, ξ ∈ R^{m×n}, ζ ∈ R^m \ {0}, and ν ∈ S^{n−1}. When, in addition, τ is ergodic, namely when any τ-invariant set E ∈ T has probability zero or one, we say that f and g are ergodic. 1.3. The main result: Method of proof and comparison with previous works. Under the assumption that f and g are stationary random integrands, we prove the convergence of E_ε to a random homogenised functional E_hom (Theorem 3.13), and we provide representation formulas for the limit volume and surface integrands (Theorem 3.12). The combination of these two results shows, in particular, that the limit functional E_hom is a free-discontinuity functional of the same form as E_ε. If, in addition, f and g are ergodic, we show that E_hom is deterministic. Our method of proof consists of two main steps: a purely deterministic step and a stochastic one, in the spirit of the strategy introduced in [18] for integral functionals of volume type defined on Sobolev spaces. In the deterministic step we fix ω ∈ Ω and we study the asymptotic behaviour of E_ε(ω). Our recent result [16, Theorem 3.8] ensures that E_ε(ω) converges (in the sense of Γ-convergence) to a free-discontinuity functional of the form

E_hom(ω)(u, A) = ∫_A f_hom(ω, x, ∇u) dx + ∫_{S_u ∩ A} g_hom(ω, x, u⁺ − u⁻, ν_u) dH^{n−1}(x),

with volume and surface integrands given by the asymptotic cell formulas

f_hom(ω, x, ξ) = lim_{r→+∞} (1/r^n) m^f_ω(ℓ_ξ, Q_r(rx)),   (1.2)

g_hom(ω, x, ζ, ν) = lim_{r→+∞} (1/r^{n−1}) m^g_ω(u_{rx,ζ,ν}, Q^ν_r(rx)),   (1.3)

where

m^f_ω(ℓ_ξ, Q_r(rx)) := inf { ∫_{Q_r(rx)} f(ω, y, ∇u) dy : u ∈ W^{1,p}(Q_r(rx), R^m), u = ℓ_ξ near ∂Q_r(rx) }   (1.4)

denotes the infimum of the volume energy among Sobolev functions attaining the linear boundary datum ℓ_ξ(y) := ξy near ∂Q_r(rx), and

m^g_ω(u_{rx,ζ,ν}, Q^ν_r(rx)) := inf { ∫_{S_u ∩ Q^ν_r(rx)} g(ω, y, u⁺ − u⁻, ν_u) dH^{n−1} : u ∈ SBV_pc(Q^ν_r(rx), R^m), u = u_{rx,ζ,ν} near ∂Q^ν_r(rx) }   (1.5)

denotes the infimum of the surface energy among piecewise constant functions (belonging to the space SBV_pc, see (f) in Section 2) attaining a piecewise constant boundary datum near ∂Q^ν_r(rx) (see (1.5)). Here u_{rx,ζ,ν} is the function taking the value ζ on the side of the hyperplane through rx orthogonal to ν where (y − rx) · ν ≥ 0, and the value 0 on the other side, and Q^ν_r(rx) is obtained by rotating Q_r(rx) in such a way that one face is perpendicular to ν. In the stochastic step we prove that the limits (1.2) and (1.3) exist almost surely and are independent of x.
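For orientation, the following is a paraphrase of the structure such an application requires, under our reading of Definition 3.10 and of the Akcoglu-Krengel theorem; the paper's precise statement may differ in detail.

```latex
% Paraphrase of the subadditive-process setting (not the paper's verbatim statement).
% A set function $\mu\colon \Omega\times\mathcal{I}_n \to [0,+\infty)$ is a subadditive
% stochastic process if
\begin{align*}
  &\mu(\omega)(I) \le \sum_{j} \mu(\omega)(I_j)
    && \text{whenever } I=\textstyle\bigcup_j I_j,\ I_j\in\mathcal{I}_n \text{ pairwise disjoint,}\\
  &\mu(\tau_z(\omega))(I) = \mu(\omega)(I+z)
    && \text{for every } z\in\mathbb{Z}^n \text{ (stationarity),}\\
  &\sup_{I\in\mathcal{I}_n} \frac{\mathbb{E}\,[\mu(\cdot)(I)]}{\mathcal{L}^n(I)} < +\infty
    && \text{(uniform $L^1$ bound).}
\end{align*}
% Under these conditions the theorem gives, for every $I\in\mathcal{I}_n$,
\[
  \lim_{t\to+\infty} \frac{\mu(\omega)(tI)}{t^{n}\,\mathcal{L}^{n}(I)} = \varphi(\omega)
  \quad\text{for $P$-a.e.\ } \omega\in\Omega,
\]
% with $\varphi$ $\tau$-invariant, hence $P$-a.e.\ constant when $\tau$ is ergodic.
```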
To this end, it is crucial to show that we can apply the pointwise Subadditive Ergodic Theorem by Akcoglu and Krengel [1]. Since our convergence result [16] ensures that there is no interaction between the volume and surface terms in the limit, we can treat them separately.

More precisely, for the volume term, proceeding as in [18] (see also [29]), one can show that the map which associates with (ω, I) the value of the infimum (1.4), computed on an n-dimensional interval I, defines a subadditive stochastic process for every fixed ξ ∈ R^{m×n} (see Definition 3.10). Then the almost sure existence of the limit (1.2) and its independence of x directly follow from the n-dimensional pointwise Subadditive Ergodic Theorem, which also ensures that the limit is deterministic if f is ergodic.

For the surface term, however, even applying this general programme presents several difficulties. One of the obstacles is due to a nontrivial "mismatch" of dimensions: on the one hand the minimisation problem appearing in (1.3) is defined on the n-dimensional set Q^ν_r(rx); on the other hand the integration is performed on the (n−1)-dimensional set S_u ∩ Q^ν_r(rx), and the integral rescales in r like a surface measure. In other words, the surface term is an (n−1)-dimensional measure which is naturally defined on n-dimensional sets. Understanding how to match these different dimensions is a key preliminary step to define a suitable subadditive stochastic process for the application of the Subadditive Ergodic Theorem in dimension n − 1.

To this end we first set x = 0. We want to consider the infimum in (1.5) as a function of (ω, I), where I belongs to the class I_{n−1} of (n−1)-dimensional intervals (see (3.9)). To do so, we define a systematic way to "complete" the missing dimension and to rotate the resulting n-dimensional interval. For this we proceed as in [2], where the authors had to face a similar problem in the study of pure surface energies of spin systems. Once this preliminary problem is overcome, we prove in Proposition 5.2 that the infimum in (1.5), with x = 0 and ν with rational coordinates, is related to an (n−1)-dimensional subadditive stochastic process μ^{ζ,ν} on Ω × I_{n−1} with respect to a suitable group (τ^ν_{z′})_{z′∈Z^{n−1}} of P-preserving transformations.

A key difficulty in the proof is to establish the measurability in ω of the infimum (1.5). Note that this is clearly not an issue in the case of volume integrals considered in [17, 18]: the infimum in (1.4) is computed on a separable space, so it can be taken over a countable set of functions, and hence the measurability of the process follows directly from the measurability of f. This is not an issue for the surface energies considered in [2] either: since the problem is studied in a discrete lattice, the minimisation is reduced to a countable collection of functions. The infimum in (1.5), instead, cannot be reduced to a countable set, hence the proof of measurability is not straightforward (see Proposition A.1 in the Appendix).

The next step is to apply the (n−1)-dimensional Subadditive Ergodic Theorem to the subadditive stochastic process μ^{ζ,ν}, for fixed ζ and ν. This ensures that the limit

$$g_{\zeta,\nu}(\omega) := \lim_{t\to+\infty} \frac{\mu^{\zeta,\nu}(\omega)(tI)}{t^{n-1}\,\mathcal{L}^{n-1}(I)} \qquad (1.6)$$

exists for P-a.e. ω ∈ Ω and does not depend on I. The fact that the limit in (1.6) exists in a set of full measure, common to every ζ and ν, requires some attention (see Proposition 5.1), and follows from the continuity properties in ζ and ν of some auxiliary functions (see (5.9) and (5.10) in Lemma 5.4).
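Since the subadditive-process structure is used repeatedly below, we sketch the standard requirements of the Akcoglu-Krengel framework in dimension k; the precise integrability and constant conventions are those of [1] and Definition 3.10, so the following should be read as an informal summary rather than the exact statement.

```latex
% Sketch of the defining properties of a k-dimensional subadditive
% stochastic process \mu on \Omega \times \mathcal{I}_k (cf. Definition 3.10 and [1]).
\begin{align*}
&\text{(subadditivity)} &&
\mu(\omega)(I) \le \sum_{j=1}^{N} \mu(\omega)(I_j)
\quad\text{for every finite partition } \{I_j\}_{j=1}^{N} \subset \mathcal{I}_k \text{ of } I,\\[2pt]
&\text{(covariance)} &&
\mu(\omega)(I+z) = \mu(\tau_z(\omega))(I)
\quad\text{for every } z \in \mathbb{Z}^k,\\[2pt]
&\text{(boundedness)} &&
\sup\Big\{\tfrac{1}{\mathcal{L}^k(I)}\int_{\Omega} \mu(\omega)(I)\,dP(\omega)
\;:\; I \in \mathcal{I}_k\Big\} < +\infty .
\end{align*}
```

Under these conditions the theorem yields, for P-a.e. ω, the existence of the limit of μ(ω)(tI)/(t^k L^k(I)) as t → +∞, which is exactly the structure exploited in (1.6).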
As a final step, we need to show that the limit in (1.3) is independent of x, namely that the choice x = 0 is not restrictive. We remark that the analogous result for (1.2) follows directly by Γ-convergence and by the Subadditive Ergodic Theorem (see also [18]). The surface case, however, is more subtle, since the minimisation problem in (1.5) depends on x also through the boundary datum u_{rx,ζ,ν}. To prove the x-independence of g_hom we proceed in three steps. First, we exploit the stationarity of g to show that (1.6) is τ-invariant. Then, we prove the result when x ∈ Z^n, by combining the Subadditive Ergodic Theorem and the Birkhoff Ergodic Theorem, in the spirit of [2, Proof of Theorem 5.5] (see also [14, Proposition 2.10]). Finally, we conclude the proof with a careful approximation argument.

1.4. Outline of the paper. The paper is organised as follows. In Section 2 we introduce some notation used throughout the paper. In the first part of Section 3 we state the assumptions on f and g and we introduce the stochastic setting of the problem; the second part is devoted to the statement of the main results of the paper. The behaviour of the volume term is studied in the short Section 4, while Sections 5 and 6, as well as the Appendix, deal with the surface term.

Notation

We now introduce some notation that will be used throughout the paper. For the convenience of the reader we follow the ordering used in [16].

(a) m and n are fixed positive integers, with n ≥ 2; R is the set of real numbers, and R^m_0 := R^m \ {0}, while Q is the set of rational numbers and Q^m_0 := Q^m \ {0}. The canonical basis of R^n is denoted by e_1, ..., e_n. For a, b ∈ R^n, a · b denotes the Euclidean scalar product between a and b, and | · | denotes the absolute value in R or the Euclidean norm in R^n, R^m, or R^{m×n}, depending on the context.

(g) For A ∈ A and p > 1 we define SBV^p(A, R^m) := {u ∈ SBV(A, R^m) : ∇u ∈ L^p(A, R^{m×n}) and H^{n−1}(S_u) < +∞}.

(h) For A ∈ A and p > 1 we define GSBV^p(A, R^m) := {u ∈ GSBV(A, R^m) : ∇u ∈ L^p(A, R^{m×n}) and H^{n−1}(S_u) < +∞}; it is known that GSBV^p(A, R^m) is a vector space and that for every u ∈ GSBV^p(A, R^m) and every ψ ∈ C^1_c(R^m, R^m) the composition ψ(u) belongs to SBV^p(A, R^m) ∩ L^∞(A, R^m) (see [19, page 172]).

(i) For every L^n-measurable set A ⊂ R^n let L^0(A, R^m) be the space of all (L^n-equivalence classes of) L^n-measurable functions u : A → R^m, endowed with the topology of convergence in measure on bounded subsets of A; we observe that this topology is metrisable and separable.

(j) For x ∈ R^n and ρ > 0 we define the open cube Q_ρ(x) := x + ρ(−1/2, 1/2)^n. We omit the subscript ρ when ρ = 1.

(k) For every ν ∈ S^{n−1} let R_ν be an orthogonal n×n matrix such that R_ν e_n = ν; we assume that the restrictions of the function ν ↦ R_ν to the sets S^{n−1}_± defined in (b) are continuous, and that R_{−ν} Q(0) = R_ν Q(0) for every ν ∈ S^{n−1}; moreover, we assume that R_ν ∈ O(n) ∩ Q^{n×n} for every ν ∈ Q^n ∩ S^{n−1}. A map ν ↦ R_ν satisfying these properties is provided in [16, Example A.1 and Remark A.2].

(l) For x ∈ R^n, ρ > 0, and ν ∈ S^{n−1} we set Q^ν_ρ(x) := R_ν Q_ρ(0) + x.

(o) For x ∈ R^n and ν ∈ S^{n−1}, we set Π^ν_0 := {y ∈ R^n : y · ν = 0} and Π^ν_x := {y ∈ R^n : (y − x) · ν = 0}.

(p) For a given topological space X, B(X) denotes the Borel σ-algebra on X. In particular, for every integer k ≥ 1, B^k is the Borel σ-algebra on R^k, while B^n_S stands for the Borel σ-algebra on S^{n−1}.

(q) For every t ∈ R the integer part of t is denoted by ⌊t⌋; i.e., ⌊t⌋ is the largest integer less than or equal to t.

Setting of the problem and statements of the main results

This section consists of two parts: in Section 3.1 we introduce the stochastic free-discontinuity functionals and recall the Subadditive Ergodic Theorem; in Section 3.2 we state the main results of the paper.
Given f ∈ F and g ∈ G, we consider the integral functionals F and G defined by

$$F(u, A) := \int_{A} f(x, \nabla u)\,dx, \qquad (3.1)$$

$$G(u, A) := \int_{S_u \cap A} g(x, u^{+} - u^{-}, \nu_u)\,d\mathcal{H}^{n-1}. \qquad (3.2)$$

Since the pair (u^+ − u^−, ν_u) is reversed when the orientation of ν_u is reversed, the functional G is well defined thanks to (g7). Moreover, for G as in (3.2), and w ∈ L^0(R^n, R^m) with w|_A ∈ SBV_pc(A, R^m), we set

$$m^{pc}_{G}(w, A) := \inf\big\{ G(u, A) \;:\; u \in L^0(R^n, R^m),\ u|_A \in SBV_{pc}(A, R^m),\ u = w \text{ near } \partial A \big\}. \qquad (3.4)$$

In (3.3) and (3.4), by "u = w near ∂A" we mean that there exists a neighbourhood U of ∂A such that u = w L^n-a.e. in U. As a consequence we may readily deduce the following: the fact that the constraint u = w near ∂A is not restrictive follows from assumption (g6), by using w as a competitor in the minimisation problem.

We are now ready to introduce the probabilistic setting of our problem. In what follows (Ω, T, P) denotes a fixed probability space. Let f be a random volume integrand. For ω ∈ Ω the integral functional F(ω) is defined as in (3.1), with f(·, ·) replaced by f(ω, ·, ·); similarly, for a random surface integrand g, the functional G(ω) is defined as in (3.2), with g(·, ·, ·) replaced by g(ω, ·, ·, ·). Finally, for every ε > 0 we consider the free-discontinuity functional E_ε(ω) obtained by evaluating (1.1) at the given realisation ω. (3.7)

In the study of stochastic homogenisation an important role is played by the notions introduced in the following definitions.

Definition 3.6 (P-preserving transformation). A P-preserving transformation on (Ω, T, P) is a map T : Ω → Ω satisfying the following properties: T is T-measurable and bijective, and P(T(E)) = P(E) for every E ∈ T. If, in addition, every set E ∈ T which satisfies T(E) = E (called a T-invariant set) has probability 0 or 1, then T is called ergodic.

Definition 3.7 (Group of P-preserving transformations). Let k be a positive integer. A group of P-preserving transformations on (Ω, T, P) is a family (τ_z)_{z∈Z^k} of mappings τ_z : Ω → Ω satisfying the following properties: (measurability and bijectivity) each τ_z is T-measurable and bijective; (invariance) P(τ_z(E)) = P(E), for every E ∈ T and every z ∈ Z^k; (group property) τ_0 = id_Ω (the identity map on Ω) and τ_{z+z′} = τ_z ∘ τ_{z′} for every z, z′ ∈ Z^k. If, in addition, every set E ∈ T which satisfies τ_z(E) = E for every z ∈ Z^k has probability 0 or 1, then (τ_z)_{z∈Z^k} is called ergodic.

Remark 3.8. In the case k = 1 a group of P-preserving transformations has the form (T^z)_{z∈Z}, where T := τ_1 is a P-preserving transformation.

We are now in a position to define the notion of stationary random integrand.

Definition 3.9 (Stationary random integrand). A random volume integrand f is stationary with respect to a group (τ_z)_{z∈Z^n} of P-preserving transformations on (Ω, T, P) if

$$f(\tau_z(\omega), x, \xi) = f(\omega, x + z, \xi)$$

for every ω ∈ Ω, x ∈ R^n, z ∈ Z^n, and ξ ∈ R^{m×n}. Similarly, a random surface integrand g is stationary with respect to (τ_z)_{z∈Z^n} if

$$g(\tau_z(\omega), x, \zeta, \nu) = g(\omega, x + z, \zeta, \nu)$$

for every ω ∈ Ω, x ∈ R^n, z ∈ Z^n, ζ ∈ R^m_0, and ν ∈ S^{n−1}.

We now recall the notion of subadditive stochastic process, together with the Subadditive Ergodic Theorem of Akcoglu and Krengel [1, Theorem 2.7]; we state a variant of the pointwise ergodic theorem [1, Theorem 2.7 and Remark p. 59] which is suitable for our purposes (see, e.g., [18, Proposition 1]).

3.2. Statement of the main results. In this section we state the main result of the paper, Theorem 3.13, which provides a Γ-convergence and integral representation result for the random functionals (E_ε(ω))_{ε>0} introduced in (3.7), under the assumption that the volume and surface integrands f and g are stationary. The volume and surface integrands of the Γ-limit are given in terms of separate asymptotic cell formulas, showing that there is no interaction between volume and surface densities under stochastic Γ-convergence. The next theorem proves the existence of the limits in the asymptotic cell formulas that will be used in the statement of the main result. The proof will be given in Sections 4-6. Thanks to Theorem 3.13 we can also characterise the asymptotic behaviour of some minimisation problems involving E_ε(ω). An example is shown in the corollary below.

Corollary 3.14 (Convergence of minimisation problems).
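As a concrete illustration of Definition 3.9, one may keep in mind the following "random checkerboard" volume integrand; this is our own toy example, not taken from the paper, with i.i.d. coefficients a_z on the unit cells of Z^n and a shift action chosen to make stationarity transparent.

```latex
% Hypothetical stationary (and ergodic) random volume integrand:
% i.i.d. coefficients a_z(\omega) \in [\alpha,\beta], 0 < \alpha \le \beta < +\infty,
% indexed by z \in \mathbb{Z}^n; the group (\tau_z) shifts the coefficient field.
f(\omega, x, \xi) := a_{\lfloor x \rfloor}(\omega)\,|\xi|^{p},
\qquad
a_{z'}(\tau_z(\omega)) = a_{z'+z}(\omega).
```

Here ⌊x⌋ denotes the componentwise integer part, so f(τ_z(ω), x, ξ) = a_{⌊x⌋+z}(ω)|ξ|^p = f(ω, x + z, ξ), which is precisely the stationarity condition; independence of the coefficients makes the shift group ergodic.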
Let f and g be stationary random volume and surface integrands with respect to a group (τ_z)_{z∈Z^n} of P-preserving transformations on (Ω, T, P), and let Ω′ ∈ T (with P(Ω′) = 1), f_hom, and g_hom be as in Theorem 3.12. Let ω ∈ Ω′, A ∈ A, h ∈ L^p(A, R^m), and let (u_ε)_{ε>0} ⊂ GSBV^p(A, R^m) ∩ L^p(A, R^m) be a sequence such that

Proof. The proof follows from Theorem 3.13, arguing as in the proof of [16, Corollary 6.1].

Proof of the cell formula for the volume integrand

In this section we prove (3.10).

We can now give the proof of Proposition 4.1.

Proof of Proposition 4.1. The existence of f_hom and its independence of x follow from Proposition 4.2 and [18, Theorem 1] (see also [29, Corollary 3.3]). The fact that f_hom is a random volume integrand can be shown arguing as in [16, Lemma A.5 and Lemma A.6], and this concludes the proof.

Proof of the cell formula for the surface integrand: a special case

This section is devoted to the proof of (3.11) in the special case x = 0. Namely, we prove the following result.

Theorem 5.1. Let g be a stationary random surface integrand with respect to a group (τ_z)_{z∈Z^n} of P-preserving transformations on (Ω, T, P). Then there exist Ω̃ ∈ T, with P(Ω̃) = 1, and a random surface integrand g_hom : Ω × R^m_0 × S^{n−1} → R such that

$$g_{hom}(\omega, \zeta, \nu) = \lim_{r\to+\infty} \frac{1}{r^{n-1}}\, m^{pc}_{G(\omega)}(u_{0,\zeta,\nu}, Q^{\nu}_{r}(0)) \qquad (5.1)$$

for every ω ∈ Ω̃, ζ ∈ R^m_0, and ν ∈ S^{n−1}.

The proof of Theorem 5.1 needs several preliminary results. A key ingredient is the application of the Ergodic Theorem 3.11 with k = n − 1. This is a nontrivial task, since it requires us to define an (n−1)-dimensional subadditive process starting from the n-dimensional set function A ↦ m^pc_{G(ω)}(u_{0,ζ,ν}, A). To this end, we now illustrate a systematic way to transform (n−1)-dimensional intervals (see (3.9)) into n-dimensional intervals oriented along a prescribed direction ν ∈ S^{n−1}. Let A′ ∈ I_{n−1}; we define the (rotated) n-dimensional interval T_ν(A′) by completing A′ with the missing dimension and rotating the resulting n-dimensional interval by R_ν.

The next proposition is the analogue of Proposition 4.2 for the surface energy, and will be crucial in the proof of Theorem 5.1. To conclude the proof of Proposition 5.1 we need two preliminary lemmas. Then both auxiliary integrands belong to G. Proof. It is enough to adapt the proof of [16, Lemma A.7]. We will also need the following result. Proof. The proof of (g2) can be obtained by adapting the proof of [16, Lemma A.7]. An analogous argument, now using the cube Q, yields the continuity of g(x, ζ, ·) on S^{n−1}_+. The proof of the continuity on S^{n−1}_−, as well as that of the continuity of the second auxiliary integrand, is similar. We are now ready to prove Theorem 5.1.

Therefore the T-measurability of the function ω ↦ g(ω, ζ, ν) on Ω for every ζ ∈ R^m_0 and ν ∈ S^{n−1} implies that the restriction of g to Ω × R^m_0 × S^{n−1}_± is measurable with respect to the σ-algebra induced on Ω × R^m_0 × S^{n−1}_± by T ⊗ B^m ⊗ B^n_S. This implies the (T ⊗ B^m ⊗ B^n_S)-measurability of g_hom on Ω × R^m_0 × S^{n−1}, thus showing that g_hom satisfies property (c) of Definition 3.5. Note now that for every ω ∈ Ω the function (ζ, ν) ↦ g_hom(ω, ζ, ν) defined in (5.17) belongs to the class G. Indeed, for ω ∈ Ω̃ this follows from Lemma 5.3, while for ω ∈ Ω \ Ω̃ this follows from the definition of g_hom. Thus, g_hom satisfies property (d) of Definition 3.5, and this concludes the proof.

6. Proof of the formula for the surface integrand: the general case

In this section we extend Theorem 5.1 to the case of arbitrary x ∈ R^n, thus concluding the proof of (3.11). More precisely, we prove the following result.
We now state some classical results from probability theory, which will be crucial for the proof of Theorem 6.1. For every ψ ∈ L^1(Ω, T, P) and for every σ-algebra T′ ⊂ T, we will denote by E[ψ|T′] the conditional expectation of ψ with respect to T′. This is the unique random variable in L^1(Ω, T′, P) with the property that

$$\int_{E} E[\psi|T']\,dP = \int_{E} \psi\,dP \qquad \text{for every } E \in T'.$$

We start by stating Birkhoff's Ergodic Theorem (for a proof, see, e.g., [28, Theorem 2.1.5]).

Theorem 6.4 (Birkhoff's Ergodic Theorem). Let (Ω, T, P) be a probability space, let T : Ω → Ω be a P-preserving transformation, and let I_P(T) be the σ-algebra of T-invariant sets. Then for every ψ ∈ L^1(Ω, T, P) we have

$$\lim_{N\to+\infty} \frac{1}{N}\sum_{i=0}^{N-1} \psi(T^{i}(\omega)) = E[\psi \,|\, I_P(T)](\omega)$$

for P-a.e. ω ∈ Ω.

We also recall the Conditional Dominated Convergence Theorem, whose proof can be found in [12, Theorem 2.7].

Theorem 6.5 (Conditional Dominated Convergence). Let T′ ⊂ T be a σ-algebra and let (ϕ_k) be a sequence of random variables in (Ω, T, P) converging pointwise P-a.e. in Ω to a random variable ϕ. Suppose that there exists ψ ∈ L^1(Ω, T, P) such that |ϕ_k| ≤ ψ P-a.e. in Ω for every k. Then E[ϕ_k|T′] converges to E[ϕ|T′] pointwise P-a.e. in Ω.

We are now ready to prove the main result of this section. We divide the proof into several steps. We use the notation for the integer part introduced in (q), Section 2. Moreover, if (τ_z)_{z∈Z^n} is ergodic, then by Corollary 6.3 the function g_hom does not depend on ω, and (6.2) can be obtained by integrating (5.1) over Ω and using the Dominated Convergence Theorem, thanks to (5.4).

Appendix. Measurability issues

The main result of this section is the following proposition, which gives the measurability of the function ω ↦ m^pc_{G(ω)}(w, A). This property was crucial in the proof of Proposition 5.2.

Proposition A.1. Let (Ω, T̂, P̂) be the completion of the probability space (Ω, T, P), let g be a stationary random surface integrand, and let A ∈ A. Let G(ω) be as in (3.2), with g replaced by g(ω, ·, ·, ·). Let w ∈ L^0(R^n, R^m) be such that w|_A ∈ SBV_pc(A, R^m) ∩ L^∞(A, R^m), and for every ω ∈ Ω let m^pc_{G(ω)}(w, A) be as in (3.4), with G replaced by G(ω). Then the function ω ↦ m^pc_{G(ω)}(w, A) is T̂-measurable.

The main difficulty in the proof of Proposition A.1 is that, although ω ↦ G(ω)(u, A) is clearly T-measurable, m^pc_{G(ω)}(w, A) is defined as an infimum over an uncountable set. This difficulty is usually solved by means of the Projection Theorem, which requires the completeness of the probability space. It also requires joint measurability in (ω, u) and some topological properties of the space on which the infimum is taken, like separability and metrisability. In our case (see (3.4)) the infimum is taken over the space of all functions u ∈ L^0(R^n, R^m) such that u|_A ∈ SBV_pc(A, R^m) and u = w near ∂A, and it is not easy to find a topology on this space with the above-mentioned properties and such that (ω, u) ↦ G(ω)(u, A) is jointly measurable. Therefore we have to attack the measurability problem in an indirect way, extending (an approximation of) G(ω)(u, A) to a suitable subset of the space of bounded Radon measures, which turns out to be compact and metrisable in the weak* topology. We start by introducing some notation that will be used later. For every A ∈ A we denote by
County-level air quality and the prevalence of diagnosed chronic kidney disease in the US Medicare population

Background: Considerable geographic variation exists in the prevalence of chronic kidney disease across the United States. While some of this variability can be explained by differences in patient-level risk factors, substantial variability still exists. We hypothesize this may be due to understudied environmental exposures such as air pollution.

Methods: Using data on 1.1 million persons from the 2010 5% Medicare sample and Environmental Protection Agency air-quality measures, we examined the association between county-level particulate matter ≤2.5 μm (PM2.5) and the prevalence of diagnosed CKD, based on claims. Modified Poisson regression was used to estimate associations (prevalence ratios [PR]) between county PM2.5 concentration and individual-level diagnosis of CKD, adjusting for age, sex, race/ethnicity, hypertension, diabetes, and urban/rural status.

Results: Prevalence of diagnosed CKD ranged from 0% to 60% by county (median = 16%). As a continuous variable, PM2.5 concentration showed an adjusted PR of diagnosed CKD of 1.03 (95% CI: 1.02-1.05; p<0.001) per 4 μg/m3 increase in PM2.5. Investigation by quartiles showed an elevated prevalence of diagnosed CKD for mean PM2.5 levels ≥14 μg/m3 (highest quartile: PR = 1.05, 95% CI: 1.03-1.07), a level just above the current ambient air quality standard of 12 μg/m3 but much lower than the daily level typically considered healthy for sensitive groups (~40 μg/m3).

Conclusion: A positive association was observed between county-level PM2.5 concentration and diagnosed CKD. The reliance on CKD diagnostic codes likely identified associations with the most severe CKD cases. These results can be strengthened by exploring laboratory-based diagnosis of CKD, individual measures of exposure to multiple pollutants, and more control of confounding.

Introduction

The body of evidence suggesting that long-term exposure to air particles less than 2.5 micrometers in diameter, called fine particulate matter (PM2.5) air pollution, contributes to adverse health outcomes continues to grow. Early work focused on acute exposure to high levels of fine-particle air pollution, which was found to increase overall daily mortality by 7% per 50 μg/m3 increase in PM2.5, and cause-specific mortality by 25%, 11%, and 0.4% for respiratory, cardiovascular, and other causes, respectively [1]. Recently, there has been a growing interest in exploring outcomes from long-term air pollution exposure in high-risk groups, such as those with underlying cardiovascular and metabolic or respiratory disorders [2]. Even more recently, studies examining the possible effects of air pollution on the risk of chronic kidney disease (CKD) have been conducted [3-7]. Some of the first evidence of an association between PM2.5 and kidney disease came from an ecological study of health outcomes in coal mining areas of Appalachia, which found a 19% higher relative risk of CKD among men and a 13% higher relative risk among women in counties with coal production above 4 million tons compared to non-mining counties [3]. Two studies focused on the Boston community and examined estimated glomerular filtration rates (eGFR), a measure of kidney function.
The first examined eGFR in patients hospitalized for acute ischemic stroke and found that individuals living closer to major roadways (<50 m) had an eGFR that was on average 3.9 mL/min/1.73 m2 lower than that of patients living ≥1000 m from a major roadway [4]. The second study was a small longitudinal sample of elderly veterans and showed that individuals exposed to higher levels of ambient air pollution also had lower average estimated glomerular filtration rates and a larger annual decrease in kidney function [5]. The largest studies to date have been conducted using data from the Department of Veterans Affairs, which found increases of 20% or more in the risk of multiple adverse kidney outcomes for every 10 μg/m3 increase in PM2.5, as well as higher rates associated with specific components, including NO2 and CO [6,7].

CKD is a common condition with important long-term health implications that often goes unrecognized until advanced stages or kidney failure [8-10]. CKD currently afflicts about 27 million Americans and significantly elevates the risk of death, cardiovascular disease, end-stage renal disease (ESRD), and other complications [11]. Individuals with CKD are at an 8-10-fold increased risk of cardiovascular mortality compared to those without kidney dysfunction [12]. CKD is typically a progressive disease, with loss of kidney function over time. The rate of function loss is variable and dependent on both treatment and patient factors, including level of proteinuria, older age, diabetes mellitus, blood pressure control, obesity, metabolic syndrome, and family history of kidney disease. Early recognition and treatment of CKD and of the risk factors for CKD may slow progression of the disease [13-15]. Although much attention has been given to treatment of personal CKD risk factors, less has been focused on potential environmental contributors to the development and progression of CKD, despite the higher prevalence of both CKD and air pollution exposure among disadvantaged and minority populations in the United States [12].

Sources of PM2.5 include all types of combustion activities, such as motor vehicle emissions, power plants, and wood burning, as well as common indoor activities, such as smoking, cooking, burning candles or oil lamps, and operating fireplaces and fuel-burning space heaters (e.g., kerosene heaters) [16]. The major components of PM2.5 include ammonium sulfate, ammonium nitrate, organic carbonaceous mass, elemental carbon, and crustal material [16]. Air pollution from these sources can be mitigated; thus, it is important to study its link with CKD.

Several pathophysiologic mechanisms have been proposed to explain the possible causal link between air pollution and adverse cardio-metabolic and respiratory outcomes. Many of these mechanisms are similar to factors known to play a role in the initiation and progression of CKD, including increased sympathetic nervous system activity, activation of the renin-angiotensin-aldosterone system (RAAS), vascular endothelial dysfunction, oxidative stress, inflammation, platelet adhesion and aggregation, insulin resistance, and metabolic dysregulation [17-19]. For example, there is evidence that individuals in areas with high PM2.5 have high levels of sympathetic activity and RAAS activation [20]. These are known contributors to the initiation and progression of CKD, and treatment of individuals with medications that inhibit RAAS has been shown to slow CKD progression.
Studies using experimental mouse models have also demonstrated that air pollution is associated with high levels of oxidative stress and vascular endothelial dysfunction in mice [19]. Experimental studies suggest that treating these conditions can slow CKD progression [20]. Additionally, air pollution is known to contain heavy metals. Lead, mercury, and cadmium are common heavy-metal toxins known to have toxicological kidney effects at high levels. Exposure to some of these metals from the air, even at low levels, could also potentially play a role in CKD progression [21,22]. We postulate that, similar to high-risk individuals with cardiopulmonary disease, individuals with CKD would be particularly susceptible to the effects of air pollution. We therefore conducted an exploratory study to determine whether an association exists between county levels of ambient air pollution and CKD prevalence, controlling for potential confounders, among older adults living in the United States. Evidence of a link between air pollution and kidney disease in this study would support future studies involving individual exposure measures.

Study sample

This study is an analysis of anonymous, secondary data sources and met the University of Michigan's Institutional Review Board standards for "Not Regulated" status. We conducted a cross-sectional study of 1,164,057 adults ≥65 years old enrolled in the U.S. Medicare program in 2010 (Medicare 5% sample). To be included, patients were required to be enrolled in Medicare Parts A and B for the full year, with no health maintenance organization (HMO) coverage. CKD was defined using a large set of ICD-9-CM diagnosis codes indicating CKD, identical to the codes utilized by the United States Renal Data System [23,24]. The full set of ICD-9-CM codes was included in this study to capture all possible mechanisms for an association between PM2.5 and kidney disease. ICD-9-CM codes were also employed to calculate indicators of diabetes and hypertension status and were derived from inpatient and/or outpatient diagnosis claims. This year of Medicare data was chosen to specifically align with the county-level exposure data.

Other measures

County-level concentrations of PM2.5 were obtained for the year 2006 from the Centers for Disease Control and Prevention (CDC) WONDER database [25]. A full description of these data can be found on the website. Briefly, this database includes PM2.5 concentrations measured daily in outdoor air and geographic aggregates of these measures of fine particulate matter. To create these data, two sources of environmental data, US EPA AQS PM2.5 in-situ data and NASA MODIS aerosol optical depth remotely sensed data, were used as input to a surfacing algorithm, and continuous spatial surfaces (grids) of daily PM2.5 for the whole conterminous U.S. were created for 2003-2011. County-level data were aggregated from grids with a 10-kilometer-square spatial resolution [26]. Aggregated county-level PM2.5 values provided directly by the WONDER database were employed for this study for the year 2006. Particles with aerodynamic diameter < 2.5 micrometers (PM2.5) were the focus of this work, as evidence already exists for the effect of larger particulate matter in the etiology of kidney disease, and it is believed that finer particles pose a greater health risk because they are more readily inhaled, can lodge deeply in the lungs, and can enter the blood stream [27,28].
A 6-category ordinal variable for urban/rural status was used to account for other unmeasured differences between counties, as this measure is known to be associated with potential confounders, such as obesity, physical activity, nutrition, and poverty, as well as with air pollution levels [29-33]. Data were derived from the CDC's 2006 Urban-Rural Classification Scheme for Counties [34]. The six categories included: two large metropolitan groups, consisting of > 1 million residents, divided by designation as central or fringe/suburban; medium metropolitan, with 250,000-999,999 residents; small metropolitan, with < 250,000 residents; and two non-metropolitan categories, micropolitan if containing an urban cluster of > 10,000 residents and non-core if containing no urban cluster. County-level data on poverty and education, from the 2006 Behavioral Risk Factor Surveillance System (BRFSS) Supplement [35], were examined as markers of socioeconomic status, but were not associated with CKD in our analysis after accounting for the urban-rural status of each county and were therefore not used in the final models.

Statistical analysis

Although the main exposure variables in this analysis are ecologic (aggregated) measures of PM2.5 at the county level, the unit of analysis is the individual-level outcome of CKD status, and all covariates except urban/rural status are measured at the individual level [36]. The county of residence for every individual in the study population was indicated by the 5-digit Federal Information Processing Standard (FIPS) code and was used to merge the air pollution data to each patient in the sample [37]. Descriptive statistics are presented for the total sample and for the sample stratified by the median PM2.5 concentration (12.2 μg/m3), which lies very near the middle of the bimodal distribution of this measure. The individual-level diagnosis of CKD was modeled as the outcome, using modified Poisson regression with robust errors. This modeling approach was chosen, as opposed to logistic regression, because it yields estimates of prevalence ratios (PRs), rather than odds ratios [38,39]. The final model accounted for clustering of the outcome within counties, using a compound symmetry covariance matrix (a sketch of this model is given below). Two parameterizations of county-level mean PM2.5 were examined: as a continuous variable (expressed per increase of 4 μg/m3, which is approximately the interquartile range) and by quartiles. All PM2.5 measures are reported in micrograms per cubic meter (μg/m3). PR estimates, comparing mean exposure levels, were adjusted for the following available potential confounders: age, sex, race/ethnicity (Non-Hispanic White, Non-Hispanic Black, Hispanic, Asian, North American Native, Other, and Unknown), diagnosed hypertension, diagnosed diabetes, and urban/rural status.

Results

Of 3,143 U.S. counties, CKD diagnosis information was available for enrollees within 3,108, PM2.5 data were available for 3,111, and both variables were available for 3,049 counties. The overall prevalence of diagnosed CKD in the sample was 17.2%. When examined at the county level, the median county-level prevalence of diagnosed CKD in the Medicare population was 16%, ranging from 0% to 60%, with an interquartile range of 13%-19%. The median county-level PM2.5 concentration was 12.2 μg/m3, ranging from 6.1 to 16.8 μg/m3, with an interquartile range of 10.2-13.8 μg/m3. The distribution of county-level PM2.5 concentration was bimodal, as displayed in Fig 1.
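The modeling approach described under Statistical analysis can be sketched in a few lines of Python; this is a minimal illustration, not the authors' code, and the file and column names (medicare_ckd_pm25.csv, ckd, pm25, fips, and so on) are hypothetical.

```python
# Modified Poisson regression with robust errors and exchangeable
# (compound-symmetry) within-county correlation, via statsmodels GEE.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical person-level file, already merged with county PM2.5 by FIPS code.
df = pd.read_csv("medicare_ckd_pm25.csv")
df["pm25_per4"] = df["pm25"] / 4.0  # scale so the PR is per 4 ug/m3 (approx. IQR)

model = smf.gee(
    "ckd ~ pm25_per4 + age + C(sex) + C(race) + htn + dm + C(urban_rural)",
    groups="fips",                            # counties are the clusters
    data=df,
    family=sm.families.Poisson(),             # log link: exp(coef) is a PR
    cov_struct=sm.cov_struct.Exchangeable(),  # compound symmetry within counties
)
result = model.fit()  # GEE reports robust (sandwich) standard errors by default
print(result.summary())
print("PR per 4 ug/m3:", np.exp(result.params["pm25_per4"]))
```

The binary outcome with a log link is what makes this a "modified" Poisson model: with a common outcome such as CKD, exponentiated coefficients are prevalence ratios rather than the inflated odds ratios a logistic model would give.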
When examining characteristics of Medicare enrollees by the two clusters of PM2.5 concentration, high (PM2.5 > 12.2 μg/m3) and low (PM2.5 ≤ 12.2 μg/m3), we see that enrollees in counties with higher PM2.5 were slightly younger, included a higher proportion of females and non-Hispanic Blacks, had a higher prevalence of both diabetes and hypertension, and included a higher proportion of enrollees living in large metropolitan areas (Table 1). There was a clear pattern of higher prevalence of diagnosed CKD in large central metropolitan areas (18.4%), decreasing steadily to 16.0% and 15.1% in micropolitan and non-core counties, respectively (p<0.0001). Due to this observed association, all models examining the association between diagnosed CKD and fine particulate matter in air accounted for the county's urban-rural status, as well as the risk factors shown in Table 1. We examined PM2.5 concentration both as a continuous variable and as a 4-category ordinal variable (quartiles) in separate analyses. In unadjusted models, a 4 μg/m3 higher PM2.5 concentration was associated with a PR of diagnosed CKD of 1.12 (95% CI: 1.10-1.14). After adjustment for patient characteristics and urban/rural status, the PR was 1.03 (95% CI: 1.02-1.05). Categorizing average PM2.5 level in quartiles and treating the first (lowest) quartile, where PM2.5 < 10.2 μg/m3, as the reference group, the adjusted PR was 1.02 (95% CI: 0.99-1.04) for counties in the second quartile, 1.01 (95% CI: 0.98-1.03) for the third quartile, and 1.05 (95% CI: 1.03-1.07) for the fourth quartile, where the average PM2.5 level was ≥13.8 μg/m3 (Fig 3).

Discussion

In a large population of subjects aged 65 years and older enrolled in the Medicare insurance program of the United States, county-level concentration of ambient PM2.5 was positively associated with diagnosed CKD. This association was attenuated, but remained statistically significant, even after adjusting for individual demographic characteristics, diagnosed hypertension and diabetes, and county-level urban-rural status. In all models, higher average concentrations of PM2.5 were associated with a higher prevalence of CKD. We also found no evidence that this association is due to differences in age, sex, race, or the prevalence of diabetes or hypertension between regions. Although there could be other confounders affecting this relationship, these characteristics are some of the most common risk factors related to CKD. While the effect size of PR = 1.05 may not seem large, one should remember that this effect applies to all residents of the county, not just those of a specific age or race/ethnicity, or those with a certain comorbid condition. The effect size is also similar to those found in studies of other chronic disease outcomes [40]. This finding is important with regard to standards for air quality. The U.S. Environmental Protection Agency currently sets the annual standard for PM2.5 at 12 μg/m3, which could be interpreted to mean that levels below this threshold are deemed safe, and levels above it unsafe [27]. This value is much lower than the daily level typically considered healthy for sensitive groups (~40 μg/m3), and almost half of the counties had mean PM2.5 levels above the annual standard [19]. Moreover, it is not entirely clear that lower levels are indeed safe for those with health conditions that raise their risk of cardiopulmonary complications.
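As a back-of-the-envelope illustration of the continuous-exposure estimate (our own arithmetic, using only the reported adjusted PR of 1.03 per 4 μg/m3 and the observed county range of 6.1-16.8 μg/m3):

```latex
\mathrm{PR}_{6.1 \to 16.8}
\;\approx\; 1.03^{\,(16.8 - 6.1)/4}
\;=\; 1.03^{\,2.68}
\;\approx\; 1.08 .
```

That is, the adjusted model implies a roughly 8% higher prevalence of diagnosed CKD in the highest-exposure counties relative to the lowest-exposure counties.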
If these findings can be validated in future research, they may point to the importance of assuring adequate protection from environmental air pollution for individuals at risk of, or already suffering from, varying severities of CKD. The findings from this study are consistent with results from studies that have examined the association between air pollution and other chronic conditions, such as cardiovascular and pulmonary disease, but these studies are few in number. In one study, Schwartz et al. [40] found that, after controlling for age, race, sex, and cigarette smoking, annual average total suspended particulate concentrations were associated with an increased risk of chronic bronchitis (odds ratio = 1.07; 95% CI: 1.02-1.12). Most studies of air pollution and its effects on health have been limited to cardiac or mortality events [41-45]. Our results are also consistent with, and extend, recent work examining associations between air pollution and kidney disease [3-7], while focusing on a large, novel population of elderly Medicare recipients. Future research among kidney disease patients will examine hospitalization and mortality, as well as incidence of kidney disease in this patient population.

The significant overlap in the risk factors, pathogenesis, progression, and complications of cardiovascular and kidney disease is, in general, well recognized [46,47]. The cardiovascular system is especially vulnerable even in early stages of CKD, with early onset of endothelial dysfunction [48]. Free radical-mediated injury, activation of vasoactive and pro-inflammatory cytokines, the central role of activation of renin-angiotensin-aldosterone, autonomic imbalance with abnormalities in heart rate variability, increased arterial stiffness, accelerated atherosclerosis, and a high propensity to acute cardiovascular events, including sudden death, are common to both cardiovascular disease (CVD) and CKD [49-51]. A number of other metabolic abnormalities unique to the uremic milieu additionally render patients with CKD even more vulnerable to environmental and other insults and stressors, such as air pollution. The kidney, while seemingly remote from air in the environment, is intimately linked to the circulatory system (by virtue of the high rates of blood flow through its parenchyma) and therefore to the environment, thereby sharing vulnerability with the respiratory and cardiovascular systems [52].

We recognize that the association between air pollution levels and the prevalence of diagnosed CKD does not indicate a (causal) effect and may be confounded by county-level differences in a number of unmeasured characteristics, including health system capacity and other environmental factors. By adjusting for each county's urban-rural status, we have aimed to minimize this potential confounding. This study was restricted to a population at the highest risk for kidney disease, Medicare enrollees (aged 65 years and older), and the results are not generalizable to younger age groups. While older individuals are at high risk, an examination of younger ages would benefit any future work. The main methodological limitations of the current work are its cross-sectional design and lack of individual-level exposure data. This study was also limited to the use of administrative healthcare claims data for identification of CKD. It is likely that individuals with early stages of CKD do not have a diagnosis and are therefore classified as non-cases.
We chose to use the list of ICD-9 codes utilized by the United States Renal Data System, which includes all diagnoses of CKD, because although some diagnoses, such as posterior urethral valves or pyelonephritis, are not likely associated with air pollution, we cannot exclude this possibility based on our study. Also, a systematic review of coding for CKD and related conditions has shown the sensitivity of using only diagnostic codes to be low, typically under 50% [53]. Moreover, these conditions would be extremely rare in the population under study. The reliance on claims data also precluded an examination of these associations by stage of CKD. Future work would benefit from focusing on cohorts that include laboratory data for use in classifying individuals into appropriate CKD categories. The authors also acknowledge that there may be air pollution data quality limitations and refer the reader to the CDC WONDER website for details.

If a variety of studies consistently support the hypothesis that air pollution is a risk factor for kidney disease incidence, progression, and other complications, it may lend greater impetus to public health and clinical efforts, not only to offer greater protection to these higher-risk individuals, but also to establish evidence for lower thresholds in air pollution standards in general. Specific toxins in the environment (e.g., lead, aristolochic acid, heavy metals, etc.) have definitively been linked with nephrotoxicity, and minimal exposure has been advised. It is well known that patients with kidney disease are especially susceptible to cardiopulmonary complications and, when in highly polluted areas, may benefit from the use of preventive measures that are relatively simple and easy to implement. It may also be advisable for such individuals to consider limiting long hours of commuting in high-traffic areas, where there is significantly higher exposure to environmental pollutants and other stressors [54,55].

Although this study included over one million individuals, the cross-sectional design and lack of individual exposure data severely limit causal inference. It does, however, support further research in this area using more detailed air pollution exposure data mapped to the patient- or ZIP-code level, rather than the cruder averaged county-level estimates utilized in this study. If this association is borne out by future studies, it would have clinical and public health implications for reducing air pollution exposure for those with CKD and also for those at risk of the condition. The potential public health significance of this finding is even greater for regions and countries with much higher levels of air pollution than the United States.
Analysis of sex differences in dietary copper-fructose interaction-induced alterations of gut microbial activity in relation to hepatic steatosis

Background: Inadequate copper intake and increased fructose consumption represent two important nutritional problems in the USA. Dietary copper-fructose interactions alter gut microbial activity and contribute to the development of nonalcoholic fatty liver disease (NAFLD). The aim of this study is to determine whether dietary copper-fructose interactions alter gut microbial activity in a sex-differential manner and whether sex differences in gut microbial activity are associated with sex differences in hepatic steatosis.

Methods: Male and female weanling Sprague-Dawley (SD) rats were fed ad libitum an AIN-93G purified rodent diet with defined copper content for 8 weeks. The copper content was 6 mg/kg in the adequate copper diet (CuA) and 1.5 mg/kg in the marginal copper diet (CuM). Animals had free access to either deionized water or deionized water containing 10% fructose (F) (w/v) as the only drink during the experiment. Body weight, calorie intake, plasma alanine aminotransferase and aspartate aminotransferase, and liver histology, as well as liver triglyceride, were evaluated. Fecal microbial contents were analyzed by 16S ribosomal RNA (16S rRNA) sequencing. Fecal and cecal short-chain fatty acids (SCFAs) were determined by gas chromatography-mass spectrometry (GC-MS).

Results: Male and female rats exhibited similar trends of changes in body weight gain and calorie intake in response to dietary copper and fructose, with generally higher levels in male rats. Several female rats in the CuAF group developed mild steatosis, while no obvious steatosis was observed in male rats fed the CuAF or CuMF diets. Fecal 16S rRNA sequencing analysis revealed distinct alterations of the gut microbiome in male and female rats. Linear discriminant analysis (LDA) effect size (LEfSe) identified sex-specific abundant taxa in different groups. Further, total SCFAs, as well as butyrate, were decreased in a more pronounced manner in female CuMF rats than in male rats. Of note, the decreased SCFAs were concomitant with reduced SCFA producers, but were not correlated with hepatic steatosis.

Conclusions: Our data demonstrated sex differences in the alterations of gut microbial abundance, activities, and hepatic steatosis in response to dietary copper-fructose interaction in rats. The correlation between sex differences in metabolic phenotypes and alterations of gut microbial activities remains elusive.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13293-020-00346-z.

Introduction

The prevalence of nonalcoholic fatty liver disease (NAFLD) in the USA has increased rapidly in the past two decades, from 19% to 24%, which is close to the global prevalence of 25.24% [1,2]. Based on the epidemiological data on obesity and type 2 diabetes in adults, the estimated prevalence of NAFLD will continue to increase, up to 33.5% by 2030, and nonalcoholic steatohepatitis (NASH) will increase proportionately from 20% of NAFLD to 27%, ranking it as a top indication for liver transplantation [3,4]. Of note, NAFLD and NASH exhibit age and sex differences, with a higher prevalence in men than in premenopausal women. Conversely, a higher rate of NAFLD was found among postmenopausal women [5-7]. In agreement with this finding, sex differences also exist in the risk factors, such as obesity and type 2 diabetes [8,9].
Biological sex differences are exhibited in many physiological phenomena, including fat distribution, triglyceride storage in the liver and muscle [10], and fatty acid and glucose metabolism [11]. Therefore, understanding sex differences in physiology and pathophysiology is required for precision medicine. Sex hormones and sex chromosomes are two major factors driving sex differences [7]. The role of sex hormones has been demonstrated in both human and animal studies. For example, postmenopausal women with estrogen deficiency display a higher risk for NAFLD progression to fibrosis [12]. In contrast, liver injury was improved by hormone replacement therapy in postmenopausal women with type 2 diabetes [13]. Ovariectomized (OVX) female rats exhibit exacerbated hepatic steatosis when exposed to a high-fat high-fructose diet (HFFD), which was reversed by estrogen replacement [14]. A four-core genotype mouse model (XX gonadal male and female, XY gonadal male and female) allows for the identification of whether sex differences arise from the sex chromosome complement. Using this approach, it was revealed that XX mice are prone to developing obesity and fatty liver in response to a high-fat diet, regardless of sex hormones [15]. In addition to genetics and sex hormones, diet is a key environmental factor leading to sex differences in metabolic diseases [16].

Copper and fructose are two dietary factors known to be critical in the pathogenesis of NAFLD [17-22]. Sex differences in the metabolic effects of fructose and/or copper deficiency have been noted in rodents [23-26] as well as in humans [27,28], with more harmful effects reported in males and more protective effects in females, which is consistent with the sex differences in NAFLD [7]. In fact, sex differences in fructose-induced metabolic effects are more complex and vary by tissue and organ [14,29,30]. Although sex hormones are one of the factors leading to sex differences in copper-fructose interaction-induced metabolic disorders [26], the underlying mechanisms are largely unknown.

A growing body of evidence has shown that gut microbiota play a causal role in driving the development of obesity, diabetes, and NAFLD [31-34]. Diet, as one of the most common environmental factors, shapes the gut microbiome [35]. Interestingly, diet-induced alterations of gut microbiota exhibit a sex-dependent phenotype [36,37]. Previous studies have shown that distinct alterations of the gut microbiome are linked to specific metabolic traits [38], as well as to different stages of NAFLD [39,40], leading to the hypothesis that sex differences in the gut microbiota are linked to distinct metabolic phenotypes or disease severity. Our previous studies have shown that dietary copper-fructose interactions shifted the gut microbiota and correlated with the development of hepatic steatosis in male rats [41,42]. Given that diet shapes the gut microbiome in a sex-specific manner [36], we aimed to determine whether dietary copper-fructose interaction alters gut microbiota and induces hepatic steatosis in a sex-dependent manner and whether sex differences in metabolic phenotype contribute to the distinct alterations of the gut microbiota.

Animals and diets

Male and female weanling Sprague-Dawley rats (35-45 g) from Harlan Laboratories (Indianapolis, IN) were fed (ad libitum) an AIN-93G purified rodent diet with a defined copper content. The rats received either 1.5 mg/kg or 6.0 mg/kg of copper as marginal or adequate doses, respectively, for 8 weeks.
Control animals were fed adequate copper with no added fructose. The animals were singly housed in stainless steel cages without bedding in a temperature- and humidity-controlled room with a 12:12-h light-dark cycle. Animals had free access to either deionized water or deionized water containing 10% fructose (w/v). Fructose-enriched drinking water was changed twice a week. Food consumption and body weight were monitored on a weekly basis. After a 2-h fast, all the animals were sacrificed under anesthesia with ketamine/xylazine (100/10 mg/kg, I.P. injection). Blood was collected from the inferior vena cava, and citrated plasma was stored at −80°C for further analysis. Portions of liver tissue were fixed with 10% formalin for subsequent sectioning, while others were snap-frozen in liquid nitrogen. All studies were approved by the University of Louisville Institutional Animal Care and Use Committee, which is certified by the American Association for Accreditation of Laboratory Animal Care.

Hepatic triglyceride assay

Liver tissues were homogenized in 50 mM sodium chloride solution. Hepatic total lipids were extracted with chloroform/methanol (2:1) according to the method described by Bligh and Dyer [43]. Hepatic triglyceride was determined using a commercially available kit (Thermo Fisher Scientific Inc., Middletown, VA, USA).

16S ribosomal RNA (16S rRNA) gene library preparation and sequencing on the Illumina MiSeq

Fecal pellets were collected into sterile tubes at the end of the experiment and stored at −80°C. Microbial genomic DNA was extracted from frozen fecal samples using the DNeasy PowerSoil kit (Cat#: 12888-100, Qiagen, Germantown, MD, USA) according to the manufacturer's instructions. The composition of the fecal microbiota was analyzed using Illumina MiSeq technology targeting the variable V3 and V4 regions of 16S ribosomal RNA. 16S variable regions were amplified using 12.5 ng of microbial genomic DNA. PCR conditions were as follows: 95°C for 3 min; 25 cycles of 95°C for 30 s, 55°C for 30 s, and 72°C for 30 s; and 72°C for 5 min. The primers used for 16S amplicon PCR were: Forward: 5′-TCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCCTACGGGNGGCWGCAG; Reverse: 5′-GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAGGACTACHVGGGTATCTAATCC. Index PCR was performed to attach dual indices and Illumina sequencing adapters using the Nextera Index Kit (Cat#: FC-121-1012, Illumina, San Diego, CA, USA). Each step was followed by a PCR clean-up using AMPure XP beads to obtain a purified library. After libraries were normalized, pooled, and denatured, sequencing was performed using the Illumina MiSeq Reagent Kit v3 (600 cycles, read lengths up to 2 × 300 bp) (Cat#: MS-102-3003, Illumina, San Diego, CA, USA) on an Illumina MiSeq instrument.

Sequencing data analysis

Quality control of raw sequence files was performed using FastQC, and the files were further analyzed using QIIME 2 (version 2019.04) [44]. The workflow is shown in the schematic diagram (supplementary Figure 1). Briefly, the paired-end files per sample were merged and imported into a QIIME 2 artifact. The sequence reads were then demultiplexed and denoised into amplicon sequence variants (ASVs) (supplementary Table 8) using DADA2 in QIIME 2, which can identify more real variants and output fewer spurious sequences than other methods. The resulting feature table and representative sequences were used for the downstream analysis. Rarefaction curves based on the observed operational taxonomic units (OTUs) and the Shannon index generated by QIIME 2 were used as metrics of α-diversity [45].
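The α-diversity metrics named above can also be computed directly from an exported feature table with a few lines of Python; the following is a hedged sketch rather than the authors' pipeline, and the file name and table layout are assumptions.

```python
# Observed OTUs and Shannon index from an ASV/OTU count table with scikit-bio.
import pandas as pd
from skbio.diversity import alpha_diversity

# Hypothetical table exported from QIIME 2: rows = samples, columns = ASVs.
table = pd.read_csv("feature_table.csv", index_col=0)
counts = table.values.astype(int)

# Note: the metric is named 'observed_features' in newer scikit-bio releases.
observed = alpha_diversity("observed_otus", counts, ids=table.index)
shannon = alpha_diversity("shannon", counts, ids=table.index)

summary = pd.DataFrame({"observed_otus": observed, "shannon": shannon})
print(summary.head())
```

Both metrics return one value per sample, which can then be compared across the CuA, CuAF, CuM, and CuMF groups by the nonparametric tests described under Statistical analysis.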
Principal coordinate analysis (PCoA) was performed to compare microbial community structure between groups (β-diversity), using both weighted and unweighted UniFrac [46]. Heat map analysis of OTU abundance was performed using R software (https://www.r-project.org/). The linear discriminant analysis (LDA) effect size (LEfSe) method was used to find the most differentially abundant microbial taxa between the different diets. The analysis was performed on the Galaxy platform (http://huttenhower.sph.harvard.edu/galaxy). The data generated from LEfSe analysis were shown as a cladogram and histogram, with an LDA score > 2 and a significance level of α < 0.05, as determined by the Wilcoxon rank-sum test [47-49]. The 16S data set was used for metagenome predictions using the software package PICRUSt2 [50]. Predictions were based on Kyoto Encyclopedia of Genes and Genomes (KEGG) database pathways [51], and the output was based on the pathway mapping of the MetaCyc database [52]. A Venn diagram was used to show genus distribution between groups.

Short-chain fatty acid (SCFA) measurement by gas chromatography-mass spectrometry (GC-MS)

About 50 mg of cecal and fecal stool samples were weighed, and polar metabolites were extracted for GC-MS analysis using established methods as described previously [53].

Statistical analysis

Data were expressed as mean ± SD (standard deviation) and analyzed using two-way ANOVA to test the factors of copper, fructose, and their interaction (copper × fructose), followed by Tukey's multiple comparison test (see the code sketch at the end of this section). The Kruskal-Wallis test was used for pairwise comparisons between treatment groups (α-diversity). Comparisons of mean distance matrices (β-diversity) between treatment groups were performed using PERMANOVA (a nonparametric method for multivariate analysis of variance) with permutation tests based on the UniFrac distance matrices (999 Monte Carlo permutations). Two-tailed nonparametric Spearman correlation was done with GraphPad Prism. Differences at p ≤ 0.05 were considered statistically significant.

Characterization of the effects of dietary copper-fructose interaction on metabolic phenotypes in male and female rats

Male and female rats exhibit similar trends of changes in body weight and body weight gain in response to dietary copper and fructose, with generally higher levels in male rats (Fig. 1, Tables 1 and 2). Two-way ANOVA showed that the liver weight of female rats, but not male rats, was affected by dietary copper content within the 8-week period. The liver/body weight ratio was altered by both dietary copper and fructose. However, a copper-fructose interaction was apparent only in female rats. While the variations in perigonadal white adipose tissue (WAT) weight, as well as WAT/body weight ratios, were related to dietary copper content in male rats, they were more likely to be affected by dietary fructose in female rats. The energy efficiency ratio (EER, %), i.e., the ratio of body weight gain to total energy intake [54,55], was decreased by dietary fructose in both male and female rats compared to their controls, suggesting that the metabolic effects of fructose may not be attributable to calorie intake. Ad libitum feeding of fructose via drinking water led to a significant increase in water intake and a decrease in pellet food intake. Although there was a trend toward an increase in total energy intake in rats fed fructose compared to those without, the difference did not reach statistical significance in either males or females.
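To make the factorial analysis described under Statistical analysis concrete, here is a minimal sketch (not the authors' code; file and column names are hypothetical) of a two-way ANOVA for the copper and fructose factors, followed by Tukey's multiple comparison test.

```python
# Two-way ANOVA for copper, fructose, and copper x fructose, then Tukey's test.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-animal table: copper (A/M), fructose (yes/no),
# group (CuA/CuAF/CuM/CuMF), and an outcome such as liver triglyceride.
df = pd.read_csv("phenotypes.csv")

fit = smf.ols("liver_tg ~ C(copper) * C(fructose)", data=df).fit()
print(anova_lm(fit, typ=2))  # p values for Cu, F, and the Cu x F interaction

# Tukey's multiple comparisons across the four diet groups
print(pairwise_tukeyhsd(df["liver_tg"], df["group"]))
```

The interaction term C(copper):C(fructose) is what tests whether the effect of fructose depends on dietary copper, which is the central question of the copper-fructose interaction design.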
Plasma triglyceride was significantly elevated in male rats fed fructose, regardless of dietary copper. However, it was only significantly elevated in CuMF female rats compared to marginal copper diet (CuM) female rats. Plasma cholesterol levels were not significantly changed by dietary fructose or copper level in either male or female rats. Plasma NEFA was significantly increased in CuAF male rats compared to adequate copper diet (CuA) male rats. In female rats, fructose feeding led to a trend toward an increase in plasma NEFA levels. Plasma glucose level was significantly elevated by fructose feeding in female rats regardless of dietary copper level, whereas this effect was only observed in male CuA rats (Tables 1 and 2). Collectively, plasma lipids and glucose display distinct alterations in response to dietary copper and fructose between male and female rats.

Hepatic manifestations in response to dietary copper-fructose interaction in male and female rats

Neither male nor female rats showed obvious liver injury in terms of plasma ALT and AST after being exposed to CuA or CuM diets, with or without 10% fructose (w/v), for 8 weeks (Fig. 2a). Three of eight female rats fed CuA plus fructose (CuAF) developed mild steatosis, characterized by macrosteatosis around the portal area. Only very mild microsteatosis could be visualized in either CuMF female rats or male rats fed the marginal copper diet and/or fructose (Fig. 2b). Consistently, hepatic triglyceride was significantly elevated in CuAF female rats compared to control rats (Fig. 2c). Compared to our previous study with the AIN-76 diet (containing 49% sucrose) and 30% fructose (w/v) in the drinking water [21], the extent of hepatic steatosis was mild, and no apparent liver injury was detected. Although only mild steatosis was induced under the current conditions, sex differences were still detected, with female CuAF rats showing hepatic steatosis.

Distinct alterations of fecal gut microbiota in response to dietary copper and fructose between male and female rats as analyzed by 16S rRNA sequencing

To examine whether copper-fructose interaction alters the gut microbiome in a sex-specific manner, we performed 16S rRNA sequencing of fecal stool DNA. In male rats, either fructose or CuM resulted in a trend toward decreased alpha-diversity in terms of observed OTUs. However, only the difference between CuA and CuAF reached statistical significance (CuA versus CuAF, p = 0.037), suggesting that fructose feeding led to reduced species richness in male rats [56]. There were no significant differences between groups of female rats in terms of observed OTUs, suggesting that neither fructose nor CuM alters the species richness of the gut microbiota in female rats. There was no significant difference between groups of either male or female rats in terms of the Shannon index (Fig. 3a, supplementary Table 1). Beta-diversity was evaluated by UniFrac analysis [46]. Unweighted UniFrac is a qualitative β-diversity measure, which detects differences in the presence or absence of lineages of bacteria in different communities [57]. Unweighted UniFrac analysis demonstrated that the mean distances between groups CuA and CuAF, CuA and CuM, and CuA and CuMF were significantly different in male rats (p < 0.05) (Fig. 3b, right panel, supplementary Table 2). In female rats, unweighted UniFrac analysis showed significant differences between groups CuM and CuMF, and CuA and CuMF (p < 0.05) (Fig. 3b, right panel, supplementary Table 2).
The weighted UniFrac measure was used for detecting differences in abundance [57], and no significant differences were detected between the four treatment groups in male or female rats (Fig. 3b, left panel, supplementary Table 2). These results suggested that either dietary fructose (CuAF) or copper (CuM) or their combination (CuMF) altered bacterial communities in male rats, whereas bacterial communities were altered by dietary copper (CuM) or copper plus fructose (CuMF) in female rats. Moreover, the baseline bacterial communities (CuA) were significantly different between male and female rats. At the phylum level, fructose feeding led to a remarkable increase in the abundance of Bacteroidetes and Proteobacteria and a decrease in Firmicutes in female rats, independent of dietary copper content. In male rats, only the abundance of Bacteroidetes and Proteobacteria was altered by dietary fructose, and the effect was less pronounced than in female rats (Fig. 3c, supplementary Tables 3 and 4). In agreement with this, more families and genera under the phyla Bacteroidetes, Firmicutes, and Proteobacteria were altered in female rats compared to male rats. For example, Bacteroidaceae, Bacteroides, Lachnospiraceae, Erysipelotrichaceae, Allobaculum, Alcaligenaceae, and Sutterella were markedly shifted in female rats, but not in male rats. Even among the commonly changed taxa, such as Porphyromonadaceae, Parabacteroides, and Blautia, the factors leading to such changes differed between males and females, as shown by two-way ANOVA (supplementary Tables 3, 4, 5, 6 and Fig. 4). In addition to the sex differences in response to dietary fructose and marginal copper, the composition of gut microbiota also differed between male and female rats when exposed to the adequate copper diet, which was considered the normal control. A higher abundance of Firmicutes and a lower abundance of Bacteroidetes were observed in female rats than in male rats, leading to a higher Firmicutes/Bacteroidetes ratio in female rats (12.06 versus 7.47, female versus male). Collectively, female rats exhibited more pronounced alterations of gut microbiota, and fructose played a dominant role.

LEfSe identified microbiota signatures associated with dietary copper and fructose

To further identify more specific taxa changed in the gut microbiome by dietary copper and fructose, LEfSe analysis was performed using 16S rRNA metagenomic data [47]. Fifteen and 26 differentially abundant taxa were identified with LDA scores higher than 2 in male and female rats, respectively (Fig. 5a and b). Proteobacteria and Bacteroidetes were enriched in the CuAF and CuMF groups, respectively, in both male and female rats. No specific taxa were identified to be enriched in CuM rats.

Fig. 2 b Representative photos of liver histology using H&E staining. c Hepatic triglyceride. CuAF female rats had macrosteatosis (arrows) around the portal area. Microsteatosis (arrowheads) was observed in female CuMF rats as well as in some male rats as indicated. Data represent means ± SD (n = 7-8). Statistical significance was set at p ≤ 0.05. P values displayed are for the factors copper (Cu), fructose (F), and interaction (Cu × F) using two-way ANOVA followed by Tukey's multiple comparisons test. A, adequate copper diet; AF, adequate copper diet + 10% fructose (w/v) in the drinking water; M, marginal copper diet; MF, marginal copper diet + 10% fructose (w/v) in the drinking water
Fig. 5 Cladogram and histogram with LDA score ≥ 2 showing the features with differential abundance of taxa between groups in a male rats and b female rats (Wilcoxon rank-sum test). c Venn diagram. Each circle's diameter in the cladogram is proportional to the taxon's abundance. From the outer circle to the inner circle, the circles represent phylum, class, order, family, and genus. Differentially abundant taxa in specific groups are represented in different colors, with the exception that yellow represents non-significance in the cladogram. M, male; F, female; Cu, copper; A, adequate copper diet; AF, adequate copper diet + 10% fructose (w/v) in the drinking water; M, marginal copper diet; MF, marginal copper diet + 10% fructose (w/v) in the drinking water

In particular, abundant beta-Proteobacteria and Erysipelotrichi in CuMF rats as well as abundant alpha-Proteobacteria in CuAF rats were identified in female rats. Thus, distinct abundant taxa were identified by LEfSe analysis between males and females. We further performed correlation analysis between liver fat content and the genera identified by LEfSe analysis in female CuAF rats. Unfortunately, the abundance levels of these genera were not correlated with the liver fat content (supplementary figure 2). To further explore the functional changes of the gut microbiome in response to dietary copper and fructose, we performed PICRUSt2 analysis. In male rats, 40 significant differences in the functional profiles were identified by PICRUSt2 analysis between groups CuA and CuM, mainly involving fatty acid biosynthesis, electron carrier biosynthesis, lipopolysaccharide biosynthesis, and vitamin B6 biosynthesis, which were enriched in CuM male rats. Twenty-three significantly enriched pathways were predicted in male CuAF rats compared to male CuA rats. In female rats, 34 significant differences in the functional profiles were identified between the CuA and CuMF groups, involving branched-chain amino acid biosynthesis, fermentation, nucleotide biosynthesis and degradation, folate biosynthesis, and phospholipid biosynthesis, with lower abundance in CuMF rats (supplementary Table 7). Taken together, significant functional alterations of microbiota in female rats were induced mainly by the combined effects of copper and fructose (CuMF), whereas they were induced by either copper or fructose singly in male rats. The Venn diagram showed 51 genera shared by all four groups in both male and female rats. In total, 65 and 56 genera were detected in male and female rats, respectively. Fructose and marginal copper led to a reduced number of detected genera in male rats, but an increased number in female rats. Six genera were not altered by fructose or the marginal copper diet in male rats, but only two were not altered in female rats (Fig. 5c), suggesting that more changes in genus abundance occur in female rats.

Sex differences in fecal short-chain fatty acids in response to dietary copper-fructose interaction

To better understand the sex differences in microbial activities induced by dietary copper and fructose, we measured SCFAs by GC-MS in cecal and fecal contents. Acetate, propionate, and butyrate were the predominant SCFAs in cecal and fecal contents. Overall, the levels of total as well as individual SCFAs were higher in cecal contents than in fecal contents in both male and female rats. While the level of total cecal SCFAs was higher in males, the level of total fecal SCFAs was comparable between male and female rats.
Fructose feeding resulted in a decrease of total SCFAs in both cecal and fecal contents in CuA- and CuM-fed rats; however, a significant decrease was found in female CuMF rats. A similar trend of alterations in SCFAs, but to a lesser extent, was observed in male rats, as shown in Fig. 6a. Consistently, acetate, propionate, and butyrate were all markedly decreased in female CuMF rats (Fig. 6b). In addition, the decrease in total SCFAs was associated with a relatively increased proportion of acetate and a decreased proportion of butyrate in both cecal contents (acetate:propionate:butyrate = 63.3:18.4:18.4 versus 66.9:19.5:13.6; CuA versus CuMF) and fecal stool (68.7:13.1:18.2 versus 73.7:16.6:9.7; CuA versus CuMF) of female rats. This effect was less prominent in male rats (Fig. 6c). Collectively, a substantial decrease of SCFAs was seen in female rats, most profoundly in the CuMF group. Two-way ANOVA showed that the alteration in SCFAs was most likely due to the additive effect of copper and fructose in female rats, whereas the decrease in SCFAs in male rats was attributable to copper alone.

Discussion

Copper-fructose interaction-induced metabolic effects exhibit sex dimorphism [23,25]. Sex-specific alterations of gut microbiota in response to a specific diet have been demonstrated in a variety of studies [59-61]. Given that the gut microbiota play a causal role in driving the development of metabolic diseases, we aimed to determine whether sex-specific alterations of the gut microbiota are linked to hepatic steatosis. Our data showed that sex differences do exist in the gut microbiota, gut microbiota metabolites such as SCFAs, and hepatic steatosis following dietary copper and fructose exposure. Female rats exhibited more pronounced alterations in the abundance of various taxa than did male rats at multiple taxonomic levels, including phylum, family, and genus. The number of distinct abundant taxa identified by LEfSe was also higher in female rats than in male rats. In addition, SCFAs were decreased to a greater extent in female rats compared to male rats, particularly in the CuMF group. Moreover, female rats with an adequate copper diet developed mild but apparent steatosis after 8 weeks of added fructose feeding (CuAF), but female CuMF rats, which showed the most significantly altered gut microbial activity, did not. Therefore, the altered gut microbial activity did not correlate with hepatic fat accumulation.

SCFAs are the end products of microbial fermentation of indigestible fiber, and they play a critical role in energy homeostasis and metabolism [62]. In our study, we found significantly decreased SCFAs, particularly butyrate, concomitant with reduced butyrate producers, Lachnospiraceae and Ruminococcaceae [63], in CuMF female rats, implying that the most significantly altered gut microbial activities were in this group. We found mild hepatic steatosis in CuAF female rats; thus, it is unlikely that this hepatic steatosis is attributable to the metabolic effects of gut microbiota. Accelerated de novo lipogenesis (DNL) is known to contribute to fructose-induced hepatic steatosis [64,65]. However, the underlying mechanisms are unclear. A recent study demonstrated a two-part mechanism leading to fructose-induced hepatic steatosis. First, gut bacteria-derived acetate serves as a substrate for acetyl-CoA synthesis via acyl-CoA synthetase short-chain family member 2 (ACSS2) in the liver.
Second, fructose metabolism in hepatocytes activates a signal leading to lipogenic gene expression [66]. Interestingly, the most significantly changed SCFAs occurred in CuMF rats. We also observed this effect in our previous study, in which rats were exposed to a high-fructose diet via 30% fructose (w/v) in the drinking water and a sucrose-enriched diet (AIN-76) [21]. This finding suggests that hepatic steatosis may be related to the amount of fructose intake. In support of this, a recent study demonstrated that dietary fructose is primarily metabolized in the small intestine and only excess fructose intake spills over to the colon microbiota and the liver [67]. Previous studies showed that either inhibition of fructose metabolism in the liver [68] or elimination of gut microbiota by antibiotics [69] protected against fructose-induced hepatic steatosis, indicating that fructose metabolism in both the liver and gut microbiota is required to facilitate the development of steatosis. When a large amount of fructose intake saturates the metabolic capacity of the small intestine, excess fructose presumably proceeds to the colon, the gut microbiota, and the liver. However, how excess fructose is distributed and metabolized among the colon microbiota, the liver, and other tissues when a modest amount of fructose is ingested remains unclear. It has been shown that dietary copper-fructose interaction exacerbates copper deficiency-induced metabolic syndrome, likely due to impaired intestinal copper absorption caused by excess fructose ingestion [21,70]. Whether the extent of the interaction relates to the relative amounts of copper and/or fructose, and the subsequent metabolic effects, remains largely unknown and warrants further study. Despite significantly changed gut microbiota and decreased SCFAs in CuMF rats, only a few of the female rats in the CuAF group developed modest steatosis, suggesting that decreased SCFAs and the altered gut microbial activities were not sufficient to lead to hepatic steatosis in female CuMF rats. Of note, Porphyromonadaceae and Parabacteroides are two of the microbiota signatures associated with the CuAF diet in female rats, although with relatively low abundance (1.52%); this differs from the signatures identified by LEfSe in male rats. Whether an increased abundance of Porphyromonadaceae and Parabacteroides plays a causal role in fructose-induced hepatic steatosis needs to be examined.

Fig. 6 Alterations of cecal and fecal SCFA levels induced by dietary copper and fructose. a Total SCFA levels. b SCFA levels (C2-C4). c Percentage of total SCFAs. Data represent means ± SD (n = 7-8). Statistical significance was set at p ≤ 0.05. P values displayed are for the factors copper (Cu), fructose (F), and interaction (Cu × F) by two-way ANOVA with Tukey's multiple comparisons test. * versus CuA; # versus CuAF; $ versus CuM. Cu, copper; A, adequate copper diet; AF, adequate copper diet + 10% fructose (w/v) in the drinking water; M, marginal copper diet; MF, marginal copper diet + 10% fructose (w/v) in the drinking water. C2, acetic acid; C3, propionic acid; C4, butyric acid

Sex differences in fructose-induced metabolic effects are mixed [24,71,72]. In contrast to previous studies on copper-fructose interactions [23,25,26], our results showed that female rats are relatively sensitive to fructose-induced hepatic steatosis. The discrepancy may be attributed to several factors. First is the dose of copper and fructose.
A lower dose of copper (0.6 ppm) and a higher dose of fructose (30-62%) were used in Fields' as well as in Morrell's studies [23,26]. It appeared that males are more sensitive to the deleterious effects of copper deficiency. In our study, a marginal copper diet (1.5 ppm) and 10% fructose (w/v) in the drinking water were used, presumably leading to less-pronounced copper-fructose interactions and metabolic effects than in previous studies [23,26]. Second, the activities of fructose-metabolizing enzymes and intermediate metabolites differed by sex and copper level [73]. In fact, the activities of liver enzymes involved in lipogenesis were affected not only by the type of carbohydrate but also by the quantity [74]. Lastly, differences in facilities, diet components, and species, as well as experimental durations, may all contribute to the discrepancy [25,75,76]. In support of our results, a previous study demonstrated that weanling female rats exhibit a higher rate of acetate incorporation into lipids in the liver compared to male rats [77], suggesting a higher lipogenic capacity in female rats. However, there are species differences in lipogenic enzyme activity in response to carbohydrate [74]. In human studies, the fructose-induced increase in hepatic DNL and decrease in fatty acid oxidation were more pronounced in men and premenopausal women than in postmenopausal women [28,65,78,79]. Sex hormones are known factors regulating the sex dimorphism of fructose-related metabolic effects [7]. However, the molecular underpinnings remain elusive. Recent studies showed that GLUT8 mediates distinct metabolic effects between males and females in response to dietary fructose [29,30,80]. GLUT8 is a dual-specificity glucose and fructose transporter, which was found to be abundantly expressed in both murine and human liver and intestine [30,80,81]. Interestingly, while GLUT8 mutation does not alter intestinal fructose absorption in male mice [29], it enhances intestinal fructose absorption in female mice, which was associated with exacerbated hypertension, hyperinsulinemia, and hyperlipidemia in those animals when they were fed a high-fructose diet [30]. Conversely, GLUT8-deficient male mice are protected from high-fructose diet-induced dyslipidemia, glucose intolerance, and hypertension [29]. These studies revealed an important molecular mechanism underlying the tissue-specific and sex-specific divergence in responses to fructose. A potential limitation of the current study is the one-time analysis of gut microbiota and hepatic steatosis. Although female rats displayed earlier development of steatosis, it is difficult to predict the ultimate severity of steatosis and disease progression. Since male rats exhibit decreased diversity of the gut microbiome, and given that microbial gene richness is associated with inflammation, insulin resistance, and dyslipidemia [82,83], it is plausible that male rats would develop steatosis with a prolonged duration on the experimental regimen. Thus, long-term evaluation at multiple time points would provide more accurate profiles of disease progression in the context of sex differences. Moreover, sex differences observed in animal studies arise under strictly defined experimental conditions; therefore, a caveat must be noted when extrapolating animal data to humans, as humans have much more complex genetic and environmental factors than experimental animals.
Perspectives and significance

In summary, our current study provides evidence of sex-specific alterations in gut microbial abundance, activities, and hepatic steatosis in response to dietary copper-fructose interaction in a rat model. However, the correlation of sex differences in hepatic steatosis with alterations of gut microbial activities was not established under the current experimental conditions. Future studies deciphering the molecular mechanisms as well as tissue-specific effects would help us better understand sex-specific responses to dietary copper-fructose interactions.

Conclusions

Our data demonstrated sex differences in the alterations of gut microbial abundance, activities, and hepatic steatosis in response to dietary copper-fructose interaction in rats. The correlation between sex differences in metabolic phenotypes and alterations of gut microbial activities remains elusive.
Optimal synthesis into fixed XX interactions

We describe an optimal procedure, as well as its efficient software implementation, for exact and approximate synthesis of two-qubit unitary operations into any prescribed discrete family of XX-type interactions and local gates. This arises from the analysis and manipulation of certain polyhedral subsets of the space of canonical gates. Using this, we analyze which small sets of XX-type interactions cause the greatest improvement in expected infidelity under experimentally-motivated error models. For the exact circuit synthesis of Haar-randomly selected two-qubit operations, we find an improvement in estimated infidelity by ~31.4% when including alongside CX its square- and cube-roots, near to the optimal limit of ~36.9% obtained by including all fractional applications of CX.

Introduction

In this paper, we describe an optimal synthesis routine for two-qubit unitary operations which targets any discrete family of two-qubit gates, each locally equivalent to some exp(−ia XX). We refer to such gates as being of XX-type and note that this class includes all controlled unitaries. Gate sets of XX-type are common on contemporary platforms: the gate CX is an example, and synthesis routines for it have long been known to give rise to algorithmic schemes for universal quantum computation, making it an attractive target for device engineers. The physical processes which give rise to the operation CX can typically be truncated to produce "fractional applications" CX^α for 0 ≤ α ≤ 1, each of which is also of XX-type, giving rise to an infinite family of further examples. Such fractional applications can be found on devices based on superconducting qubits (e.g., IBM's), as well as on those based on ion traps. Though not required for universal computation, the availability of these "overcomplete" basis gates has the potential to yield more efficient synthesized circuits, particularly if the error magnitude of CX^α correlates with α: while the universally programmable CX circuit invokes CX three times, the universally programmable fractional CX circuit invokes CX^α, CX^β, and CX^γ with α + β + γ = 3/2. In practice, however, these parametric families are difficult to operate. The relationship between the degree of physical process truncation and the value α is often nonlinear and prone to imperfect measurement, and constraints in the steering electronics (e.g., waveform sample rate) can make truncations unavailable below some threshold, so that wholesale use of these families may be prohibited on realistic hardware. Still, precision can be guaranteed for any particular value of α, which gives rise to the following question:

Question. Given a fixed "calibration budget" which permits the tuning of n fractional operations, which set of values α₁, …, αₙ maximizes (average-case) device performance? How does one efficiently find expressions for generic two-qubit unitaries in terms of these operations? How does one simultaneously guarantee the optimality of such expressions, as measured against device performance?

We answer this question fully. Our fundamental results are an efficient test for when a two-qubit unitary operation admits expression as a circuit using any particular sequence of XX-interaction strengths
(α₁, …, αₙ) with local gates interleaved (Theorem 4.1), an efficient synthesis routine for manufacturing such circuits (Procedure 6.1, Theorem 5.5), and an efficient routine for producing the best approximation (in average gate fidelity) within the set of such circuits (Procedure 6.8). These tools combine to give an optimal synthesis scheme for reasonably behaved cost functions (e.g., average gate infidelity). An implementation of our technique can be found in Qiskit's quantum_info subpackage as the class XXDecomposer [3], where it can be verified that it outperforms blind numerical search in both wall time and output quality (cf. Figure 8). We leverage these results to explore the design space of gate set extensions where we constrain α₁, …, αₙ to be drawn from some small, fixed set of pretuned angles.

Figure 1: Four syntheses of a two-qubit operator U ∈ SU(4) with canonical coordinate (0.968, 0.273, 0.038). (1) An exact, optimal synthesis into a triple of CX gates. (2) An exact, optimal synthesis into a triple of XX_{π/8} gates. (3) An exact, optimal synthesis into four XX_{π/12} gates. (4) An exact, optimal synthesis into a mixed set of gates. Finally, we include the relative infidelity costs of these syntheses in an error model where XX gate infidelity is linearly related to the parameter with a small affine offset. An accompanying table compares relative infidelity across the methods of [5], [6], and ours; the row visible here shows the gate set {…, XX_{π/8}, XX_{π/12}} at n/a, n/a, and 39%, respectively. Relative infidelity: We assume that XX gate infidelity is linearly related to the parameter with a small affine offset and that infidelity is approximately additive over that of its gates. We refer to the infidelity of circuit (1) as the "baseline", and we report as "relative infidelity" the percentage of this value achieved by other synthesis strategies. See Figure 8 and Section 7 for similar statistics where U is allowed to range.

For experimentally realistic error models (in practice, we find that these models amount to a weighted count of circuit elements, where the weights depend linearly on the exponents αⱼ), our main findings are first that including first CX^{1/2} and then CX^{1/3} gives significant improvement over CX alone at several common tasks (specifically, we examine synthesis of random unitaries, as encountered during whole-circuit resynthesis, reported on in Figure 18; and we examine synthesis of certain structured operators like QFT, as encountered in peephole optimization of highly structured circuits, reported on in Figure 2), and second that these two gates capture almost all of the benefit of allowing α₁, …, αₙ to be drawn without constraint (cf. Figure 15). Finally, we note that some of our proofs rely on using a computer algebra system to manipulate polytopes, and we have released both this framework and the proof software under the Qiskit umbrella of packages.

Figure 2: Qiskit syntheses of QFT circuits over n qubits, targeting a family of qubits supporting either S = {CX} or S = {XX_{π/4}, XX_{π/8}, XX_{π/12}} and with all-to-all connectivity. At right, we include the expected circuit infidelity, reported as a fraction of that of traditional synthesis methods, under the assumptions that XX gate infidelity is linearly related to the parameter with a small affine offset and that infidelity is approximately additive. In the limit of a large qubit count, the expected infidelity of a QFT circuit synthesized to the fractional gate set drops by two-thirds that of the standard CX-based gate set.
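To make the relative-infidelity arithmetic of Figures 1 and 2 concrete, here is a toy recalculation under the stated affine error model. The values of m and b below are the experimentally measured ones quoted later in Section 7, the gate lists correspond to syntheses (1)-(3) of Figure 1, and additivity of infidelity is the same approximation made in the text.

from math import pi

m = 5.76e-3 / (pi / 4)   # slope chosen so that CX = XX_{pi/4} costs ~5.76e-3
b = 1.909e-3             # affine offset, absorbing local-gate costs

def circuit_infidelity(strengths):
    # Additive model: total infidelity is the sum of per-gate infidelities.
    return sum(m * alpha + b for alpha in strengths)

baseline = circuit_infidelity([pi / 4] * 3)   # synthesis (1): three CX gates
for label, strengths in [
    ("(2) three XX_{pi/8}", [pi / 8] * 3),
    ("(3) four XX_{pi/12}", [pi / 12] * 4),
]:
    ratio = circuit_infidelity(strengths) / baseline
    print(label, f"-> {100 * ratio:.0f}% of baseline infidelity")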
We don't intend circuit synthesis for QFTs to be a "killer app", but rather as evidence that these methods are not limited to random inputs.

Related literature

We give a non-exhaustive survey of existing ideas.

[7] Shende, Markov, and Bullock showed how to synthesize optimal-depth circuits for CX-based gate sets.

[8] Cross et al. extended that to synthesize optimal-infidelity circuits for CX-based gate sets. Our method constructs circuits out of the same building blocks, but our method for how to arrange those blocks is different. This difference in synthesis strategy gives us optimal circuits, whereas they often miss by a constant offset.

[11] Peterson et al. showed how to detect when a two-qubit unitary operation admits expression using a given circuit type with sufficiently many freely ranging local operations. This gives rise to a method for analyzing the optimality of a synthesis strategy, but it does not show how to perform the actual synthesis.

[12] Earnest, Tornow, and Egger have shown how to produce the entire family of XX-type interactions from a particular pulse-level implementation of CX, which then permits the use of a straightforward synthesis method. Their method does not extend to other hardware implementations of CX, including the implementation used by the IBM group to achieve quantum volume 64 [13], which does not lend itself to noncalibrated scaling.

[14] Huang et al. have recently shown how to synthesize (optimal) circuits for a gateset containing √ISWAP, a particular "fractional ISWAP". Despite surface similarities, our results depart substantially: √ISWAP is not of XX-type; they work with the fixed gate √ISWAP as opposed to an unknown family of fractional applications; and they consider circuits of depth at most 3.

[6] The NuOp package uses numerical search to uncover which fractional interactions might be valuable to include in a native gate set put to various specific purposes. This has substantial overlap with our discussion of gate set optimization, but it does not solve the synthesis problem: numerical search is both non-optimal and two orders of magnitude slower than our direct synthesis.

The case of one qubit

To give a sense of our methods and results, let us analyze the analogous problem for one-qubit unitaries: decomposition into a fixed set of fractional X-rotations and unconstrained Z-rotations. The fixed X-rotation most typically available is X_{π/2}, and an X_{π/2}-based circuit can be synthesized for a unitary U through "Euler ZYZ decomposition". Namely, there are angle values θ, φ, and λ, easily calculable by diagonalizing U Uᵀ, which satisfy U = Z_φ · X_{π/2} · Z_θ · X_{π/2} · Z_λ up to global phase. Since U can freely range, the right-hand side of this equation gives a universally programmable quantum circuit. A downside to this circuit is that the operational cost of U is always that of a pair of X_{π/2} gates, even if U itself is a "small" rotation of the Bloch sphere. (One can set aside special cases when θ is zero or π/2, but these are probability-zero events in common measures.)

For circuits based on other choices of X-rotation angles, such as Z_ι · X_ψ · Z_{ι′} · X_{ψ′} · Z_{ι″}, one performs some mathematical analysis to discern the limited set of synthesizable operations Y_θ, ultimately arriving at the critical relationship

cos θ = cos ψ · cos ψ′ − cos ι′ · sin ψ · sin ψ′,

and the remaining parameters ι, ι″ can be explicitly determined by inspecting complex phases. Varying ι, ι′, and ι″, this trigonometric equation admits a solution precisely when θ satisfies

|ψ − ψ′| ≤ θ ≤ min(ψ + ψ′, 2π − ψ − ψ′).

Let us refer to this interval as I_{ψ,ψ′}.
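The closed form of this interval can be read off from the critical relationship above: as ι′ sweeps [0, π], cos θ sweeps between cos(ψ + ψ′) and cos(ψ − ψ′). The following sketch records this reading (our own, not a formula quoted from elsewhere) and spot-checks it numerically.

from math import acos, cos, pi, sin

def theta_interval(psi, psi2):
    # Achievable middle angles theta in [0, pi] for the pair X_psi, X_psi2.
    return abs(psi - psi2), min(psi + psi2, 2 * pi - psi - psi2)

def theta_of(psi, psi2, iota2):
    # The critical relationship, solved for theta.
    return acos(cos(psi) * cos(psi2) - cos(iota2) * sin(psi) * sin(psi2))

# Spot check for I_{pi/2, pi/3}: iota' = pi and 0 hit the two endpoints.
lo, hi = theta_interval(pi / 2, pi / 3)
assert abs(theta_of(pi / 2, pi / 3, pi) - lo) < 1e-12
assert abs(theta_of(pi / 2, pi / 3, 0.0) - hi) < 1e-12
print(f"I_{{pi/2, pi/3}} = [{lo:.4f}, {hi:.4f}]")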
In the same manner, a longer sequence of interactions X_{ψ₁}, …, X_{ψₙ} interleaved with Z-rotations gives rise to a corresponding interval I_{ψ₁,…,ψₙ} of achievable values of θ. Suppose that any given gate X_ψ, with ψ ∈ [0, π], can be made available in an experimental setting with infidelity (2ψ/π)·m + b for some error model parameters m and b, and at a fixed calibration cost per gate. We seek a small set of gates {X_{ψ₁}, …, X_{ψₙ}} so that the intervals constructed above cover the possible values of θ so as to minimize the expected infidelity cost of a given operation. For instance, Figure 3 shows the relevant intervals for the gate set {X_{π/2}, X_{π/3}}. Several aspects of this goal can also be understood with additional nuance:

"Expected": The distribution of operations U to be synthesized will affect the relative importance of the various choices of ψ. A safe assumption is that U is drawn according to the Haar distribution, in which case the distribution of angle values θ is given by p(θ)dθ = ½ sin(θ)dθ.

"Cost": In addition to the operational cost of a synthesized circuit (i.e., the cost from gate applications), one can also incorporate a cost stemming from synthesizing some θ′ rather than the requested θ. There are then some circumstances where it is profitable to deliberately missynthesize Y_θ as Y_{θ′}, provided the difference between θ and θ′ is small and the difference in operational cost between the two circuits is large. Average gate infidelity gives a popular embodiment of this idea, where the fidelity of two one-qubit operations is given by the formula F(U, V) = (|tr(U†V)|² + 2)/6. Against this yardstick, the θ′ ∈ I_{ψ,ψ′} which gives the best approximation to a θ ∉ I_{ψ,ψ′} occurs at one of the interval endpoints.

"Given operation": Rather than synthesizing the operation U requested, the compiler can choose to inject a reversible logic operation R and its inverse R⁻¹ into the program, synthesizing the composite U · R and either commuting R⁻¹ forward through the circuit or absorbing its effect into software. This option can be used to further shape the expected distribution of inputs. For single-qubit operations, a typical choice of R is the classical logic gate X_π, which has the effect of trading X_θ for X_{π−θ}.

Figure 3: The optimal synthesis regions for the one-qubit gate set {X_{π/2}, X_{π/3}, Z_cts}. The interval being covered is the set of angles [0, π] appearing as the middle parameter in a ZYZ-decomposition of a generic U ∈ PU(2).

Considering only exact synthesis for now, we compute the following expected (i.e., Haar-averaged) average gate infidelities for various gate sets:

{X_{π/2}}: 2m + 2b, the standard decomposition, used as a baseline.

{X_cts}: In the continuous limit with all gates X_θ available, the cost becomes m + b, an improvement of 50% over the baseline.

These values can be further improved by considering approximate synthesis, mirrored synthesis, or both.

Outline

Our analysis of the two-qubit case follows along the same lines as above, and in the same order.

Section 2: Generalizing Euler decomposition, we give a lightning review of KAK decomposition as specialized to two-qubit unitary operations.

Section 3: We describe a more detailed plan of attack on the two-qubit problem, outlining the steps in the proofs to come.

Section 4: Generalizing the interval I_{ψ,ψ′}, we give a compact description of which two-qubit gates are accessible to a circuit built out of a fixed sequence of XX-type interactions with one-qubit operations interleaved (Theorem 4.1). This leverages previous work of Peterson et al.
[11]: it detects when a two-qubit operation admits synthesis as a circuit of a certain type, but it does not indicate how to produce the circuit.

Section 5: Generalizing the formula relating cos θ and cos ι, we single out a method for choosing local circuit parameters which are simple to analyze (Theorem 5.3). We then compare with Section 4 and prove that each of these restricted circuits nonetheless exhausts the space of possibilities (Theorem 5.5).

Section 6: Generalizing the discussion around cost, we provide an efficient method to find the best approximation within a given circuit family (Theorem 6.10), and we couple it to the preceding results to produce the promised efficient synthesis method (Procedure 6.1, Procedure 6.8). We give a small example of the effectiveness of these techniques as applied to a random operator in Figure 1.

Conventions

We use the following abbreviations throughout:

2 Résumé on two-qubit unitaries and the monodromy map

We briefly recall the theory of Cartan decompositions as it applies to two-qubit unitary operations and its role in circuit synthesis.

Lemma 2.1 ([15], [16], [17], [11]). Let CAN denote the following two-qubit gate: CAN(a₁, a₂, a₃) := exp(−i(a₁ X⊗X + a₂ Y⊗Y + a₃ Z⊗Z)). Any two-qubit unitary operation U ∈ PU(4) can be written as U = L · CAN(a₁, a₂, a₃) · L′, where L, L′ ∈ PU(2)×2 are local gates and a₁, a₂, a₃ are (underdetermined) real parameters.

Definition 2.2 ("Canonical decomposition", cf. [11]). In Lemma 2.1, there is a unique triple (a₁, a₂, a₃) satisfying a₁ ≥ a₂ ≥ a₃ ≥ 0, π/2 ≥ a₁ + a₂, and one of either a₃ > 0 or a₁ ≤ π/4. Such a triple is called a positive canonical coordinate, and we denote the space of such as A_{C₂}. This unicity determines a function Π : PU(4) → A_{C₂}, called the monodromy map. Away from the plane a₃ = 0, this function is continuous.

Example 2.3. Here are the positive canonical triples for some familiar gates; for instance, Π(CX) = (π/4, 0, 0).

Generalizing the last example, the positive canonical triple for any controlled unitary gate has the form (a₁, 0, 0); we say that such an operation is of XX-type, and we abbreviate such gates to XX_a := CAN(a, 0, 0). Specifically, the fractional gate CX^α is of XX-type, with coordinate Π(CX^α) = (α · π/4, 0, 0), so that CX^α is locally equivalent to XX_{α·π/4}. From this perspective, the varying coordinate measures interaction duration or interaction strength, so that smaller values give rise to less entanglement.

For us this apparatus has two main uses, captured in the following pair of results:

Theorem 2.5 ([11]; [4]). Let S, S′ ⊆ PU(4) be two sets of two-qubit operations whose images Π(S), Π(S′) ⊆ A_{C₂} through Π are polytopes (e.g., a set of isolated points). The image through Π of the set of products {V · L · V′ : V ∈ S, L a local gate, V′ ∈ S′} is then also a polytope. Given explicit descriptions of the input polytopes as families of linear inequalities, the output polytope can also be so described.
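As a numerical illustration of these coordinates, the following sketch verifies that CX is of XX-type with Π(CX) = (π/4, 0, 0) by exhibiting explicit local gates. The particular witnesses below were worked out by hand for this illustration and are one choice among many.

import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.diag([1, -1j])                                 # S-dagger

XX_quarter = expm(-1j * (np.pi / 4) * np.kron(X, X))    # CAN(pi/4, 0, 0)
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=complex)

# Hand-derived witnesses: CX = e^{i pi/4} (Sdg H (x) H Sdg H) XX (H (x) I).
left = np.kron(Sdg @ H, H @ Sdg @ H)
right = np.kron(H, I2)
assert np.allclose(np.exp(1j * np.pi / 4) * left @ XX_quarter @ right, CX)
print("CX is locally equivalent to XX_{pi/4}, so Pi(CX) = (pi/4, 0, 0)")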
Our work will lead us directly into considering families of two-qubit gates and their parametrizations, so we introduce some attendant language.

Definition 2.6. A gate set is any collection of two-qubit unitaries; typically we will consider gate sets which are made up of finitely many controlled unitaries. For a gate set S, an S-circuit is a finite sequence of members of S and local gates. The operation which it enacts is given by the product of the sequence elements. A circuit shape is a circuit-valued function C(θ) = L_{n+1} · Sₙ · Lₙ ⋯ S₁ · L₁, where each Lⱼ is a parametrized local operator and each Sⱼ ∈ S is fixed. It can be convenient to place further restrictions on the Lⱼ (e.g., that they consist only of Z-rotations), but absent explicit mention we take each Lⱼ to surject onto PU(2)×2. In this surjective case, the sequence (S₁, …, Sₙ) determines the image of C, and it follows from Theorem 2.5 that the image of Π ∘ C in A_{C₂} is a polytope, called the circuit polytope of C (or of (S₁, …, Sₙ)). In the case that S consists of gates of XX-type, locally surjective circuits can be further identified with the underlying sequence of interaction strengths (α₁, …, αₙ) with Sⱼ ≡ XX_{αⱼ}.

Remark 2.7. The coordinate system given in Definition 2.2 is not unique: a similar theorem holds for any choice of "Weyl alcove" in pu₄. When g is the Lie algebra of a simply connected Lie group (e.g., su₄), each Weyl alcove is related to every other by a discrete set of linear transformations including reflections and shears. Without the simply-connected hypothesis (e.g., pu₄), they are related by linear transformations and "scissors congruence". Our choice of coordinate system differs from that used in previous work of Peterson et al. [11] by a nontrivial scissors congruence, effectively replacing the condition stated there.

Remark 2.8 (cf. Remark 4.5). Later, it will be convenient for us to consider a variant of positive canonical triples which are not required to be sorted. Unsorted triples (a₁, a₂, a₃) which become positive canonical triples upon sorting are those which satisfy 0 ≤ aⱼ and aⱼ + aₖ ≤ π/2 for all choices of j and k. The set of such triples still gives a convex polytope.

Suppose that a circuit C modeling a two-qubit operator U is given. We can produce from it a sequence of truncations Cⱼ that retain steps 1 through j. Each Cⱼ is also a circuit modeling some other unitary operator Uⱼ, and if C is optimal for circuits modeling U against some well-behaved cost function (e.g., operation count), then each Cⱼ will be so optimal for Uⱼ. The images pⱼ = Π(Uⱼ) ∈ A_{C₂} of these intermediate operators then describe a path through the Weyl alcove, where the j-th step in the path belongs to the region Pⱼ of operations whose optimal circuits take the form of Cⱼ. Since our goal is to construct C, we might instead begin by constructing the path (pⱼ)ⱼ, subject to the two constraints:

1. Each pⱼ belongs to the region Pⱼ.

2. The hop from pⱼ to pⱼ₊₁ is given by some nice circuit.

In order to understand the first constraint, we give a compact description of Pⱼ by way of describing the circuit polytope associated to an arbitrary sequence of XX-type interactions. We call this the global theorem (Theorem 4.1), since it describes the large-scale structure of the problem and does not reference the individual point pⱼ. Though our main tool here is Theorem 2.5, for a generic sequence of interactions it can only guarantee an exponential-sized family of convex bodies, themselves each of increasing facet complexity. It is a special feature of interactions of XX-type that the associated circuit polytopes have a fixed number of convex bodies, each of fixed complexity, independent of the sequence length.

To understand the second constraint, we choose a particular "nice circuit" and analyze the effect under Π of appending such a circuit to a canonical gate (Lemma 5.1), resulting in a family of constraints we call "interference inequalities" (Theorem 5.3). This, too, is specific to our case: even for interactions of XX-type, not all choices of unit circuit have a discernible image under Π, never mind a polytope. We complete the program by linking these two together in the local theorem (Theorem 5.5): we show that for any pⱼ₊₁ ∈ Pⱼ₊₁, we can always find a pⱼ ∈ Pⱼ linked to pⱼ₊₁ by one of these simple circuits.
This argument can then be reorganized into a constructive, efficient synthesis routine (Procedure 6.1). Additionally, we show how to select a point p′ ∈ Pⱼ which is the best approximation by average gate infidelity to p = Π(U) (Theorem 6.10).

4 The global theorem

One of our overall goals is to describe the set of positive canonical triples whose optimal circuit implementation uses a sequence of interaction strengths (α₁, …, αₙ). This can be accomplished by describing those positive canonical triples which admit any such circuit implementation, even if suboptimal. Optimality can then be enforced by taking a complement against positive canonical triples which admit superior circuit implementations. In this section, we accomplish this goal, summarized in the following Theorem:

Theorem 4.1. Let (αⱼ)ⱼ be a sequence of interaction strengths, and let (a₁, a₂, a₃) be a positive canonical coordinate. The canonical operator CAN(a₁, a₂, a₃) admits a presentation as a circuit of the form L_{n+1} · XX_{αₙ} · Lₙ ⋯ XX_{α₁} · L₁, where the Lⱼ are local operators, if and only if either of the following two families of linear inequalities is satisfied:

We respectively refer to the first, second, and third inequalities in each family as the strength, slant, and frustrum bounds.

Important Remark. From a physical perspective, the circuit polytope ought to be invariant under injecting extra zero-strength interactions into the defining sequence of interaction strengths. Accordingly, we always treat expressions like "minₖ (α₊ − αₖ − α′)" as if the sequence were padded by arbitrarily many zero entries.

Proof of Theorem 4.1. For the base case, note that the empty list of interaction strengths yields the polytope {(0, 0, 0)}, which agrees with the set of circuits locally equivalent to the identity interaction. Suppose then that we have established the claim for a sequence of interaction strengths (α₁, …, αₙ), and we would like to establish the claim for (α₁, …, αₙ, β) for some new interaction strength β. By allowing the (n + 1) different strengths to range, we note the region in the claim is naturally expressed as a polytope in (n + 1) + 3 dimensions. In fact, we can reduce it to a certain 6-dimensional polytope as follows: writing α′ and α″ respectively for the largest and second-largest elements in the hypothesized sequence of interaction strengths, we may rewrite the inequality families above
Projection has the effect of introducing an existential quantifier into the above description: a point belongs to the projection of a polytope exactly when it is possible to extend the projected point by the discarded coordinates so that it satisfies the original constraints. This trades the actual data housed in the lost coordinates-which may be complicated to the point of distraction-for the mere predicate that such data exists. In our case, we seek to project away the coordinates (a 1 , a 2 , a 3 ), which leaves only constraints on (b 1 , b 2 , b 3 ), given in terms of (α + , α , α , β), ensuring that a prefix circuit of the indicated type exists, without actually naming it. To compute this projection, we apply Fourier-Motzkin elimination to project away the remaining coordinates and eliminate redundancies in the resulting inequality set. These reduced inequality sets have the following form: where we have collected the inequalities which give communal upper bounds into single expressions using "min". Notationally absorbing β into the sequence of interaction strengths completes the proof. See check_main_xx_theorem in monodromy [4] for an executable proof. Remark 4.4. Theorem 4.1 is manifestly invariant under permutation of the interaction strengths. Remark 4.5. By dropping the assumption that the entries of positive canonical triples are ordered descending (as in Remark 2.8), we can rewrite the above inequality families in a manner that is more pleasingly symmetric. For example, the first 11 family is rewritten as: We note that we have won these pleasing formulas by losing convexity: the "min"s appearing in the lower bounds encode disjunctions of linear sentences rather than conjunctions, so we see merely a non-convex union of these convex polytopes. 11 The second is similar, but less pleasing to the eye. This causes the slant and frustrum inequalities to degenerate, which recovers a theorem of Zhang et al. as a special case. The local theorem In this section, we study the problem of appending a single new XX interaction strength β to a specific circuit formed from a sequence of strengths (α 1 , . . . , α n ). Note that Theorem 4.1 gives us an understanding of the "global" effect of appending XX β , where the interaction strengths are fixed but the circuit is allowed to range. Note also that if we are able to achieve such a local understanding, we would then like to use it in reverse: given a point p n+1 which Theorem 4.1 guarantees to be modelable using a circuit with strengths (α 1 , . . . , α n , β), we would like to guarantee the existence of-and algorithmically identify!-a point p n which is modelable by (α 1 , . . . , α n ) and for which p n+1 is reachable by appending XX β and some local gates. Excepting the caveat about algorithmic identification, this can be accomplished directly using nothing more than the methods of the monodromy polytope. However, because we are interested in circuit construction, we restrict what sorts of circuits we are willing to append to those of the particularly simple form given in Lemma 5.1. In trade, the method of the monodromy polytope no longer directly applies. We show in Theorem 5.3 the "forward" direction of the strategy described above, then in Theorem 5.5 the "reverse" direction, culminating in the recursive step in a synthesis procedure whose full description we defer to Section 6. First, however, we introduce the simplified circuit which we will consider. 
Remark 4.5. By dropping the assumption that the entries of positive canonical triples are ordered descending (as in Remark 2.8), we can rewrite the above inequality families in a manner that is more pleasingly symmetric. For example, the first family can be so rewritten (the second is similar, but less pleasing to the eye). We note that we have won these pleasing formulas by losing convexity: the "min"s appearing in the lower bounds encode disjunctions of linear sentences rather than conjunctions, so we see merely a non-convex union of these convex polytopes. This causes the slant and frustrum inequalities to degenerate, which recovers a theorem of Zhang et al. as a special case.

5 The local theorem

In this section, we study the problem of appending a single new XX interaction strength β to a specific circuit formed from a sequence of strengths (α₁, …, αₙ). Note that Theorem 4.1 gives us an understanding of the "global" effect of appending XX_β, where the interaction strengths are fixed but the circuit is allowed to range. Note also that if we are able to achieve such a local understanding, we would then like to use it in reverse: given a point pₙ₊₁ which Theorem 4.1 guarantees to be modelable using a circuit with strengths (α₁, …, αₙ, β), we would like to guarantee the existence of (and algorithmically identify!) a point pₙ which is modelable by (α₁, …, αₙ) and for which pₙ₊₁ is reachable by appending XX_β and some local gates. Excepting the caveat about algorithmic identification, this can be accomplished directly using nothing more than the methods of the monodromy polytope. However, because we are interested in circuit construction, we restrict what sorts of circuits we are willing to append to those of the particularly simple form given in Lemma 5.1. In trade, the method of the monodromy polytope no longer directly applies. We show in Theorem 5.3 the "forward" direction of the strategy described above, then in Theorem 5.5 the "reverse" direction, culminating in the recursive step in a synthesis procedure whose full description we defer to Section 6. First, however, we introduce the simplified circuit which we will consider.

Lemma 5.1. There exist values r, s, t, u, b₁, and b₂ so that the operator U may be equivalently expressed as

Proof. The vector subspace forms a Lie subalgebra of pu₄, and the subspaces give rise to a KAK decomposition yielding the desired result.

Next, we note that this choice of simple local gates gives rise to the desired explicit expressions for the gate parameters.

Lemma 5.2. The outer parameters r, s, t, and u can then be deduced from a linear system with input the phases of the top half of the left-hand matrix.

Proof. The trigonometric equalities follow by equating the square-norms of the matrix entries in Lemma 5.1. The (1, 1) and (3, 3) entries respectively yield expressions where we have used the absolute values to suppress some of the phases. We then apply the identity |x + re^{iθ}|² = x² + r² + 2xr cos θ and isolate d and e to deduce the statement. The linear system then arises by inspecting the phases of any nondegenerate quadruple of entries. For example, the nonzero entries in the top half, read left-to-right, have respective phases forming a collection of linear combinations of full rank.

We can interpret the constraints imposed by these expressions on the positive canonical triples in terms of β.

Theorem 5.3. The desired local equivalence holds if and only if the following inequalities hold:

Moreover, the local gates witnessing the equivalence can be taken to be Z-rotations. (It is extremely unusual that the image under Π of a circuit with constrained local gates is again a polytope. This is, perhaps, the most important ingredient in our approach.)

Proof. Starting from Lemma 5.2, there exist solutions to d and e exactly when the following inequalities are met:

Using the inequalities a₁ + a₂ ≤ π/2, a₁ ≥ a₂, and 0 ≤ β ≤ π/4, we see that both right-hand quantities are always positive, hence the right-hand absolute value can be suppressed. The left-hand absolute value can be equivalently expressed as a pair of inequalities. Rewriting the binomials as cosines of differences and sums and then converting squared cosines to double-angle cosines, and finally using the piecewise monotonicity and reflection invariance of cosine, as well as the bounds on the inputs, we deduce inequalities on the angles; linear rearrangement yields the claimed inequality family.

Example 5.4. In Figure 5, we give a visualization of the regions accessible via Theorem 5.3.

Theorem 5.5 (cf. Figure 6). Given a positive canonical triple (b₁, b₂, b₃) satisfying the conditions of Theorem 4.1 for a sequence of interaction strengths (α₁, …, αₙ, β), there always exists a positive canonical triple (a₁, a₂, a₃) satisfying the conditions of Theorem 4.1 for the sequence (α₁, …, αₙ) and for which there are Weyl reflections w, w′ so that the following is solvable:

The outer gates witnessing the local equivalence can be taken to be Z-rotations.

Proof. Any canonical gate CAN(b₁, b₂, b₃) can be written as CAN(b₁, b₂, 0) · CAN(0, 0, b₃). Applying Theorem 5.3 to the right factor gives a rewriting, valid under certain conditions on a₁, a₂, b₁, b₂, and β. Since local Z-rotations commute with canonical gates of the form CAN(0, 0, b₃), we may abbreviate this accordingly. Additionally, our choice to factor out b₃ is immaterial: there are Weyl reflections which permute the coordinates within a canonical triple, so by conjugating CAN(b₁, b₂, b₃) we can place any of the three values in the final slot. In short, we may appeal to Theorem 5.3, provided we fix one coordinate and potentially disorder the positive canonical triples. From here, our proof strategy is similar to that of Theorem 4.1.
Theorem 4.1 itself furnishes us with linear constraints on the spaces of triples (b₁, b₂, b₃), so that a triple satisfies the constraints if and only if it can be realized as the positive canonical coordinate of an XX-circuit with interaction strengths (α₁, …, αₙ, β). Rather than working with ordered triples (a₁, a₂, a₃), we instead consider unordered triples (a_h, a_ℓ, a_f), to be referred to as the "high", "low", and "fixed" coordinates, as in Remark 4.5. Then, we interrelate the a- and b-coordinates:

• We pick one of the coordinates of (b₁, b₂, b₃) to serve as the "fixed" coordinate (and take the union over such choices), and we set a_f = b_f.

• On a_h and a_ℓ, we impose the constraint a_h ≥ a_ℓ. Similarly, of the remaining coordinates in (b₁, b₂, b₃), we pick b_h to be the larger and b_ℓ to be the smaller.

Let us call the resulting (nonconvex) polytope P. Points in P capture the following interrelated pieces of data:

• A canonical coordinate (b₁, b₂, b₃) which admits expression as an XX-circuit with interaction strengths (α₁, …, αₙ, β).

• A choice of value to share among the a- and b-coordinates.

• The condition that, among the unshared coordinates, there exists a circuit of the form in Lemma 5.1 relating them. (As in the first two bullets, the polytope does not record the literal data of such a circuit, only the predicate that one exists.)

By projecting away (a_h, a_ℓ, a_f) from P, we produce the polytope of positive canonical triples (b₁, b₂, b₃) which can be expressed as XX-circuits with the specified interaction strengths, together with the predicate constraint that the last step in the circuit decomposition can be written in the form of the Theorem statement. This is a subpolytope of that of Theorem 4.1, which merely tracks positive canonical triples which can be expressed as XX-circuits with the specified interaction strengths, without the constraint on the final local operator. Appealing again to a computer algebra system, we find that these two polytopes are equal. See regenerate_xx_solution_polytopes in monodromy [4] for an executable proof.

Remark 5.6. Naively specified, the polytope P in the proof of Theorem 5.5 has many convex components: the two convex regions of a- and b-coordinates each contribute factors of 2, the choice of which coordinate to fix contributes a factor of 3, and the choice of which slant and frustrum bounds apply to the disordered a-coordinates contributes factors of 2 and 3. However, the projection of P onto the b-coordinates, which we used to conclude the theorem, can be shown to have only four regions:

• The choice of convex region of b-coordinates is free, but one then uses the same choice for a-coordinates.

• The fixed coordinate a_f is taken to be either b₁ or b₃.

• For the unreflected (resp., reflected) convex region of b-coordinates, the slant (resp., strength) inequality is imposed either on a_f or a_h, depending on whether a_f = b₁ or a_f = b₃.

• The frustrum bound is always imposed on a_ℓ.

The inequalities describing these regions are given in Figure 20.

Remark 5.7. It is possible for the technophobic reader to rearrange the proofs of Theorem 4.1 and Theorem 5.5 so as to avoid computer algebra systems. First, break Theorem 4.1 into a forward implication, that the positive canonical triple associated to an XX-circuit satisfies the indicated inequality set, and the reverse implication.
The forward implication can be checked by hand, using a judiciously chosen subset of inequalities from the monodromy polytope; the reverse implication is much harder from this point of view, so we set it aside for a moment. Now we turn to Theorem 5.5. Its proof also relies on a computer algebra system, but we may severely limit the amount of work by inspecting only the convex summands described in Remark 5.6, which is then small enough to accomplish manually. With only the forward implication of Theorem 4.1 established, the proof of Theorem 5.5 instead shows that if the b-coordinate belongs to the polytope named by Theorem 4.1 for (α₁, …, αₙ, β), then there exists an a-coordinate in the polytope named by Theorem 4.1 for (α₁, …, αₙ) which is related to the b-coordinate by a particular single-step XX-circuit. Following the induction described in Procedure 6.1 then yields the missing reverse implication of Theorem 4.1, which in turn yields the full strength of Theorem 5.5.

6 Optimal synthesis

We now put the pieces together to form an optimal synthesis routine. The actual synthesis process is now straightforward, given in Procedure 6.1, but it is trickier to pin down exactly what is meant by "optimal". For instance, the notion of optimality considered by Zhang et al. [5] is to minimize two-qubit operation count; but in a larger gateset, where different gates may have uneven performance impact, optimizing count alone may not optimize performance. Relatedly, if performance is the true goal and the performance penalty incurred for using gates is high, it may be preferable to synthesize a circuit modeling some canonical triple a′ ≠ a which requires fewer gates, trading the performance hit due to the mismatch for the performance gain of dropping some of the gates.

Let us begin with the synthesis procedure itself:

Procedure 6.1 (cf. Figure 6). The existence claim of Theorem 5.5 can be promoted into an algorithmically effective synthesis routine. Given a sequence of interaction strengths (α₁, …, αₙ, β) and a positive canonical triple (b₁, b₂, b₃) which belongs to the associated circuit polytope, the polytope P from the proof of Theorem 5.5 can then be specialized so that only a_h and a_ℓ are free variables. (We report these inequality sets in Figure 21.) The content of Theorem 5.5 is that this specialization is always nonempty, so we may find a point (a_h, a_ℓ, a_f) in it (e.g., by calculating line-line intersections until we produce a vertex). This pair of points can then be fed to Lemma 5.2, which produces the angle values for the Z-rotations. This proceeds recursively until the sequence of interaction strengths is exhausted.

Figure 7: An XX-circuit with strength sequence (π/8, π/8, π/12, π/12, π/12, π/12). The various colored regions are the circuit polyhedra for truncations of this sequence of interaction strengths.

Example 6.2. In Figure 7, we include a visualization of the intermediate steps produced when using Procedure 6.1 to synthesize an XX-circuit for a certain canonical point against a particular sequence of interaction strengths.

To progress, we need a quantitative definition of optimality.

Definition 6.3. Given a target unitary U, a gate set S, and a cost function C_S which consumes U and an S-circuit C, the approximate synthesis task is to produce an S-circuit C minimizing C_S(U, C).
Cost functions can enjoy a variety of pleasant properties:

Separable: For a circuit template C(θ), C_S(U, C) can be written as a sum C′(U, C(θ)) + C″_S(C, θ), where C′ depends only on the unitary which the circuit models and C″ depends only on the circuit and parameters, but not on its relationship to U.

Locally invariant: C″_S(C, θ) = C″_S(C) is invariant under choice of parameters θ for local gates in the circuit C.

Monotonic: Suppose that C_S is a separable cost function. If the circuit C is contained as a subcircuit in D, then C″_S(C, θ) ≤ C″_S(D, (θ, φ)).

Non-approximating: The separable cost function C_S has C′ given by C′(U, C(θ)) = 0 when U lies in the image of C and +∞ otherwise.

These features are chosen both because they feed into an efficient algorithm for optimal synthesis and because they are satisfied in the following guiding example:

Example 6.4. The average infidelity of two gates U and V is I(U, V) = 1 − (|tr(U†V)|² + 4)/20. For S a finite collection of XX-interactions with costs c : S → R, we define a separable, locally invariant, monotonic cost function by C_S(U, C(θ)) = I(U, C(θ)) + Σⱼ c(Sⱼ).

Average gate infidelity satisfies a few pleasant generic properties, but it is also tightly connected to the theory of KAK decompositions. We record these properties below.

Remark 6.5. Average infidelity detects gate equivalence, in the sense that I(U, V) = 0 if and only if U = V. It is also symmetric: I(U, V) = I(V, U). However, it fails to satisfy the triangle inequality, even when U and V belong to the canonical family, hence does not give a metric. It satisfies compositionality only to first order.

Lemma 6.6 ([8]). Let U = CAN(a₁, a₂, a₃) and V = CAN(b₁, b₂, b₃) be two canonical gates with parameter differences δⱼ = (aⱼ − bⱼ). Their average gate infidelity is given by I(U, V) = (4/5)·(1 − Πⱼ cos²δⱼ − Πⱼ sin²δⱼ).

Lemma 6.7 ([18]). Suppose that C₁, C₂ are fixed canonical gates and that L₁, L′₁ are fixed local gates. Letting L₂ and L′₂ range over all local gates, the value I(L₁C₁L′₁, L₂C₂L′₂) is minimized when taking L₂ = L₁ and L′₂ = L′₁.

We now describe an optimal synthesis procedure for a nice cost function:

Procedure 6.8. Let C_S be a separable, locally invariant, and monotonic cost function. Let S be a finite gate set of XX-type interactions, and consider the set of circuit templates given by interleaving unconstrained local gates into the words formed from S. Traverse these available circuit templates (i.e., the words in S) by ascending order of C″_S. (Using a priority queue, one can perform this traversal without enumerating all possible words beforehand.) For each such circuit template C, use Theorem 4.1 to calculate the circuit polytope Π(C). Calculate the point p ∈ Π(C) which optimizes C′(U, CAN(p)). If the total cost C_S is the best seen so far, retain C and p. Continue to traverse circuit templates until Π(U) ∈ Π(C), at which point C′ vanishes and the ordering of circuit templates guarantees that all future circuit templates will yield a worse cost. (Theorem 4.1 guarantees that this termination condition will eventually be met provided S contains any interaction XX_β with β ∈ (0, π/2).) Finally, apply Procedure 6.1 to synthesize a C-circuit for CAN(p), then apply Lemma 6.7 to produce U itself.

Remark 6.9. In Figure 8, we study the execution characteristics of Procedure 6.8 compared to those of blind numerical search. The implementation of our method is available in Qiskit's quantum_info subpackage as the class XXDecomposer [3].
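The traversal in Procedure 6.8 can be sketched as follows. The helpers circuit_polytope, best_point_in, infidelity, and gate_cost are hypothetical stand-ins for the Theorem 4.1 membership machinery, the nearest-point computation of Theorem 6.10, and the additive cost model; only the priority-queue search logic itself is spelled out here.

import heapq
from itertools import count

def optimal_template(target, strengths, gate_cost,
                     circuit_polytope, best_point_in, infidelity):
    tie = count()                       # tiebreaker: avoid comparing lists
    heap = [(0.0, next(tie), [])]       # (operational cost, _, word in S)
    best_total, best_word, best_point = float("inf"), None, None
    while heap:
        op_cost, _, word = heapq.heappop(heap)
        if op_cost >= best_total:       # monotonicity: nothing cheaper remains
            break
        p = best_point_in(circuit_polytope(word), target)
        total = op_cost + infidelity(p, target)
        if total < best_total:
            best_total, best_word, best_point = total, word, p
        if infidelity(p, target) == 0.0:
            break                       # exact synthesis reached
        for s in strengths:             # enqueue all one-gate extensions
            heapq.heappush(heap, (op_cost + gate_cost(s), next(tie), word + [s]))
    return best_total, best_word, best_point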
Remark 6.9. In Figure 8, we study the execution characteristics of Procedure 6.8 compared to those of blind numerical search. The implementation of our method is available in Qiskit's quantum_info subpackage as the class XXDecomposer [3]. Given a Haar-randomly chosen two-qubit unitary operator U, our numerical search procedure is to let numpy's generic optimizer explore the space of circuits of a particular depth, with the objective of minimizing the infidelity with U. If the optimizer cannot find a circuit with infidelity below some threshold, we retry with a circuit of the next larger depth. Altogether, this is similar to what is implemented in NuOp [6], among other compilation suites. The histograms reported in Figure 8 are the result of sampling over many such U, targeting either the gate set S = {XX_π/8} or S = {XX_π/12}.15

It remains to describe how to find the point p ∈ Π(C) which optimizes C_⊥(U, CAN(p)). For a non-approximating cost function, this can be probed directly: if Π(U) ∈ Π(C), then we take p = Π(U), and otherwise we reject Π(C) entirely. For the approximating cost function defined in Example 6.4, we use the following more elaborate result, established in the Appendix: on each face of the circuit polytope, the infidelity functional is extremized exactly at the nearest point in Euclidean distance. This result means that we can repurpose the standard procedure used to calculate the nearest point in Euclidean distance to instead find the best approximating canonical triple. Namely, to calculate the nearest point in Euclidean distance, project the point onto the affine subspaces spanned by each facet of the polytope (e.g., by solving a least-squares problem), retain those projections which belong to the polytope, and from that finite set select the point of minimum (infidelity) distance.

Remark 6.11. This is extremely unusual behavior for these two optimization problems and relies on the specific form of the polytopes appearing in Theorem 4.1. For contrast, consider the line passing through the origin with slope (π/4, π/50, π/50) and the off-body point (83π/400, 83π/400, 83π/400). The fidelity-nearest point appears after traveling for one unit of time, but the Euclidean-nearest point appears after traveling for ≈ 95% of a unit of time.

15 Neither our implementation of Procedure 6.8 nor our invocation of numpy is particularly clever. We expect that both distributions can be shifted left with further optimization of the implementations, but that the multiplicative difference will be at least as large between "optimal" implementations of each synthesis method.

Remark 6.12. Numerical experiment indicates that the nearest point under infidelity distance exactly agrees with the nearest point under Euclidean distance; i.e., the same critical point achieves the minimum value in both of these searches. However, this conjecture yields no algorithmic speedup when producing these minimizers, so we are not motivated to pursue it here.
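The projection step lends itself to a few lines of numpy. This sketch assumes each facet is handed over as an array of its vertices and that contains is a hypothetical membership test for the polytope; candidates are ranked by Euclidean distance here, standing in for the infidelity ranking of the text.

import numpy as np

def nearest_point_by_facets(facets, x, contains):
    # Project x onto the affine hull of each facet via least squares, keep the
    # projections that land inside the polytope, and pick the best candidate.
    candidates = []
    for verts in facets:                      # verts: array of facet vertices
        v0 = verts[0]
        span = (verts[1:] - v0).T             # columns span the facet directions
        t, *_ = np.linalg.lstsq(span, x - v0, rcond=None)
        proj = v0 + span @ t                  # Euclidean projection onto the hull
        if contains(proj):
            candidates.append(proj)
    return min(candidates, key=lambda p: np.linalg.norm(p - x))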
Gateset optimization and numerical experiment

In this section, we bring the theory of Section 6 to bear on deciding which native gates are worth bestowing on a device. Even if a device is physically capable of enacting some quantum operation, there is calibration overhead to making that operation available as a reliable user-facing gate. At the same time, the more high-fidelity native interactions are available, the more clever and adaptable a synthesis method (including ours) can be. Accordingly, we would like to find a small set of XX operations that optimizes certain objective functions which measure synthesis performance. The primary objective with which we will concern ourselves is expected cost:

Definition 7.1. For a two-qubit unitary U ∈ PU(4) and a native gate set S, let C_S(U) := min_C C_S(U, C) be a cost function as in Definition 6.3 (e.g., Example 6.4 or its non-approximating variant). The expected cost is defined as the Haar average of C_S(U) over U ∈ PU(4).

For XX-based gate sets S and for favorable cost functions, we now show how to compute this value exactly. Starting with the definition, we use separability and non-approximation to reduce to the case where U admits an exact model by C. By assuming S finite and C_S locally invariant, we learn that the integrand min_{U∈C} C_S(C) takes on finitely many values, supported by finitely many choices of C. By sorting the C compatibly with C_S(C), we may further reduce to a finite sum. Since C_S(C) is constant on each region, each summand is given by the reweighted Haar volume of the corresponding region. Since constant functions pull back from constant functions, we can also push these integrals forward along Π and compute them in A_C2. Altogether, this reduces the problem to calculating the Haar volume of the polytopes which appear in Theorem 4.1. A formula for Π_*μ_Haar which enables this was previously reported by Watts et al.:

Lemma 7.2 ([19], [20], [2]). The pushforward of the Haar measure is given by16,17

Π_* dμ_Haar = (384/π) ∏_{1≤j<k≤3} sin(2c_j + 2c_k) sin(2c_j − 2c_k).

Such trigonometric integrals over tetrahedra can be performed exactly. Altogether, this gives us quantitative means by which to study the effect of tuning the inputs to a parametric gate set, e.g., S(x) = {XX_π/4, XX_x}. A parametric choice of gate set requires a parametric cost function, and our parametric cost function of interest is as follows:

Definition 7.3. In our setting, we find it experimentally justified to assume an affine error model: we take XX_x to have fidelity cost mx + b for some experimentally determined values of m and b.18 From this, we build a separable, locally invariant, additive cost component.

16 The extra factor of 2 appearing in this formula comes from a different scaling of our coordinate systems.
17 This density function has a unique local maximum at (π/4, π/8, 0).
18 In one experiment, we measured π/4 · m ≈ 5.76 × 10^−3 and b ≈ 1.909 × 10^−3. This reported offset b incorporates the average infidelity cost of local post-rotations, so as to better model the total circuit execution cost while maintaining local invariance.

Figure 10: An optimal set of S-circuit polytopes covering A_C2 for S = {XX_π/4, XX_π/8}. There are six regions depicted: (π/8, π/8, π/8) in orange, (π/8, π/8, π/4) in yellow, (π/8, π/8, π/8, π/8) in green, (π/8, π/4, π/4) in blue, (π/8, π/8, π/8, π/4) in purple, and (π/4, π/4, π/4) in red. There are also six regions which have circuit depth at most two, hence they do not contribute volume and we suppress them from the picture.

Remark 7.4. The reader who would like to account, in the above framework, for the worst-case cost of the interleaved single-qubit operations can absorb that extra amount into the b parameter.

Example 7.5. Consider the parametric gate set S(x) = {XX_π/4, XX_x} with the cost function above; in Figure 9 we plot the expected infidelity as x varies, which at the endpoints degenerates to the gate set {XX_π/4}. The precise location of the optimum in the middle depends on the ratio m/b; for experimentally realistic error models like the one depicted here, it is located near π/8, achieving an expected infidelity of 1.62 × 10^−2. We observe also that the basin for this minimum is fairly wide, so that π/8 is a good choice for inclusion in a native gate set even if the error model varies somewhat over time or across a device.19 Finally, in Figure 10 we depict the optimal synthesis regions within the Weyl alcove for the gate set {XX_π/4, XX_π/8}.

19 Low-denominator rational multiples of π/4 are also easier to use in a randomized benchmarking scheme.
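Once the circuit templates are sorted by their (constant) costs and the Haar volumes of the polytope unions are known, the expected cost collapses to a telescoping sum. A minimal sketch, assuming the cumulative covered volumes are normalized to [0, 1]:

def expected_cost(templates):
    # templates: (cost, cumulative_volume) pairs sorted by ascending cost, where
    # cumulative_volume is the Haar volume of the union of all circuit polytopes
    # whose cost is at most this template's cost.
    total, seen = 0.0, 0.0
    for cost, covered in templates:
        total += cost * (covered - seen)  # reweighted Haar volume of the new region
        seen = covered
    return total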
Example 7.6. Consider next the gate set S = {XX_π/4, XX_x, XX_y} with the same cost function. In Figure 11, we display the expected infidelity of synthesizing an S-circuit for a Haar-randomly chosen unitary against both parameters x and y. The edges of this triangular figure degenerate to the case discussed in Example 7.5 along the lines x = π/4, y = 0, and x = y. As before, the precise location of the optimum in the middle depends on the ratio m/b, but for experimentally realistic error models like the one depicted here, it is located near (x, y) = (π/8, π/12), this time achieving an expected infidelity of 1.51 × 10^−2. Again, the basin is fairly wide and the minimum fairly independent of the value of m/b, so that (π/8, π/12) are good choices for inclusion in a native gate set even if the observed error model exhibits mild variation over time or across a device. In Figure 12 we depict the optimal synthesis regions within the Weyl alcove for the gate set {XX_π/4, XX_π/8, XX_π/12}.

Figure 12: An optimal set of S-circuit polytopes covering A_C2 for S = {XX_π/4, XX_π/8, XX_π/12}. The nineteen regions are too many to name explicitly, but their hues indicate an increasing cost from a minimum at I to a maximum at SWAP. There are also ten regions which have circuit depth at most two, hence they do not contribute volume and we suppress them from the picture.

Example 7.7. Taking these results for exact synthesis as inspiration, we can also explore effects introduced by approximate synthesis. Our results here cannot be so clean, because we lose access to our method for analytic calculation, but we can still perform Monte Carlo experiments to analyze the relationship between S_x = {CX, XX_x} and the expected infidelity. The plot in Figure 13 shares many of the same qualitative features as Figure 9 (e.g., the approximate position of the global minimum, and the non-concave kink near x = π/6), with an overall vertical shift coming from the approximation savings. The global optimum for approximate synthesis into S_x,y = {CX, XX_x, XX_y} is again near to the global optimum for exact synthesis, so we re-use the user-friendly value of (x, y) = (π/8, π/12) and depict in Figure 14 the frequencies with which these regions are used by approximate synthesis of Haar-random operations.

Remark 7.8. A canonical gate CAN(a_1, a_2, a_3) is given by a product where each factor in the product is a Weyl reflection of a single XX gate of the same parameter, and where factors are dropped when the relevant parameter vanishes. Under the assumption of an additive affine error model, this establishes a lower bound for how efficient we can expect our circuits to possibly be, as they are assembled from a more restrictive gate set. Comparing Example 7.5 and Example 7.6, we observe that there are rapidly diminishing returns to enlarging the native gate set. Specialized to the same error model as in the Examples, the performance lower bound argued above is (3/2 · π/4) · m + 3b, resulting in the table in Figure 15.

Remark 7.9. For a two-qubit unitary U, its mirror is the gate U · SWAP. The mirror of a canonical gate CAN(a_1, a_2, a_3) is again canonical, given by a piecewise formula. This formula shows that mirroring interchanges the regions of A_C2 with the most and least infidelity cost, suggesting that our technique may be particularly fruitful at reducing the cost of mirrorable gates.
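For the approximate-synthesis experiments of Example 7.7, the analytic route is unavailable and the expectation is estimated by Monte Carlo instead. A sketch, assuming a synthesis_cost oracle (e.g., wrapping Procedure 6.8); scipy's unitary_group draws Haar-random unitaries.

import numpy as np
from scipy.stats import unitary_group

def monte_carlo_expected_cost(synthesis_cost, gate_set, trials=10000):
    # Estimate E_U[C_S(U)] over Haar-random two-qubit unitaries in U(4).
    samples = [synthesis_cost(unitary_group.rvs(4), gate_set)
               for _ in range(trials)]
    return np.mean(samples), np.std(samples) / np.sqrt(trials)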
We summarize the numerical results in Figure 17, and we depict in Figure 16 the relative frequency of different circuit templates when synthesizing up to mirroring. The main points are that these two synthesis strategies are "compatible", in that mirroring can be used in tandem with fractional synthesis to effect a combined decrease in infidelity, and that the optimal choice of finite gateset extension is somewhat different between the two but not wildly so.

(1, 1, 1): The Lagrange multiplier constraints are ∂_1 I = ∂_2 I and ∂_2 I = ∂_3 I, which amount to a pair of trigonometric conditions. These equalities are analyzed similarly to the previous case. The first equation is satisfied either when both a and b represent the identity unitary or when δ_1 = δ_2, and the second equality is similarly dispatched to give δ_2 = δ_3. In each case, the critical points are seen to lie at the Euclidean projections onto the relevant planes.

Lemma A.7. The infidelity functional I|_a is extremized on the interior of the codimension-2 facets not coincident with the outer walls of the Weyl alcove exactly at the nearest point in Euclidean distance.

Proof. Again, we are tasked with solving a family of constrained optimization problems. This time, each nondegenerate pair of inner walls intersects at a line with tangent vector v, and we are looking to solve along the line for the condition ∇I|_a · v = 0. To parameterize the line, we select a vertex b ∈ A_C2 on it and set ℓ(t) = v·t + b. We break v (i.e., the choice of plane pair) into cases.

Lemma A.8. The infidelity functional I|_a is extremized on the interior of the codimension-2 facets coincident with the outer walls of the Weyl alcove exactly at the nearest point in Euclidean distance.

Proof. As in Lemma A.7, we intend to split over the slopes of the plane-plane intersections. Two of these cases are familiar: since the outer alcove wall b_3 ≥ 0 shares a normal with the frustum inequality of Theorem 4.1, the tangent vectors (−1, −1, 0) and (1, −1, 0) both reappear, and we have already dispatched them in the proof of Lemma A.7. The frustum inequality contributes one codimension-2 facet not covered by the above: its intersection with the wall a_2 ≥ a_3 yields a line with tangent vector (1, 0, 0), and the associated optimization problem factors into a sine term and a cosine term. The sine factor contributes the Euclidean critical point, and the cosine factor is independent of t.

The remaining cases correspond to "inner creases" in the Weyl-closed solid ∪_{w∈W} w · P, and they are treated quite differently. In each case, the strategy is to show that the facet is irrelevant (i.e., has no critical points) unless the outer alcove inequality is tight for the point a, and then to use that tightness to simplify the expression further. Our strategy for showing irrelevance is to show that, when a is not a member of an outer facet, ∇I|_a has a nonnegative inner product with the inward-facing normal of the codimension-2 facet considered as part of the boundary of the inner codimension-1 facet. Taking this as given, we would learn that the extremum would then always lie on the codimension-1 facet, so that we could avoid considering the codimension-2 facet. In fact, this strategy gives us a bit more: even without the assumption that a lies off of the outer wall, continuity would show that this conclusion still holds for extrema, since the assumption is only violated at limit points of open regions.
21 Thus, we can avoid investigating these codimension-2 facets directly.

21 Importantly, we are not arguing about critical points but about extrema. Critical points can manifest on a boundary via a sequence of points in the bulk which themselves are merely approximately critical points, without exactly being critical points. However, any such critical point cannot yield a more extreme value than the value achieved by the function on a sequence of values in the bulk which are extrema for the functional constrained to planes parallel to the outer facet.

For the case with tangent vector ending in (−1, 0): assuming a_1 > a_2, we would like to show that the relevant quantity is positive, where (b_1, b_1, 0) lies on the line and t satisfies the accompanying constraint.

B Inclusion-exclusion and incidence degeneracy

In uncovering our main results, it was invaluable to be able to calculate the volume of a nonconvex polytope. Not only did volume calculations play an outsized role in Section 7, they also underlie primitive operations. For instance, while containment of a polytope P within a convex polytope Q can be checked on vertices, this is not true of two generic polytopes; instead, assuming that P is of constant dimension, P ⊆ Q if and only if vol(P) = vol(P ∩ Q). For this reason, we found it imperative to have a robust and efficient method for volume calculation.

The process of volume calculation cleaves into two parts: reducing to the convex case, and computing the volume of convex components. Both steps admit several approaches: for instance, the former can be accomplished by (joint) triangulation, and the latter can be accomplished by determinant methods. However, it is difficult to come by implementations of these techniques which are open-source, permissively licensed, accurate / exact, and which operate in high dimension.22 In our setting, we can often get away with the following: for the second step, use the (somewhat computationally expensive) ability of a computer algebra system, such as lrs, to calculate the volume of a single convex polytope; and for the first step, use a variant of inclusion-exclusion. The naive application of inclusion-exclusion is described by

vol(∪_{j∈J} P_j) = Σ_{∅≠I⊆J} (−1)^{|I|+1} vol(P_I), where P_I := ∩_{j∈I} P_j.

The terms on the right-hand side are all volumes of convex bodies, hence are individually approachable, but there are 2^{|J|} such summands. These summands can be culled in two ways:

1. Terms with vanishing volume are downward-closed: If vol P_I = 0, then vol P_{I′∪I} = 0 for any I′.

2. Containment is downward-closed: If vol P_I = vol P_{j∪I}, then vol P_{I′∪I} = vol P_{I′∪j∪I} for any I′. For j ∉ I′ ∪ I, these pairs of values appear with opposite sign in the larger sum and cancel each other out.

It is simple to cull summands with the first observation: whenever we encounter a summand with vanishing volume, we can skip all of its descendants. The second observation is trickier: after encountering two pairs (j_1, I_1) and (j_2, I_2) which fit the hypothesis, it is possible to double-count a term as belonging to two canceling pairs. The following procedure accounts for this wrinkle.

We will maintain two "skip lists" of indices to ignore:

1. A skip of Type 1 corresponds to an intersection which vanishes exactly, and it is recorded by a single bitmask of the entries which populate I.

2. A skip of Type 2 corresponds to an intersection which cancels with one of its immediate descendants, and it is recorded by a bitmask of the entries which populate I as well as the index j of the descendant (which does not belong to I).
We traverse the possible depths of intersections, and at each depth we traverse the possible intersections at that depth. For each intersection, if it matches either skip list, we ignore it and continue to the next intersection at this depth. Otherwise, we compute the volume of this intersection. If the volume vanishes, we add this index to the Type 1 skip list, then continue as if we have done no work at this step. If the volume is equal to that of one of our immediate predecessors, we add to the Type 2 skip list its index and the extra intersection factor j which witnesses us as its child, then continue as if we have done no work at this step. Otherwise, we add the nonzero contribution to the running alternating sum with the appropriate sign. When we exhaust the possible intersections at this depth, if we have performed no work, we terminate the iteration altogether; otherwise, we proceed to the next depth.

Now, we double back to reintroduce the summands which we previously double-counted, which we formulate in a way that also avoids double-counting the double-countings. Traversing the Type 2 skip list in the order in which it was created, let us consider the t-th mask and toggle (I_t, j_t), as well as some intermediate s-th mask and toggle (I_s, j_s) with s < t and with j_t ∈ I_s. Double-counting occurs for this pair at an intersection I when the following are met:

1. The t-th mask matches: I_t ≤ I.
2. The t-th toggle is disabled: j_t ∉ I.
3. The s-th mask matches after the toggle is enabled: I_s ≤ I ∪ {j_t}.
4. For all earlier s′ < s, the s′-th mask does not include the t-th toggle and additionally does not match I.
5. For all later s < t′ < t, the t′-th mask does not match the toggle-on form I ∪ {j_t}.

Whenever these constraints are met, we reintroduce the summand at I to the running alternating sum. After iterating over all possible values of s and t, the running sum is the true alternating sum. For any s < t, the constraints on I described above are quite strong (and often even contradictory), so that iterating over the possible ways to satisfy these constraints, rather than iterating over I and checking satisfaction, frequently results in loops with few to no iterations. In one instance "in the wild", this strategy reduced a calculation from 2^14 − 1 ≈ 16,000 convex volume computations to a mere 27 volume computations.
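A sketch of the traversal with Type 1 culling only; the Type 2 cancellation bookkeeping described above is omitted for brevity. intersect and convex_volume are hypothetical wrappers around a backend such as lrs.

from itertools import combinations

def union_volume(polytopes, intersect, convex_volume):
    # Alternating inclusion-exclusion over nonempty index sets, skipping every
    # descendant of an index set whose intersection already has zero volume.
    n, total, zero_masks = len(polytopes), 0.0, []
    for depth in range(1, n + 1):
        worked = False
        for I in combinations(range(n), depth):
            mask = set(I)
            if any(z <= mask for z in zero_masks):   # Type 1 skip (subset test)
                continue
            vol = convex_volume(intersect([polytopes[i] for i in I]))
            if vol == 0.0:
                zero_masks.append(mask)              # cull all descendants
                continue
            worked = True
            total += (-1) ** (depth + 1) * vol
        if not worked:
            break    # no contributing intersections at this depth, hence none deeper
    return total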
2021-11-05T01:15:35.231Z
2021-11-03T00:00:00.000
{ "year": 2021, "sha1": "d0dd9aa2a9f65dfb57c7c820525c2a4924e10448", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d0dd9aa2a9f65dfb57c7c820525c2a4924e10448", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Physics", "Mathematics" ] }
237270350
pes2o/s2orc
v3-fos-license
Echocardiographic parameters and indices in 23 healthy Maltese dogs

Background: Echocardiography is a primary tool used by veterinarians to evaluate heart diseases. In recent years, various studies have targeted standard echocardiographic values for different breeds. Reference data are currently lacking in Maltese dogs, and it is important to fill this gap as this breed is predisposed to myxomatous mitral valve disease, which is a volume overload disease.

Objectives: To establish the normal echocardiographic parameters for Maltese dogs.

Methods: In total, 23 healthy Maltese dogs were involved in this study. Blood pressure measurements, thoracic radiography, and complete transthoracic echocardiography were performed. The effects of body weight, age and sex were evaluated, and the correlations between weight and the linear and volumetric dimensions were calculated by regression analysis.

Results: The mean vertebral heart size was 9.1 ± 0.4. Aside from the ejection fraction, fractional shortening, and the left atrial to aortic root ratio, all the other echocardiographic parameters were significantly correlated with weight.

Conclusion: This study describes normal echocardiographic parameters that may be useful in the echocardiographic evaluation of Maltese dogs.

INTRODUCTION

Echocardiography is a primary tool used to monitor heart dimensions and morphology, blood dynamics, and myocardial function. Numerous experienced cardiologists have applied ultrasonic techniques to the study and definition of echocardiographic measurement parameters [1][2][3][4]. Dogs of different breeds have different ventricular and atrial dimensions and morphologies [2]. In recent years, various studies have targeted standard echocardiographic values for different breeds of dogs, such as Beagles [5], Bull Terriers [6], Whippets [7], Border Collies [8], Labrador Retrievers [9], Indian Spitzes [2], the Dogue de Bordeaux [10], and many others. In addition to the aforementioned breed differences, diastolic function is influenced by factors including body shape, weight, body structure, heart rate, and sex [11][12][13]. Consequently, reference values for the various echocardiography modes are required for different breeds for the clinical application of disease diagnosis, treatment, and prognosis tracking.

MATERIALS AND METHODS

Records of client-owned dogs that were presented at Yu Kang Veterinary Hospital in Banqiao District (New Taipei City) for cardiac evaluation between January 2019 and December 2019 were reviewed, including the clinical presentation, history, and physical examination data of each dog. Blood pressure measurements were performed with a petMAP graphic II blood pressure measurement device (USA). The measurements were conducted in a quiet environment, away from other animals, before other procedures, and only after the patients had been acclimated for 5 minutes. Blood pressure cuffs with a width of 30%-40% of the tail root circumference were selected [21]. The systolic, diastolic and mean arterial pressures and pulses were measured in six repetitions. The first measurement was discarded, and the average of five consecutive consistent measurements was recorded. All of the dogs, with the consent of the owners, underwent haematological and biochemical tests, including heartworm antigen tests. Thoracic radiography in right lateral and ventro-dorsal views was performed with a Konica Minolta Regius Model 110 computed radiography system.
For evaluation of heart size, the vertebral heart size (VHS) was measured according to the method published by Buchanan et al. [22]. Twelve-lead electrocardiography (ECG) was performed for five minutes in conscious, relaxed, unsedated, gently restrained dogs in right lateral recumbency, in a quiet environment, according to Santilli et al. [23]. The ECGs were acquired to rule out abnormal heart rhythms. Complete transthoracic echocardiography (TTE) was performed in all animals without sedation, using an Esaote MyLab Class C (Italy) with the PA-122 probe (cardio phased array, 8-3 MHz), following the published recommendations [24]. Left ventricle (LV) measurements were performed using standard right parasternal long-axis and short-axis views in B-mode and M-mode, through the Teichholz method. Variables measured included the left ventricular internal dimension at end systole (LVIDs) and at end diastole (LVIDd), the left ventricular posterior wall thickness at end systole (LVPWs) and at end diastole (LVPWd), and the interventricular septal thickness at end systole (IVSs) and at end diastole (IVSd) [24,25]. Then, the left ventricular fractional shortening (FS), ejection fraction (EF), and LV volumes (end-diastolic volume [EDV] and end-systolic volume [ESV]) were calculated according to the standard formulae. The modified Simpson method was used to estimate the LV cavity from the left parasternal four-chamber apical view [26][27][28]. Assessment of left atrial (LA) size was performed from the right parasternal short-axis view, and the left atrial-to-aortic ratio (LA/Ao) was calculated [29]. Concerning the Doppler examination, the peak velocity of early diastolic transmitral flow (E wave) and late diastolic transmitral flow (A wave), the ratio between both transmitral flow velocities (E/A ratio), and the E wave deceleration time (EDT) were recorded [30]. Tissue Doppler imaging was performed with the highest available transducer frequency to record the velocity of lateral mitral annular motion from the left apical four-chamber view, and the following variables were measured: the peak early diastolic velocity (E′ wave), peak late diastolic velocity (A′ wave), ratio between the E′ and A′ waves, and ratio between the E and E′ waves [1,31]. In addition, the heart rate was recorded. All of the examinations were performed by the same experienced cardiologist. The inclusion criteria for our analysis were no abnormal findings upon physical examination, ECG measurements within reference limits, and no evidence of congenital and/or acquired heart disease on TTE.
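For orientation, the derived indices reduce to short formulas. The sketch below assumes the standard Teichholz volume V = 7·D³/(2.4 + D) (D in cm, V in ml) together with the usual FS and EF definitions; the variable names are our own.

def teichholz_volume(d_cm):
    # Teichholz estimate of LV volume (ml) from an internal diameter (cm).
    return 7.0 * d_cm ** 3 / (2.4 + d_cm)

def lv_indices(lvidd_cm, lvids_cm):
    edv = teichholz_volume(lvidd_cm)                 # end-diastolic volume
    esv = teichholz_volume(lvids_cm)                 # end-systolic volume
    fs = 100.0 * (lvidd_cm - lvids_cm) / lvidd_cm    # fractional shortening (%)
    ef = 100.0 * (edv - esv) / edv                   # ejection fraction (%)
    return edv, esv, fs, ef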
Statistical analysis

Statistical analysis was performed using SPSS Statistics Base 20 for Microsoft Windows. The dogs were divided by weight (1-3 kg and 3-5 kg), sex (male and female), and age (less than 2 years and 2-6 years). The independent variables analysed were sex, age and weight, while the dependent variables were the echocardiographic parameters. Distributions of the echocardiographic parameters were tested for normality by the Shapiro-Wilk test, and a normal distribution was accepted if the p value was greater than 0.05. The mean and SD of each variable were calculated for normally distributed data, whereas non-normally distributed data were presented as the median and range. Correlations between the independent variables (gender, age and weight) were also tested. The Mann-Whitney test was used to assess the significance of differences for each parameter. The Spearman correlation test was used to determine correlations and to establish the regression formula for weight and the echocardiographic parameters in B-mode and M-mode. Results with p < 0.05 were considered significant. For correlations, r < 0.4 was considered a low correlation, r > 0.7 was considered a high correlation, and the remaining values indicated a medium correlation. Simple linear regression was performed on variables that were determined to have significant correlations (p < 0.05) with body weight. A sketch of this pipeline follows.
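This is a minimal sketch of the analysis just described, with scipy standing in for SPSS; the 0.05 threshold and the test choices follow the text, while the data handling is ours.

import numpy as np
from scipy import stats

def analyze_parameter(values, weights, group_a, group_b, alpha=0.05):
    # Normality: mean +/- SD if Shapiro-Wilk p > alpha, else median and range.
    normal = stats.shapiro(values).pvalue > alpha
    summary = ((np.mean(values), np.std(values, ddof=1)) if normal
               else (np.median(values), (np.min(values), np.max(values))))
    # Group difference (e.g., by sex or age group) via Mann-Whitney.
    group_p = stats.mannwhitneyu(group_a, group_b).pvalue
    # Spearman correlation with body weight; regress only if significant.
    rho, rho_p = stats.spearmanr(weights, values)
    regression = stats.linregress(weights, values) if rho_p < alpha else None
    return summary, group_p, rho, regression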
RESULTS

Of the 81 Maltese dogs that underwent cardiologic consultation in 2019, twenty-three fulfilled the inclusion criteria, while 58 dogs were excluded from the study, mainly due to the presence of apical heart murmurs and mitral valve degeneration on echocardiography. Body weights ranged from 1.5 to 5.1 kg, with nine dogs included in the 1-3 kg group and 14 included in the second group. The Maltese dogs included 11 males (one of which was neutered) and 12 females (none of which were spayed). Ages ranged from 7 to 67 months, and six dogs were less than two years of age. There was no significant correlation between the independent variables (gender, age and weight; p > 0.05). The influences of weight, sex and age on the echocardiographic parameters were assessed. As expected, most of the parameters did not differ significantly between the age and sex groups (p > 0.05). Correlation analysis was performed to assess the relationship of weight with the echocardiographic parameters, and the results indicated a strong positive relationship (p < 0.01) for most of them, with linear correlations. For each parameter, the SD and 95% confidence interval, in addition to the upper and lower limits of the reference range, were established. Table 1 shows the blood pressure measurements, heart rate, and VHS of the 23 healthy Maltese dogs included in our study. The mean VHS was 9.1 ± 0.4 (range: 8.5-9.8). Variables determined from 2D and M-mode echocardiography are presented in Table 2.

DISCUSSION

Several studies have established echocardiographic reference ranges in dogs using various allometric scaling techniques [7,32,33], and a number of breed-specific reference ranges have been developed to further improve echocardiographic assessments and clinical decision-making [5,6,8,10,[34][35][36][37]. To the best of the authors' knowledge, this study is the first to provide echocardiographic parameters for healthy Maltese dogs.

In previous studies, the mean VHS of approximately 98% of healthy dogs was ≤ 10.5. However, this value differs between breeds. For example, the VHS of Miniature Schnauzers can be up to 11; some deep-chested dogs, such as Dachshunds, have a standard VHS value of approximately 9.5; and the standard value of the Beagle is approximately 10.3 ± 0.5 [38,39]. VHS values differ according to the animal's growth condition and age, and the direction of the X-ray beam (i.e., left or right recumbency position) also influences the VHS results [40], as do malformations of the thoracic vertebrae or fat infiltration of the mediastinum or pericardial area. In our study, the mean VHS was 9.1 ± 0.4 for Maltese dogs.

Concerning the M-mode echocardiography standard values reported in the literature, in a study of Whippets, because the heart weight to body weight ratio of female dogs was higher than that of male dogs, the LVID differed significantly between the sex groups (p < 0.05) [7]. In addition, in studies of Beagles and German Shepherds, the LVPW differed significantly between sexes (p < 0.05) [5,16]. Conversely, the LA/Ao ratio, EF, and FS did not differ significantly (p > 0.05). Similarly, according to the standard echocardiography values for Beagles, the LA/Ao ratio, EF, and FS are not influenced by sex, weight, or age [5]. Therefore, these systolic heart functions are not influenced by weight and age; however, the EF was negatively correlated with weight in some breeds with similar structures [34]. The LVID increased with weight in our findings, as reported in previous studies [2,9,35]. However, according to the studies on Corgis and Afghan Hounds, weight changes are not correlated with the LVID [16]. In addition, in studies of Indian Spitzes, Beagles, and Labrador Retrievers, the IVS and LVPW were not correlated with weight [2,5,9]. In a study of Labrador Retrievers, the left ventricular systolic and diastolic volumes were significantly correlated with weight (p < 0.01) [9].

The results of our study were as expected. The parameters related to blood dynamics (the E waves, A waves, E/A wave ratio, EDT, E′ waves, A′ waves, E′/A′ ratio, EF%, and FS%) exhibited no significant differences with weight (p > 0.05). In contrast, for parameters related to heart size, such as the LVID, LVPW, LA, IVS, AoD, and left ventricular volume, correlation analysis revealed that these parameters were almost all strongly correlated with weight (p < 0.01). Based on the results of this study, linear regression calculations were performed to analyse the relationship of weight with the aforementioned parameters related to heart size and to obtain formulas.

Several limitations of this study must be considered. The sample size is the major limitation. Reference intervals should ideally be established from a minimum of 120 healthy individuals; however, such a caseload was not available at our clinic and, to the authors' knowledge, no reference ranges exist for this breed, so we wished to pave the way. The small sample size could have made the association between gender and the reference values unreliable. Further studies in a larger population of Maltese dogs are warranted to confirm the findings from this study. Furthermore, the population in this study was not randomly selected, and the possibility of selection bias should be considered. A multicentre study is desirable. Given these limitations, we believe that our findings are likely representative of a healthy population of Maltese dogs, but further studies in a larger population are warranted to confirm them.

The Maltese is the dog breed with the highest incidence of heart disease in Taiwan. There is a high prevalence of mitral valve insufficiency within this population of dogs, although it appears to be generally mild to moderate in nature. This study provides breed-specific echocardiographic parameters for normal Maltese dogs, and these data may be useful in echocardiographic evaluations.
2021-08-24T06:23:08.597Z
2021-07-12T00:00:00.000
{ "year": 2021, "sha1": "352c1ba49a2bd15617cec89d5c0a8a2df05aafe3", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4142/jvs.2021.22.e60", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5d6a1d6f40417c6917153ca98a56aaf49eab5801", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
6724943
pes2o/s2orc
v3-fos-license
Hereditary haemorrhagic telangiectasia in a patient taking anticoagulant drugs who has sustained facial trauma

Abstract

The authors present the case of a 41-year-old patient with hereditary haemorrhagic telangiectasia (HHT) who underwent aortic valve replacement surgery in the past, currently takes anticoagulant drugs, and has sustained an extensive trauma to the nose as a result of a dog bite. HHT is diagnosed based on the presence of at least three out of four symptoms or signs: spontaneous epistaxis, vascular lesions in the internal organs, skin telangiectasias, and a family history of the disease. The presented patient showed a hepatic angioma, a history of recurrent bleeding from the tongue and spontaneous epistaxis, as well as numerous skin telangiectasias. In his case, HHT coincided with chronic anticoagulant treatment implemented after the implantation of an artificial aortic replacement valve, which substantially modified the clinical picture and course of treatment.

Case report

A 41-year-old patient with hereditary familial telangiectasia and post-haemorrhagic anaemia was admitted as an emergency case to the Department of Otolaryngology of the District Hospital in Skarzysko-Kamienna. The reason for admission was trauma to the nose resulting from a dog bite. Clinical examination showed a massive haemorrhage from numerous wounds of the face and the nasal cavity. The haemorrhage from the nasal cavity was caused by an extensive injury involving tearing off of the left nasal ala and an incomplete tear of the inferior left nasal concha. After the bleeding was arrested with electrocoagulation and pressure, plastic surgery of the nose was performed. Anterior left nasal packing and numerous sutures on the skin wounds were placed. Following the treatment, the patient's condition was stable and there was no local bleeding. Seven days after the surgery, the sutures were removed, the nasal ala and facial wounds had healed properly, the nose was patent, and the cosmetic effect was very good. The patient gave a history of homograft aortic valve implantation 20 years earlier, followed by reoperation with artificial aortic valve implantation 11 years earlier. He had been treated for grade III circulatory insufficiency for 20 years and was undergoing long-term treatment with anticoagulant drugs. Abdominal ultrasound examination had displayed hepatic steatosis and a 12 mm × 16 mm lesion of lower echogenicity at the anterior part of the right hepatic lobe, resembling an angioma. He had received multiple laryngological treatments due to recurrent bleeding from the body of the tongue.
He had undergone repeated surgical ligation of the bleeding lingual blood vessel, accompanied by blood transfusion, acenocoumarol dose modification, and international normalized ratio (INR) control. We consider the presented case extremely challenging, as spontaneous haemorrhages in patients affected by hereditary haemorrhagic telangiectasia (HHT) pose a difficult therapeutic problem. Together with an injury to the face, complicated by the need for long-term administration of anticoagulants, an HHT patient faces a likely life-threatening haemorrhage (Figures 1-3).

Discussion

Hereditary haemorrhagic telangiectasia, also called Rendu-Osler-Weber syndrome, is an entity involving numerous arterio-venous malformations. In the classification of vascular abnormalities, which includes slow- and fast-flow malformations, it is considered a slow-flow one. The absence of capillaries between the arterial and venous circulations accounts for a direct contact between these blood vessels and is responsible for spontaneous and recurrent bleedings. The diagnosis of HHT is made based on the presence of three out of four symptoms: spontaneous epistaxis, skin telangiectasias, arterio-venous malformations in the internal organs, and the familial character of the disease [1][2][3][4]. The clinical picture changes with age. The first symptoms appear in adolescence and are mainly recurrent episodes of epistaxis, often occurring at night. The most predisposed bleeding sites are the anterior part of the nose, the middle nasal concha, and the floor of the nasal cavity. The site and morphology of nasal telangiectasias change with age, the applied treatment, and the condition of the nasal septum [5][6][7][8]. Telangiectasias also affect the skin and mucosa, leading to massive bleedings following even minor injuries. They are most common in the skin of the face, nose, fingers and auricles, the vermilion of the lips, and the oral and pharyngeal mucosa. The telangiectasias involved in HHT diagnosis are not specific. They usually take the form of little red patches, which fade when pressed. In 30% of patients they appear before the 20th year of life, and in one third before the 40th year of life. They were observed in a 6-year-old child at the earliest. It has been estimated that 25% of sufferers present with extra-nasal haemorrhages, which are usually self-limiting; still, in 12% they are prolonged and require treatment. Bleedings most often arise from the base and body of the tongue, as in the presented case, and from the fingers and the supraclavicular fossa. The earlier the skin and mucosal lesions appear, the greater the risk of bleedings [9]. A dermatologist is most often the first doctor to diagnose the disease properly, to extend diagnostic procedures to the vital organs, and to implement a wide range of treatments, thus preventing later unfavourable and often life-threatening complications. In the case of vascular skin lesions, meticulous history taking is needed to confirm their familial occurrence accompanied by spontaneous bleedings, especially from the oral cavity, nose, and alimentary tract. Major bleedings from the alimentary tract or the nose are contraindications to the administration of anti-inflammatory and anticoagulant drugs. The history should also reveal possible concurrent pulmonary, cardiac, hepatic and neurological diseases, as well as anaemia and polycythaemia. An antibiotic cover is required before every invasive and dental surgery, due to the risk of pulmonary arterio-venous malformations.
In patients diagnosed with HHT, annual iron level tests should be ordered to prevent anaemia from developing. In HHT patients, extended diagnostic procedures include magnetic resonance imaging of the head, ultrasonic examination or computed tomography of the liver, contrast echocardiography, and measurement of the systolic pressure of the pulmonary artery to detect pulmonary hypertension. Differential diagnosis is required between telangiectasias in the Rendu-Osler-Weber syndrome and the ataxia-telangiectasia syndrome, congenital benign telangiectasia, and vascular lesions found in chronic hepatic disorders. Sequential photocoagulation using an argon laser and sclerotherapy provide good effects in arresting bleedings from the oral mucosa [10,11]. The unsatisfactory aesthetic effect caused by telangiectasias in the facial skin may be effectively eliminated with a dye laser [12]. At an older age, changes affect the internal organs. Twenty-five percent of patients aged 50 years and older develop bleedings from the alimentary tract. Bleedings in the brain, lungs, pancreas and liver are rarer [13]. The presented patient had telangiectasias in the facial skin and oral mucosa, as well as recurrent bleedings from the lingual vessels. Vascular lesions of the internal organs were mainly found in the liver. The patient could not state whether similar vascular changes were present in other family members. However, he reported spontaneous nasal bleedings in the past.

The prevalence of HHT in the young population ranges from 1:50,000 to 1:100,000. Most fatal cases result from diffuse pulmonary, cerebral and digestive tract bleedings. HHT patients are also diagnosed with haematuria, due to slight bleedings and arterio-venous fistulas in the urinary tract. The more severe the symptoms of anaemia, the greater the likelihood of urinary tract bleeding [14]. Hereditary haemorrhagic telangiectasia is caused by a mutation in the group of genes coding for the TGF-β/BMP signalling cascade, i.e. the ENG, ACVRL1 (ALK1), and SMAD4 genes, together with two other genes not yet identified. HHT is inherited as an autosomal dominant trait. Mutations in the ENG and ACVRL1 genes are the foundation for identifying two HHT types, HHT1 and HHT2, respectively. They differ in character and in the sites affected by pathological changes. Oral and nasal mucosa telangiectasias are more frequent in HHT1 than in HHT2. On the other hand, HHT2 patients present with skin lesions more often and at a younger age. In both types the number of changes increases with age. Skin changes, especially those affecting the face, occur more often in HHT1 female patients than in HHT1 male ones [15]. HHT promotes the formation of malignant neoplasms. There are reports of HHT concurrent with malignant neoplasms of the skin, breast, liver, urinary bladder and large intestine [2,16]. The treatment of epistaxis involves the application of lubricants, antifibrinolytic drugs, laser ablation, nasal septum dermoplastic surgery, and systemic or topical hormones applied to the nasal mucosa [5][6][7][8]. Bleedings from the alimentary tract are treated with endoscopic procedures or segmental surgical resections. In less advanced bleedings, hormonal and antifibrinolytic drugs are used [13]. A common problem of HHT sufferers, also present in the described patient, is sideropenic anaemia. Iron preparations are given to prevent its occurrence. Recurrent bleedings often require blood transfusions. Pulmonary or cerebral HHT often necessitates surgical intervention.
On the other hand, hepatic vascular changes are often asymptomatic, as in the presented patient, but may eventually lead to irreversible liver damage, cirrhosis, and worsening of the existing circulatory insufficiency. In conclusion, a patient with typical facial skin telangiectasias should be suspected of having HHT, which in the described case was a life-threatening condition.
2018-04-03T03:49:57.482Z
2013-06-01T00:00:00.000
{ "year": 2013, "sha1": "2f58570f9caf4e29439c9e5b192e9ac445333412", "oa_license": "CCBYNCND", "oa_url": "https://www.termedia.pl/Journal/-7/pdf-20959-10?filename=Hereditary.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3d1b6bc25afe5fa5e4f388dfe97e4a4f9471b374", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15136255
pes2o/s2orc
v3-fos-license
Metagenomic abundance estimation and diagnostic testing on species level

One goal of sequencing-based metagenomic community analysis is the quantitative taxonomic assessment of microbial community compositions. In particular, relative quantification of taxa is of high relevance for metagenomic diagnostics or microbial community comparison. However, the majority of existing approaches quantify at low resolution (e.g. at phylum level), rely on the existence of special genes (e.g. 16S), or have severe problems discerning species with highly similar genome sequences. Yet, problems such as metagenomic diagnostics require accurate quantification at species level. We developed Genome Abundance Similarity Correction (GASiC), a method to estimate true genome abundances via read alignment by considering reference genome similarities in a non-negative LASSO approach. We demonstrate GASiC's superior performance over existing methods on simulated benchmark data as well as on real data. In addition, we present applications to datasets of both bacterial DNA and viral RNA origin. We further discuss our approach as an alternative to PCR-based DNA quantification.

"EHEC outbreak"; we will refer to the dataset as the EHEC dataset. The EHEC dataset contains 977,971 reads with an average length of 181.7 bp. Dataset sources are provided in Supplementary Table S6. To eliminate the read length differences in the datasets, we trimmed all reads to 80 bp and discarded shorter reads. These reads were used to create 11 datasets with varying E. coli and EHEC concentrations. Each dataset consisted of 400,000 reads; the fractions of E. coli reads were 0.0, 0.01, 0.05, 0.10, 0.20, 0.50, 0.80, 0.90, 0.95, 0.99, and 1.0, and the remaining reads were filled from the EHEC dataset.

We downloaded the following genomes from NCBI: Escherichia coli DH10B, Shigella flexneri, Escherichia fergusonii, Klebsiella pneumoniae, and Pantoea ananatis. For EHEC, we downloaded the draft assembly from the BGI (Beijing Genomics Institute, http://en.genomics.cn/). Accession numbers can be found in Supplementary Table S6.

To calculate the distance matrix, we simulated IonTorrent reads for each reference genome with dwgsim (part of the dnaa package, http://dnaa.sourceforge.net/) using the following command:

dwgsim -c 2 -1 80 -2 0 -r 0 -y 0 -e 0.002 -N 500000 -f TACG [reference] [reads]

We chose dwgsim since, to the best of our knowledge, it is the only simulator which can simulate IonTorrent reads.

Both GASiC and GRAMMy were applied to all 11 datasets using the E. coli, EHEC, and Shigella reference genomes. For GRAMMy, reads were aligned as described in the original paper using BLAT with default settings; the results were then passed to the GRAMMy pipeline to run the EM estimation of the abundances. For GASiC, we used bowtie (20) to align the reads to the reference genomes and analyzed the output SAM (15) files. We used the following command to invoke the alignment:

bowtie -S -p 2 -q -3 30 -v 2 [index] [reads] > [samfile]

Note that we allowed up to 2 mismatches in total and discarded the last 30 bp of each read. The results are presented and discussed in the main text; detailed results are provided in Supplementary Table S2.
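The dataset construction reduces to trimming and sampling. A minimal sketch with reads modeled as plain sequence strings (FASTQ parsing omitted):

import random

def mix_datasets(ecoli_reads, ehec_reads, fraction_ecoli, total=400000, seed=0):
    # Trim all reads to 80 bp, discard shorter ones, then draw the requested
    # number of E. coli reads and fill the remainder from the EHEC dataset.
    rng = random.Random(seed)
    ecoli = [r[:80] for r in ecoli_reads if len(r) >= 80]
    ehec = [r[:80] for r in ehec_reads if len(r) >= 80]
    n_ecoli = round(total * fraction_ecoli)
    return rng.sample(ecoli, n_ecoli) + rng.sample(ehec, total - n_ecoli)

# fractions used: 0.0, 0.01, 0.05, 0.10, 0.20, 0.50, 0.80, 0.90, 0.95, 0.99, 1.0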
We conducted further experiments to demonstrate how GASiC performs under complicated conditions. In the first experiment, we increased the number of phantom references and used all six genomes (see above) to test how robust the results are with respect to the size of the reference sequence database. We report the results in Supplementary Table S3. GASiC consistently estimates zero abundance and high p-values for all additional genomes, while the estimates for E. coli and EHEC are consistent with the previous experiment (maximum absolute difference < 0.004). We conclude that additional genomes in the reference set seem not to affect the accuracy of GASiC's estimates, as long as the correct reference genomes are in the set.

In the second experiment, we enlarged the mixed datasets by adding randomly generated reads. This simulates the situation in which a part of the dataset originates from unknown organisms that have no similarity to the genomes in the reference set. Here we call the set of reference sequences a closed subset of all genomes present in the dataset, as we expect no reads belonging to the missing genomes to be ambiguously aligned to the genomes in the reference set. The numbers reported in Supplementary Table S3 show that the additional reads have no influence on GASiC's estimates. Therefore, GASiC should be able to provide reliable estimates in cases when not all reference genomes are available, as long as the missing genomes are not similar to the genomes in the reference set.

In the last experiment, we simulated the case of a missing reference genome with high similarity to the other references. We therefore repeated the original experiment, but removed the EHEC genome from the reference set. Abundances were estimated by both GASiC and GRAMMy in order to see how the methods handle this difficult situation. We report the results in Supplementary Table S3. We observe that both methods have severe problems estimating the true abundance of the reference sequences and respond to the additional EHEC reads by overestimating the abundances of genomes similar to EHEC. Yet, genomes with very low genomic similarity (here: Pantoea ananatis) are not affected by the missing reference sequence, corroborating the findings of the previous experiment on closed subsets. Despite these significant problems, GASiC produces less erroneous results than GRAMMy. We conclude that missing reference genomes can severely influence the quality of abundance estimates of both reference-based methods, GASiC and GRAMMy. We therefore recommend, when in doubt, adding genomes to the reference set rather than restricting the reference set to a small selection of genomes.

We applied the GASiC quality check step in the experiment with the missing EHEC reference and analyzed the read alignments to E. coli and Shigella. The coverage histograms for both reference genomes are shown in Figures 1 and 2, respectively. While both histograms seem to follow a Poisson distribution, Shigella shows unnaturally high values at zero coverage. The high number of uncovered bases in the Shigella genome indicates large areas where no read was matching, which contrasts with the areas of high coverage. This is a strong indication that Shigella is not part of the dataset, but E. coli is. This is also visible from the automated warning message generated by GASiC. In this experiment, these warnings indicate that the GASiC estimate (Shigella is present with considerable abundance) may not be trustworthy and that the set of reference genomes may be incomplete and may contain a species with a high similarity to Shigella. This can then serve as a basis for further manual inspection.
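The zero-coverage symptom behind this warning can be checked directly: under a Poisson coverage model with mean lambda, the expected fraction of uncovered bases is exp(-lambda), so a large excess of zero-coverage positions flags a suspect reference. The threshold below is our illustrative choice, not GASiC's.

import numpy as np

def zero_coverage_warning(per_base_coverage, excess_factor=3.0):
    # per_base_coverage: integer read coverage at each reference position.
    cov = np.asarray(per_base_coverage)
    lam = cov[cov > 0].mean()          # Poisson mean fit from covered bases
    expected_zero = np.exp(-lam)       # Poisson P(coverage == 0)
    observed_zero = np.mean(cov == 0)
    return observed_zero > excess_factor * expected_zero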
As we suggested in the main text, reference genomes can nowadays be obtained by directly assembling new genomes from the metagenomic dataset (2). If the assembly is successful, GASiC is confronted with new challenges, such as a high number of contigs per species, missing parts of the genome, or falsely assembled contigs. We assembled the E. coli reads used for mixing the datasets in this experiment with Mira (21), using default settings for IonTorrent reads. The assembly yielded 711 contigs, 154 of which were longer than 1000 bp, summing up to 4.4 Mbp (compare: E. coli has 4.6 Mbp). We repeated the above mixing experiment using the E. coli contigs longer than 1000 bp, the EHEC genome, and the Shigella genome as references. The results are presented in Supplementary Table S3. Despite the considerable difference between the E. coli assembly and the reference genome, GASiC provides almost equal abundance estimates as with the E. coli reference genome available (max. difference < 0.015). This demonstrates GASiC's robustness against incomplete and fragmented reference genomes, as is typical for assembly. Therefore, we are confident that GASiC is able to provide good abundance estimates also for genomes assembled from metagenomic datasets.

Moore et al. (17) analyzed the viral RNA of 40 honeybee pupae, many of them infested by Varroa destructor mites. The viral RNA was purified and the corresponding cDNA was sequenced on an Illumina GAII. The raw data contain 16.8 million paired-end reads with a length of 72 bp per mate. In addition to the two candidate viruses, Deformed Wing Virus (DWV) and Varroa Destructor Virus-1 (VDV-1), the authors identified two recombinants of DWV and VDV-1: VDV-1(VVD) and VDV-1(DVD). The reference genomes of the two recombinants are provided with the read data. All genomes are stored at NCBI; accessions are provided in Supplementary Table S6. All four viral genomes show a high sequence similarity, ranging from 84% to 96% identical bases. We estimated the similarity of the original sequences and the recombinants via whole genome alignment with Geneious v. 5.5.0 (beta). All similarities are provided in Supplementary Table S4.

Viral Recombination Experiment

Again, we used Mason to simulate the reads for the calculation of the distance matrix. Due to the short length of the viral genomes, 10,000 simulated reads per virus are enough to cover the whole sequence. The exact command for the simulation was:

mason illumina -N 10000 -hi 0 -hs 0 -n 72 -sq -o [reads] [reference]

As for the E. coli dataset, we used bowtie to align the reads to the reference genomes. To reduce the computational effort, we only used the first mate of every read pair and discarded the second mate. In the original dataset, both mates are concatenated as one contiguous sequence; to align only the first mate, we ignored the last 72 bp of each read via -3 72. The complete command was:

bowtie -S -p 4 -q -3 72 [index] [reads] > [samfile]

To align the simulated reads for the calculation of the distance matrix, we simply omitted the -3 72 parameter.

To compare GASiC's results to the qRT-PCR estimates, we used the data reported in Table 1 of Moore et al. (17). Under the assumption that the virus levels are comparable for each bee, we calculated the relative virus levels for each bee individually and then averaged over all 25 bees. We performed two experiments with GASiC: in the first experiment, we used all involved genomes as references (DWV, VDV-1, VDV-1(VVD) and VDV-1(DVD)), and in the second, we used only DWV and VDV-1 as references. We aligned the provided read data to both reference sets and analyzed the results with GASiC.
The total runtime (including alignment) was 41 minutes on one CPU. The peak RAM consumption was 1.3 GB. The scope of the first experiment was to produce abundance estimates for all viral genomes and to compare the estimates to the experimentally obtained virus levels. In the second experiment, we tried to replicate the situation before the genome sequences of the recombinants were known. We report detailed results for both experiments in Supplementary Table S5 and discuss the results of the first experiment in the section Viral RNA Quantification in the main text. In the second experiment, the correction by GASiC is very small, caused by the low genomic similarity observed by the alignment tool (0.24) in the calculation of the distance matrix. The alignment-based similarity must not be confused with the per-base similarity reported in Supplementary Table S4, since we used bowtie as the alignment tool, which is not able to align reads with indels or more than 2 errors (e.g. SNPs). Therefore, both genomes, DWV and VDV-1, obtain about equally many reads and seem to be present in the dataset. This demonstrates that GASiC requires all involved highly similar genomes to be present in the reference set and does not allow the detection of new recombination events.

For completeness, we applied GRAMMy to the Viral Recombination dataset. To this end, we aligned the first mate of each read (as for GASiC) to all four reference genomes using BLAT with default settings. The results were then passed to the GRAMMy pipeline to run the EM estimation of the abundances. The total runtime of the GRAMMy pipeline was 133 minutes on one CPU; the peak RAM consumption was 7.5 GB. We report GRAMMy's and GASiC's estimates jointly in Supplementary Table S5.
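The qRT-PCR comparison above is a per-bee normalization followed by an average, which can be sketched as:

import numpy as np

def mean_relative_levels(levels):
    # levels: array of shape (n_bees, n_viruses) with per-bee virus measurements;
    # normalize within each bee, then average the relative levels over bees.
    levels = np.asarray(levels, dtype=float)
    relative = levels / levels.sum(axis=1, keepdims=True)
    return relative.mean(axis=0)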
IDENTIFICATION OF A NOVEL BINDING MOTIF IN PYROCOCCUS FURIOSUS DNA LIGASE FOR THE FUNCTIONAL INTERACTION WITH PROLIFERATING CELL NUCLEAR ANTIGEN

DNA ligase is an essential enzyme for all organisms that catalyzes a nick-joining reaction in the final step of the DNA replication, repair, and recombination processes. Herein we show the physical and functional interaction between DNA ligase and proliferating cell nuclear antigen (PCNA) from the hyperthermophilic euryarchaeon, Pyrococcus furiosus. The stimulatory effect of P. furiosus PCNA (PfuPCNA) on the enzyme activity of DNA ligase (PfuLig) was observed not at a low ionic strength, but at a high salt concentration, at which DNA ligase alone cannot bind to a nicked DNA substrate. Based on mutational analyses, we identified the amino acid residues that are critical for PCNA binding in a loop structure located in the N-terminal DNA binding domain (DBD) of PfuLig. We propose that the pentapeptide motif QKSFF serves as a PCNA-interacting motif, in which the Gln and the first Phe are especially important for stable binding to PCNA.

DNA ligases catalyze nick sealing reactions via three nucleotidyl transfer steps, as described in recent review articles (1-4). In the first step, DNA ligases form a covalent enzyme-AMP intermediate by reacting with ATP or NAD+ as a cofactor (step 1). In the second step, DNA ligases recognize the substrate DNA, and the AMP is subsequently transferred from the ligases to the 5'-phosphate terminus of the DNA, to form a DNA-adenylate intermediate (AppDNA) (step 2). Then, in the final step, the 5'-AppDNA is attacked by the adjacent 3'-hydroxyl group of the DNA to form a phosphodiester bond (step 3). Three genes (LIG1, LIG3 and LIG4) encoding ATP-dependent DNA ligases have been identified in the human genome to date. Human DNA ligase I (Lig I) is a replicative enzyme that joins Okazaki fragments during the DNA replication process. It is well known that many eukaryotic proteins involved in DNA replication, DNA repair, and cell cycle control interact with the DNA sliding clamp, proliferating cell nuclear antigen (PCNA) (reviewed in 5-7). Extensive studies of PCNA-interacting proteins revealed the existence of a consensus sequence, called the PCNA interacting protein box (PIP box) (5). The PIP box consists of the sequence "Qxxhxxaa", where "x" represents any amino acid, "h" represents hydrophobic residues (e.g., L, I or M), and "a" represents aromatic residues (e.g., F, Y or W). Furthermore, it has been proposed that other sites of the interacting protein can participate in PCNA binding. For example, a conserved pair of Lys and Ala residues was identified as a PCNA binding motif (KA box) by using a random peptide display library (8). However, the importance of the KA box is not obvious, because a detailed biochemical analysis of the motif has not been performed to date. In Escherichia coli, the corresponding DNA sliding clamp is the β subunit of the DNA polymerase III holoenzyme (henceforth referred to as the β clamp), which forms a toroidal dimeric structure (9).
A bioinformatics approach revealed that a pentapeptide motif (consensus QL[SD]LF) plays an important role in binding to the β clamp (10). In higher eukaryotes, human Lig I reportedly forms a stable complex with a PCNA trimer that is topologically linked to duplex DNA via an N-terminal PIP box motif (11,12). The structures of eukaryotic DNA ligases can be divided into two major domains, an N-terminal noncatalytic domain (NCD) and a C-terminal catalytic domain (CD), which consists of an adenylation domain and an OB-fold domain (3,13). The crystal structure of human Lig I in complex with a nicked DNA and biochemical analyses of the enzyme revealed that the NCD provides most of the DNA binding affinity (14); therefore, this domain is called the N-terminal DNA-binding domain (DBD). Although several groups have characterized the physical interactions between human DNA ligase I and PCNA, no stimulatory or inhibitory effect on nick-joining activities was observed in vitro (11,15). In contrast, a stimulatory effect of PCNA was also reported (16). Thus, the detailed interaction mode between human PCNA and Lig I remains somewhat unclear. In Archaea, the third domain of life, a single homolog of the eukaryotic DNA ligase I has been identified (17-21). Interestingly, although most of the archaeal replicative enzymes have a eukaryotic PIP box at their C-terminus, no clear PCNA binding motif has been observed in the sequences of archaeal DNA ligases (5,22). Recently, a physical and functional interaction between PCNA and DNA ligase from Sulfolobus solfataricus was reported (23). In the S. solfataricus DNA ligase (SsoLig), PIP box-like motifs were proposed to exist in the N-terminal region. Furthermore, a mutant SsoLig lacking 30 amino acids at the N-terminus cannot interact with SsoPCNA (23,24). However, it has not been determined whether the proposed motifs are actually important for the interaction with PCNA. Here, we show a physical and functional interaction between PCNA and DNA ligase from the hyperthermophilic euryarchaeon, Pyrococcus furiosus. The stimulatory effect of P. furiosus PCNA (PfuPCNA) on the nick-joining reaction of DNA ligase (PfuLig) was observed under physiological conditions with an extremely high salt concentration (0.5-0.6 M). Furthermore, we show that the pentapeptide sequence QKSFF, in the DBD of PfuLig, plays an important role in binding to PfuPCNA. Interestingly, based on our crystal structure (Nishida et al., submitted), this motif is located in a loop connecting two α-helices in the DBD, and not at the N- or C-terminus of PfuLig. We propose a novel PCNA binding motif, which may be located inside, rather than at the terminus, of PCNA-interacting proteins.

EXPERIMENTAL PROCEDURES

Cloning the Pfu DNA ligase gene and its mutants -- The DNA ligase gene (lig) was amplified by PCR directly from P. furiosus genomic DNA, using the oligonucleotides 5'-GGCCATGGGTTATCTGGAGCTTGCTCAAC-3' and 5'-GCGGATCCTTAGCTTTCCACTTTTCCTTTCATC-3' as the forward and reverse primers, respectively. The amplified gene was cloned into the pGEM-T Easy vector (Promega), and its nucleotide sequence was confirmed. The cloned gene was digested with NcoI and BamHI and was inserted into the corresponding sites of pET21d (Novagen). The resultant plasmid was designated pET-Lig. Amino acid substitution mutations and N-terminal truncation mutations were introduced into the lig gene on the pET-Lig plasmid by PCR, using the appropriate primers. Their sequences are available on request.
Overproduction and purification of the Pfu DNA ligase proteins -- To obtain the recombinant PfuLig, E. coli BL21-CodonPlus(DE3)-RIL (Stratagene) carrying pET-Lig was grown in 1 liter of LB medium, containing 50 µg/ml ampicillin and 34 µg/ml chloramphenicol, at 37°C. The cells were cultured to an A600 of 0.40, and then the expression of the lig gene was induced by adding isopropyl β-D-thiogalactopyranoside to a final concentration of 1 mM and continuing the culture for 5 h at 37°C. After cultivation, the cells were harvested and disrupted by sonication in buffer A, containing 50 mM Tris-HCl, pH 8.0, 0.5 mM DTT, 0.1 mM EDTA, and 10% glycerol. The soluble cell extract, obtained by centrifugation (12,000 x g, 20 min), was heated at 80°C for 20 min. The heat-resistant fraction obtained by centrifugation was treated with 0.15% polyethylenimine to remove the nucleic acids. The soluble proteins were precipitated by 80% saturated ammonium sulfate precipitation. The precipitate was resuspended in buffer B, containing 50 mM Tris-HCl, pH 8.0, 1 M (NH4)2SO4, 0.5 mM DTT, 0.1 mM EDTA, and 10% glycerol, and was subjected to chromatography on a HiTrap Phenyl column (Amersham Biosciences). The proteins were eluted at 0 M ammonium sulfate, and the eluted proteins were dialyzed against buffer A. The dialysate was loaded onto a HiTrap Heparin column (Amersham Biosciences), and the proteins were eluted at 0.3-0.35 M sodium chloride. The eluted proteins were dialyzed against buffer A, and the dialysate was subjected to chromatography on a HiTrap Q column (Amersham Biosciences). The proteins were eluted at 0.1-0.15 M sodium chloride, pooled, and stored at 4°C. The mutated PfuLig proteins prepared in this study were purified by the same procedures. The purity of each protein used in this study was evaluated by SDS-PAGE. No extra band was detected by Coomassie brilliant blue staining of a gel containing 2 µg of each purified protein. After electrophoresis, the gels were dried and scanned using an image analyzer, FLA5000 (FUJIFILM), to detect the [32P]AMP-DNA ligase adducts.

DNA ligation assay -- The substrate DNA used in the ligation assay was a 49 bp DNA duplex containing a single nick at the center. The 22-mer deoxynucleotide (5'-AATTCGTGCAGGCATGGTAGCT-3'), which was labeled with 32P at the 5'-terminus, and the 27-mer deoxynucleotide (5'-AGCTATGACCATGATTACGAATTGCTT-3') were annealed to the 49-mer deoxyoligonucleotide with a complementary sequence in TAM buffer, containing 40 mM Tris-acetate, pH 7.8, and 0.5 mM magnesium acetate. The purified PfuLig proteins (at different concentrations for each experiment, as described in the figure legends) were incubated with 5 nM of the nicked DNA substrate, prepared as described above, in 20 µl of the ligation buffer, containing 20 mM Tris-HCl, pH 7.5, 10 mM MgCl2, 1 mM DTT, 0.01 mM ATP, 0.1% Tween 20 and 0.1 mg/ml BSA, at 60°C for 15 min. Reactions were initiated by the addition of enzyme and were terminated with 5 µl of stop solution, containing 98% formamide, 10 mM EDTA, 0.1% bromophenol blue, and 0.1% xylene cyanol. Samples were heated at 100°C for 5 min and chilled rapidly on ice prior to loading onto a 10% polyacrylamide gel containing 8 M urea. After electrophoresis, the gels were dried and scanned using the FLA5000 to detect the 32P-labeled DNA. Three independent experiments were carried out in succession for each ligation condition required in this study, and the standard errors are shown as vertical lines on the plots in each graph.
Surface plasmon resonance analysis -- The BIAcore system (BIACORE) was used to study the physical interaction between PfuLig and PfuPCNA. Highly purified recombinant PfuLig or PfuPCNA (25) was immobilized on a Sensor Chip CM5, research grade (BIACORE), according to the manufacturer's recommendations. To measure the kinetic parameters, various concentrations of PfuPCNA were applied to the immobilized PfuLig. All measurements were performed at a continuous flow rate of 30 µl/min, in a buffer containing 10 mM HEPES, pH 7.4, 150 mM NaCl, and 0.005% Tween 20. At the end of each cycle, the bound protein was removed by washing with 2 M NaCl. The kinetic constants for PfuPCNA binding to PfuLig were determined from the association and dissociation curves of the sensorgrams, using the BIAevaluation program (BIACORE).

RESULTS

Biochemical properties of P. furiosus DNA ligase -- PfuLig has already been characterized and is commercially available, mainly as a reagent for ligase chain reactions (Stratagene, Patent# US 5506137). In this study, we cloned and purified PfuLig independently and constructed mutant proteins to analyze the structure-function relationships of this enzyme. It was predicted from the primary amino acid sequence similarity that PfuLig is an ATP-dependent DNA ligase. Therefore, we constructed a mutant lig gene encoding PfuLig K249A, in which the lysine at the predicted adenylylation site was substituted by alanine, in parallel with the gene for the wild type PfuLig, and tested their adenylyltransferase activities in the presence of ATP. As shown in Figure 1A, wild type PfuLig can form a covalent enzyme-AMP intermediate by reacting with [α-32P]ATP as a cofactor, but no adenylyltransferase activity was observed with the K249A mutant protein. It was reported that some thermophilic DNA ligases from archaea utilize ADP (26) or NAD+ (19,27) as a cofactor. However, we detected a distinct activity of PfuLig in the presence of ATP, but not ADP, AMP, or NAD+ (Fig. 1B). A very small amount of ligation product was detected in the reaction with ADP. This result is the same as that for the DNA ligase from Pyrococcus horikoshii in a recent report (21). We think this ligation may be derived from contaminating ATP (1.16%) in our ADP reagent (Oriental Yeast Co., Osaka), according to the manufacturer's certificate, and we therefore concluded that ADP is not an appropriate cofactor for PfuLig, as Shuman described for the P. horikoshii DNA ligase (21). Our crystallographic study of the wild type PfuLig revealed that the protein consists of three distinct domains: the N-terminal DNA binding domain (DBD), the middle adenylylation domain, and the C-terminal OB-fold (oligonucleotide binding fold) domain (Nishida et al., submitted). The last two domains are commonly called the catalytic core domain (CD), which is conserved in one branch of the nucleotidyltransferase superfamily, containing DNA ligases, RNA ligases, and mRNA capping enzymes (reviewed in 4). We constructed truncated PfuLig proteins: the DBD, comprising amino acids 1 to 218 (the N-terminal domain), and the CD, comprising amino acids 219 to 561 (the middle and C-terminal domains), to investigate the functions of each domain (Fig. 1C). A nick-joining assay was performed using the wild type enzyme (WT) and the mutant PfuLigs, K249A, DBD, and CD (Fig. 1D). The CD protein could not complete the nick sealing reaction, even at high enzyme concentrations.
A very small amount of ligation product was observed when the CD protein was added at a concentration 100 times higher than that of the wild type (Fig. 1D, lane CD). The accumulation of AppDNA products implied that the CD protein exhibits a lower activity in the "step 3" reaction. The CD protein shares a structural similarity with the full-length Chlorella virus DNA ligase, which is the smallest ATP-dependent DNA ligase with a distinct activity in vitro (28); it is therefore very interesting to investigate why the CD from PfuLig lacks most of the enzyme activity. This result indicates the importance of the DBD for the overall ligation activity of PfuLig, and it can be predicted from the structural similarity that the contribution of the DBD to the ligation reaction is also conserved in the eukaryotic DNA ligases.

Pfu DNA ligase can interact with both monomeric and trimeric PCNA proteins -- To determine the physical interaction between PfuLig and PfuPCNA, we first used an immunoprecipitation (IP) method. The PfuPCNA and PfuLig proteins were incubated together and then precipitated with each antiserum. However, significant interactions between them were not detected under several experimental conditions, probably because the protein-protein interaction is too weak to be detected by an IP method (data not shown). We therefore performed surface plasmon resonance (SPR) experiments to analyze the weak PCNA-DNA ligase interactions. The full-length PfuLig was immobilized onto the CM5 BIAcore sensor chip, and subsequently the wild type PfuPCNA and the mutant PfuPCNA D143A/D147A, which is unable to form a stable toroidal structure in solution and thus cannot stimulate P. furiosus DNA polymerase B activity (29), were injected at different concentrations. The physical interactions between the immobilized PfuLig and the two PCNA proteins were identified from the SPR sensorgrams (Fig. 2). The calculated equilibrium constant (KD) values for the wild type PCNA and the D143A/D147A mutant were 1.1 x 10^-7 M and 1.4 x 10^-6 M, respectively. The KD values reported here are comparable to that of the human PCNA-p21 interaction determined by SPR analysis (KD: 3.2 x 10^-7 M) (30). These findings suggest that the toroidal structure is not necessarily required to form a stable PfuLig-PfuPCNA complex in vitro.

PfuPCNA enhances the ligation activity of PfuLig at a physiological salt concentration -- It is well known that some hyperthermophilic archaea contain strikingly high intracellular potassium ion concentrations. Based on the study of the euryarchaeon Pyrococcus woesei, which was later proved to be a subspecies of P. furiosus (31), the potassium ion concentration in hyperthermophilic archaeal cells was determined to range between 0.5 and 0.6 M (32). We initially examined the effect of increasing salt concentrations on the nick-joining activity of PfuLig by supplementing the reaction with KCl and K-Glu (potassium glutamate) salts. A reduction in the ligation activity was observed with each of these monovalent salts in a concentration-dependent manner, and about 90% inhibition was seen at a 200 mM salt concentration (Fig. 3A). A similar result was obtained in the enzyme assay using NaCl (data not shown). These observations are not specific to PfuLig, as the same phenomena were reported in the characterizations of other DNA and RNA ligases (18,27,33).
To determine whether PfuPCNA can stimulate the ligation activity of PfuLig at the physiological ionic strength, the proteins were assayed over a broad range of salt concentrations. The stimulatory effect of PfuPCNA on PfuLig was observed at 0.05-0.2 M KCl, but the effect decreased above 0.2 M KCl (Fig. 3B). In the same manner, we performed the enzyme assay with K-Glu salt, which was reported to be an important factor contributing to the thermostability of archaeal proteins (34). As shown in Figure 3C, the stimulatory effect was observed even above 0.2 M K-Glu (Supplementary Fig. 1). These results show that a chloride ion (Cl-) concentration over 0.2 M, but not the same concentration of potassium ion (K+), had an inhibitory effect on the enzyme activity of PfuLig. The same result was observed in the characterization of a Holliday junction resolving enzyme, Hjc, from P. furiosus, which exhibited its maximum enzyme activity at 0.2 M KCl (35). For the P. furiosus enzymes that catalyze nucleic acid modification reactions, high Cl- concentrations may affect their activity.

A novel PCNA binding site in the N-terminal DBD of PfuLig -- To determine the region responsible for PCNA binding in PfuLig, we utilized the two truncated mutants, DBD and CD, shown in Figure 1C. The interactions between PfuPCNA and these truncated PfuLig proteins were examined qualitatively by SPR analysis. The wild type PfuPCNA was immobilized onto a CM5 BIAcore sensor chip, and the two truncated DNA ligase mutants were then injected. The wild type PfuLig and the DBD interacted with the immobilized PfuPCNA to almost the same extent, but the CD had no binding ability (Fig. 4). This SPR analysis using immobilized PfuPCNA showed very low resonance units compared with those shown in Figure 2, in which PfuLig was immobilized. This phenomenon often occurs in our experience with SPR analyses using PfuPCNA and its binding proteins. The difference probably depends on the orientation of the proteins fixed on the sensor chip. Due to the relatively low resonance units (< 250 RU) observed in this experiment, the equilibrium constant KD was not determined. These findings suggest that the N-terminal DBD plays a critical role in the PCNA binding of PfuLig. In Archaea, it was reported that the S. solfataricus DNA ligase has a PCNA binding site in its N-terminal region (23). To determine whether the same region of PfuLig is responsible for PCNA binding, we cloned and purified two N-terminally truncated mutants, PfuLigΔN14 (15-561) and PfuLigΔN32 (33-561), on the basis of the crystal structure of PfuLig (Fig. 5A). In addition, we carefully examined the amino acid sequence of the DBD and found some regions that may be involved in interactions with PfuPCNA. Single amino acid substitution mutants in these regions were prepared to examine their effects on the PCNA interaction. These regions included a candidate KA box and a PIP box-like motif found in the DBD (Fig. 5B). Lys67 in the candidate KA box and the two aromatic residues, Phe106 and Phe107, in the PIP box-like sequence were examined by alanine substitutions. To test the stimulatory activity of PCNA in the nick sealing reaction (described above) under equivalent conditions, the relative activities of these mutant proteins were determined without PfuPCNA. The F106A/F107A mutant exhibited almost the same activity as the wild type PfuLig (Fig. 5C).
The decreased activity observed for the two N-terminally truncated mutants, ΔN14 and ΔN32, revealed that the integrity of the DBD is important for the overall ligation activity. To determine the region responsible for PCNA binding, we tested the stimulatory effect of PfuPCNA on these mutants. As a result, only the ligation activity of F106A/F107A was not stimulated by PfuPCNA (Fig. 5D). Furthermore, no effect of PfuPCNA was observed with increasing concentrations of the F106A/F107A mutant PfuLig (data not shown). The PfuPCNA-dependent ligation ability of the K67A mutant did not differ from that of the wild type (Fig. 5E). We concluded that at least one of the two aromatic residues, Phe106 or Phe107, plays a crucial role in PCNA binding via a hydrophobic interaction.

Gln103 and Phe106 of PfuLig are critical for the functional interaction with PfuPCNA -- The PIP box-like sequence, 103QKSFF107, described above is located in a loop structure in the DBD, based on the crystal structure of PfuLig (Fig. 6A). Using three single amino acid substitution mutants, Q103A, F106A, and F107A, we examined the detailed roles of each of these amino acid residues in the 103QKSFF107 sequence. The specific activities of these mutant PfuLigs were confirmed to be the same (Supplementary Fig. 2). The physical interactions between the mutant PfuLig proteins and the immobilized PfuPCNA were analyzed by SPR. The F106A mutant was not able to bind to PCNA, and the Q103A and F107A mutants showed very weak responses (50 RU) compared with the wild type PfuLig (240 RU) (Fig. 6B). Next, the stimulatory effect of PCNA on the ligation activity of these mutants was examined in vitro. The activities of the Q103A and F106A mutants were only slightly stimulated by PfuPCNA, whereas the F107A mutant exhibited an intermediate response to PfuPCNA (Fig. 6C). These analyses indicate that Phe106 in the 103QKSFF107 sequence is the most important residue in the physical and functional interactions with PfuPCNA. The Gln103 residue may stabilize the PCNA-DNA ligase complex after PfuLig is connected to PfuPCNA through the Phe106 residue. The F107A mutant PfuLig, which retains the Gln103 and Phe106 residues, showed very weak binding to PfuPCNA in the SPR analysis, comparable to that of the Q103A mutant, but a distinct response to PCNA was retained in the ligation assay. Further analyses will be required to understand the detailed role of Phe107 in the functional interaction between PfuLig and PfuPCNA.

DISCUSSION

Functional roles of the conserved DBD in eukaryotic DNA ligases -- As shown by our mutational analyses, the integrity of the DBD is important for the overall ligation activity of PfuLig itself (Fig. 1D). Furthermore, the other important function of the DBD is to interact with PCNA. The functional interaction between PfuLig and PfuPCNA seems to be stoichiometric. However, a large excess of PfuPCNA is required for stimulation of the ligation reaction by PfuLig (Fig. 5D). This inconsistency could be explained by the difficulty of PCNA loading onto the DNA fragment in the assay mixture. In this case, the PCNA trimer loads by diffusion onto the double-stranded DNA fragment over its ends, without a clamp loader (RFC). This is probably an inefficient process, and efficient loading requires a large stoichiometric excess of PCNA, as discussed previously for human Lig I (16). In Eukarya and Archaea, PCNA binding proteins generally interact with PCNA via a conserved PIP box motif (e.g.,
archaeal DNA polymerase B and flap endonuclease 1 have a typical PIP box motif at their C-terminus; reviewed in 7). Human Lig I has a typical PIP box at its N-terminal tail, but PfuLig lacks a long N-terminal tail. We identified the PCNA-interaction motif of PfuLig in a loop structure that connects two α-helices in the N-terminal DBD. Based on the information from the crystal structure of human Lig I (an N-terminally truncated mutant) complexed with a nicked duplex DNA, a model structure of the DNA ligase-PCNA complex with 1:1 stoichiometry was proposed (14). This interaction is likely to involve "face to face" binding, because of the proteins' similar sizes and the toroidal structure of PCNA. After binding to the PCNA-DNA complex via the PCNA binding motif in the DBD, the conformation of the CD may change freely to encircle a nicked DNA, because the CD has no interactive region with PCNA (Fig. 4); subsequently, the enzyme catalyzes the nick-joining reaction.

PCNA is a scaffold protein for binding to DNA under physiological ionic conditions -- There have been some contradictory observations about the stimulatory effect of human PCNA on DNA ligase I activity. One group suggested that these discrepancies are due to differences in the experimental conditions (16). As shown in Figures 3B and 3C, the PfuLig activity was inhibited by PCNA at low salt concentrations. In a previous report on the inhibitory effect of PCNA on human DNA ligase I, the inhibitory effect was observed at 0 and 50 mM NaCl, and no effect of PCNA on ligation was observed at 100 mM NaCl (in pH 6.5 buffer) (15). Based on our findings, a stimulatory effect may be observed at 150 mM NaCl, which is near the physiological ionic strength within human cells. However, it is not easy to discuss the differences in the assay conditions, because the salt concentration of each fraction containing the purified recombinant protein is not always obvious from the presented information. It can be predicted that, by themselves, eukaryotic DNA ligases cannot bind to DNA to catalyze the nick-joining reaction at a physiological salt concentration, but they can recognize the substrate DNA by interacting with PCNA on a nicked DNA. Most DNA modification enzymes can interact with substrate DNAs to exert their function at low salt concentrations, but lose their activities at high salt concentrations in vitro. The DNA binding abilities of these enzymes themselves are probably inhibited by salt in the cells. Each protein involved in DNA replication and repair has to work at a certain time in the successive processes at the appropriate sites. To control the specific timing and position at which each related protein factor accesses the target DNA in vivo, the salt concentration, which prevents non-specific binding of protein factors in the cells, is especially important; in the case of replication fork progression, for example, PCNA probably functions as a platform to control the order and the sites of the interacting proteins involved in this successive reaction process.

The conserved residues in the novel PCNA binding motif "QKSFF" -- The well-known PIP box is generally located in the N-terminal or C-terminal region of the peptide chain of PCNA-interacting proteins. However, the PCNA-binding motif, QKSFF, found in this study is in the middle of the PfuLig protein.
This novel PCNA binding motif resembles a putative bacterial β clamp binding motif, QL[SD]LF, which is located not only at the terminus but also in the middle of some β clamp interacting proteins. In this bacterial motif, the pair of hydrophobic residues, LF, is important for binding to the β clamp (10). In PfuLig, the corresponding hydrophobic residues, FF, are also important for binding to PCNA, and furthermore, our work showed that the former residue, Phe106, is more critical than Phe107 (Fig. 6C). Moreover, the importance of Gln103 was revealed by our experiments. A structural comparison of the novel motif in the PfuLig crystal with the PIP box in the RFC large subunit in the cocrystal with PfuPCNA (36) is shown in Figure 7. Interestingly, the locations of the amino acid residues responsible for the hydrophobic and ionic interactions, respectively, clearly correspond to each other; in particular, the positions of Gln470 and Phe476 in the PIP box of RFCL, corresponding to Gln103 and Phe106 in PfuLig, are remarkably conserved among the PIP box sequences. This new motif may represent a shorter version of the original PIP box. To determine the detailed role of each amino acid in PCNA binding, an X-ray crystallographic structure of the PfuPCNA-PfuLig complex will be required. Interestingly, this novel PIP box motif is widely conserved in the same region of other archaeal DNA ligases (Fig. 8A). The QKSFF sequence is completely conserved, especially in the Thermococcales (Pyrococcus and Thermococcus species). In addition, the basic residues located in the region upstream of the motif are also conserved; in particular, a remarkable cluster of basic residues is conserved in the DNA ligases from the Thermococcales and some methanogens. As we proposed previously, based on mutational analyses of the RFC large subunit (RFCL) from P. furiosus (37), these basic residues may function in the formation of the stable Lig-PCNA-DNA complex in these organisms. We examined the sequences of the eukaryotic DNA ligases and found that they also have the archaea-type PIP box in the middle of the peptide chain (Fig. 8B). Eukaryotic Lig I may bind to PCNA at the site corresponding to the motif that we found in this study, as discussed above, in addition to the N-terminal PIP box. These analyses show that the PCNA-DNA ligase interaction mode is also interesting from an evolutionary perspective, and we plan to investigate this possibility by introducing mutations into the conserved Gln in the human Lig I protein.

Figure legend (physical interaction between PfuLig and PfuPCNA): SPR analysis was performed using a BIAcore system to detect a physical interaction between PfuLig and PfuPCNA. Purified PfuLig was immobilized on a BIAcore sensor chip, and purified PfuPCNA (3 µM) was loaded. The wild type PfuPCNA and a mutant PfuPCNA (D143A/D147A), which cannot form a stable trimeric ring structure, were used to investigate their affinities for PfuLig. The equilibrium constant KD was calculated from the obtained sensorgram.

Figure legend (candidate PCNA binding motifs in the DBD): The amino acid sequence of the DBD was examined carefully, and a new PIP box-like motif was found, in addition to the candidate KA box, as shown with a black box. The regions containing these candidate motifs were aligned with the human Lig I sequence. This PIP box-like motif is located in the loop structure connecting the 6th and 7th α-helices (residues 103 to 107) of the PfuLig crystal structure. In the crystal structure of human Lig I (huLig I), a part of the corresponding region is disordered (residues 385 to 392, indicated by a dashed line) (14).
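To make the motif definitions discussed above concrete, the short Python sketch below scans a protein sequence for the canonical PIP box consensus (Qxxhxxaa) and for a generalized Q-x-x-F-F pattern in the spirit of the QKSFF motif. The test fragment is invented for illustration and is not the real PfuLig sequence.

import re

# Canonical PIP box Qxxhxxaa: h = hydrophobic (L/I/M), a = aromatic (F/Y/W).
PIP_BOX = re.compile(r"Q..[LIM]..[FYW][FYW]")
# Generalized form of the archaeal-type motif (103-QKSFF-107 in PfuLig).
QXXFF = re.compile(r"Q..FF")

def scan(seq):
    """Print every motif hit with 1-based residue positions."""
    for name, pattern in (("PIP box", PIP_BOX), ("QxxFF", QXXFF)):
        for m in pattern.finditer(seq):
            print(f"{name}: {m.group()} at residues {m.start() + 1}-{m.end()}")

scan("MKRLVEAQKSFFGTREELLK")  # hypothetical fragment; reports QKSFF at residues 8-12

A real search would additionally need to handle overlapping hits and, as the alignments in Fig. 8 suggest, allow some degeneracy around the core Gln and Phe-Phe positions.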
Do parents modify child-directed signing to emphasize iconicity?

Iconic signs are overrepresented in the vocabularies of young deaf children, but it is unclear why. It is possible that iconic signs are easier for children to learn, but it is also possible that adults use iconic signs in child-directed signing in ways that make them more learnable, either by using them more often than less iconic signs or by lengthening them. We analyzed videos of naturalistic play sessions between parents and deaf children (n = 24 dyads) aged 9–60 months. To determine whether iconic signs are overrepresented during child-directed signing, we compared the iconicity of actual parent productions to the iconicity of simulated vocabularies designed to estimate chance levels of iconicity. For almost all dyads, parent sign types and tokens were not more iconic than the simulated vocabularies, suggesting that parents do not select more iconic signs during child-directed signing. To determine whether iconic signs are more likely to be lengthened, we ran a linear regression predicting sign duration, and found an interaction between age and iconicity: while parents of younger children produced non-iconic and iconic signs with similar durations, parents of older children produced non-iconic signs with shorter durations than iconic signs. Thus, parents sign more quickly with older children than with younger children, and iconic signs appear to resist that reduction in sign length. It is possible that iconic signs are perceptually available longer, and their availability is a candidate hypothesis as to why iconic signs are overrepresented in children's vocabularies.

Introduction

All natural human languages, both signed and spoken, contain a range of iconic and arbitrary lexical items (Dingemanse et al., 2015; Winter et al., 2017). In spoken languages, in addition to onomatopoeia, the sounds of words can sometimes reflect aspects of their meanings (e.g., recruiting aspects of the speech signal such as intensity to reference words relating to loudness or excitement). In sign languages, the forms of signs can resemble many aspects of the referent's size, shape, movement, and texture. Although iconicity is a feature of language across modalities, sign languages exhibit especially pervasive iconicity, perhaps due to the affordances of the manual-visual modality.

Iconicity and language learning

A growing body of evidence indicates that language learners capitalize on iconicity when learning new lexical items. Adult sign language learners are sensitive to iconic form-meaning mappings (Campbell et al., 1992; Baus et al., 2013), sometimes retaining information about iconicity at the expense of phonology (Ortega and Morgan, 2015). Children, too, are sensitive to iconicity in first language acquisition; parent reports of the vocabularies of deaf signing children show high levels of iconicity, and deaf signing toddlers both comprehend and produce iconic signs more often than non-iconic signs (Thompson et al., 2012; Caselli and Pyers, 2017; in BSL: Vinson et al., 2008; in TSL: Sumer et al., 2017). Young children learning spoken languages also show an advantage in learning iconic versus non-iconic words (Imai et al., 2008; Kantartzis et al., 2011; Yoshida, 2012; Imai and Kita, 2014; Perry et al., 2018), and hearing preschoolers learn novel iconic manual symbols more quickly than non-iconic items (Marentette and Nicoladis, 2011; Magid and Pyers, 2017; Ortega et al., 2017).
Interestingly, children's ability to capitalize on the effects of iconicity for word learning seems to interact with their age, with older children learning iconic signs better than younger children (Tolar et al., 2008; Thompson et al., 2012; Magid and Pyers, 2017).

Learner-centered mechanisms

The mechanisms underlying the effects of iconicity in first language acquisition remain unclear. One set of explanations are what we will term 'learner-centered' mechanisms. These appeal to the notion that children are themselves sensitive to iconic mappings and leverage them to learn new words. One example of this kind of theory is Imai and Kita's (2014) sound-symbolism bootstrapping theory, in which children take advantage of an innate ability to map and integrate multi-modal input in order to break into the referential system of language. In essence, sound symbolism bootstraps children's ability to understand the referential relationship between speech sounds and meaning, which serves as the foundation for building their lexical representations. Similarly, another learner-centered theory might draw upon the structure mapping theory of iconicity (Gentner, 1983; Emmorey, 2014), which suggests that the signer draws an analogy between a mental representation of a concept (e.g., a semantic representation of drinking) and the mental representation of its sign form (e.g., a curved handshape moving to the mouth). In this sort of account, children must have the cognitive capacity to recognize the link between form and meaning.

Input-centered mechanisms

The other set of explanations for children's apparent affinity toward iconic signs is 'input-centered.' Under this account, adults (either consciously or unconsciously) produce iconic signs in child-directed signing in ways that make these signs more learnable. Patterns in how iconic signs are produced in the input might sufficiently explain most effects of iconicity on acquisition. For example, if iconic signs are used more frequently with children, their frequency alone, and not their iconicity per se, might account for their overrepresentation in children's early vocabularies. Some have hypothesized that child-directed signing may also include the selection of more iconic signs compared to non-iconic signs (Pizer et al., 2011), and in spoken languages, highly iconic ("sound symbolic") words are more prevalent in child-directed speech than in adult-directed speech (Perry et al., 2015, 2021). Beyond over-representing iconic signs in their input to children, parents may modify iconic signs during child-directed signing by lengthening, repeating, or enlarging them (Perniss et al., 2018). These differences in how iconic signs are produced are also the characteristics of child-directed signing that are often associated with capturing and maintaining children's attention (Pizer et al., 2011). Here too, the ways iconic signs are produced may account for their overrepresentation in children's early vocabularies. Support for this account comes from a longitudinal case study of two Deaf mothers using Israeli Sign Language with their hearing children, which reported that signs were most likely to be repeated, lengthened, enlarged, or displaced ("phonetically modified") when children were aged 10-14 months, but more likely to be produced with an iconic modification, using iconic mimetic body/mouth/vocal gestures, when children were aged 16-20 months (Fuks, 2020).
These results offer early suggestions that parents may systematically produce iconic signs in child-directed interactions in ways that make them easily learned.

The current study

Learner-centered and input-centered explanations are not mutually exclusive; both forces may be at play in acquisition. Children may leverage their ability to detect iconic mappings to learn new words, and adults may also highlight iconic signs by overrepresenting them in their input and/or modifying them to make them more salient for their children to learn. The current study explores two input-centric ways that child-directed signing might be systematically structured to highlight iconic signs. First, we ask whether parents produce iconic signs more often than non-iconic signs with their children, which would indicate that they overrepresent iconic signs in their interactions with their children. Second, we ask whether parents produce iconic signs with longer durations than non-iconic signs, providing children more time to perceive them, which could in turn make them more learnable. Because the role of iconicity in children's vocabulary acquisition is impacted by developmental stage, we were most interested to see whether these characteristics of iconicity in child-directed signing vary as a function of age. We test these hypotheses by analyzing the use of iconic signs in child-directed signing in a corpus of naturalistic parent-child play interactions in American Sign Language (ASL). The present study is not designed to empirically test any relationships between child-directed signing and child acquisition; rather, by identifying whether iconic signs are highlighted in child-directed signing, we aim to determine whether these input-centered mechanisms are viable hypotheses that account for the advantage of iconicity in child acquisition.

Materials and methods

Participants

Participants included 24 parent-child dyads who participated in a naturalistic play session as part of a larger study on ASL development. The children were all deaf and ranged from 9 to 60 months of age (M = 36, SD = 15). There were 8 females and 16 males. The children's reported race was White (n = 18), Asian (n = 1), African American (n = 1), more than one race (n = 2), or unreported (n = 2). Three children had a reported ethnicity of Hispanic/Latinx and 21 of not Hispanic/Latinx. Parents were deaf (n = 15) or hearing (n = 9), and all parents used ASL to communicate with their deaf child. The interactions were conducted at five sites in the Northeast and Midwest US.

Data sources

ASL-PLAY

The ASL Parent input and Language Acquisition in Young children (ASL-PLAY) dataset is a corpus of naturalistic interactions between parents and their deaf children (Lieberman et al., 2021; Lieberman, 2022). Parents and children were recorded while engaged in a free play interaction. Parents were provided with a standard set of toys including a wooden fruit set, a Lego train set, toy vehicles, and a farmhouse set. Parents were instructed to play as they typically would with their child. Play sessions lasted for approximately 15 min and were recorded from three separate angles to obtain clear views of both the child and the parent. Twelve minutes of each video (beginning one minute after the start of the recording) were coded and analyzed off-line. Videos were coded in ELAN [Crasborn and Sloetjes, 2008; ELAN (Version 5.8), 2019] for a range of features.
Signs were glossed individually using the ASL SignBank, a standardized glossing system for ASL (Hochgesang et al., 2020). All signs, English translations, and attention-getters in the ASL-PLAY dataset were annotated using this system by deaf ASL-signing researchers. Signs were tagged individually to capture the onset and offset of each sign. The onset of the sign was defined as the first frame where the sign was identifiable within the sign stream, which typically included the initiation of the movement component of the sign. The offset was the last frame where the sign was still identifiable before transitioning to the next sign.

ASL-LEX

ASL-LEX 2.0 is a publicly available online database containing linguistic information for 2,723 ASL signs, selected based on previously published databases, psycholinguistic experiments, and vocabulary tests (ASL-LEX 2.0, 2021; Sehyr et al., 2021). It is unclear whether ASL-LEX is representative of the entire lexicon of ASL, and it excludes large pockets of the lexicon (e.g., classifiers); regardless, it is the most comprehensive, and indeed the only, database available. Each sign entry contains detailed lexical and phonological information. Of relevance to this project are the metrics for iconicity, repeated movement, and sign frequency; they are described in detail below. All of the signs in ASL-LEX are cross-referenced with the signs in SignBank, allowing us to merge the lexical data from ASL-LEX with the data from the corpus.

Iconicity Ratings: The iconicity estimates in ASL-LEX were derived by averaging the ratings from 30 hearing non-signers who evaluated how much each sign resembled its meaning (1 = not iconic at all, 7 = very iconic). ASL-LEX also has iconicity ratings from deaf signers for a subset of signs. We chose to use the iconicity ratings from non-signers because ratings from non-signers correlate highly with the ratings from deaf signers (Sehyr and Emmorey, 2019), and were available for the full set of signs in ASL-LEX. The signs in ASL-LEX skew towards being non-iconic, with 66% of signs having an iconicity rating below 4 on a scale of 1-7.

Repeated Movement: Each sign in the database is noted as having repeated movement or not. Movement repetition includes repetition of path movements, hand rotation, or handshape change (Sehyr et al., 2021).

Sign Frequency: Because there is not a large enough corpus of ASL to robustly estimate lexical frequency, we used the subjective estimates of frequency from ASL-LEX. The frequency estimates in ASL-LEX were averaged over ratings from 25-35 deaf adults who rated how often each sign appears in everyday conversation (1 = very infrequently, 7 = very frequently; Sehyr et al., 2021).

Data preparation

We extracted all parent sign tokens from participants in the ASL-PLAY dataset (pairs of SignBank Annotation IDs and a timestamp of the duration of the sign in milliseconds), generating a dataset that included 6,294 adult sign tokens from the 24 participants (per family: Min = 68, Max = 506, Mean = 262). We identified and removed all point tokens (n = 1,256). Points (also called indexes) carry linguistic meaning in ASL; they can serve as pronouns and can also be used to draw attention to an object or event. They were used much more frequently than any other sign; for comparison, the next most common sign type was used 199 times across all parents. Because of their unique linguistic function and the difficulty of assessing their iconicity, we excluded them from the analysis.
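As a toy illustration of these preparation steps, token durations can be computed from the onset/offset timestamps and joined to ASL-LEX entries by gloss. All glosses, times, ratings, and column names below are invented stand-ins; the real files differ.

import pandas as pd

# Invented stand-ins for the corpus tokens and ASL-LEX entries.
tokens = pd.DataFrame({
    "gloss":     ["MILK", "TRAIN", "MILK", "APPLE"],
    "onset_ms":  [1000, 2400, 5200, 7000],
    "offset_ms": [1450, 2780, 5590, 7420],
})
lexicon = pd.DataFrame({
    "gloss":     ["MILK", "TRAIN", "APPLE"],
    "iconicity": [4.6, 2.1, 3.0],
    "frequency": [5.8, 3.9, 4.4],
})

tokens["duration_ms"] = tokens["offset_ms"] - tokens["onset_ms"]
# An inner join drops any token whose gloss has no ASL-LEX entry,
# paralleling the exclusion of unrated sign types described here.
merged = tokens.merge(lexicon, on="gloss", how="inner")
print(merged)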
We then removed an additional 138 types (n = 1,256 tokens) from the dataset, consisting of depicting signs, fingerspelled words, gestures, pronouns, idioms, and name signs. These signs did not have an iconicity rating (or a corresponding entry) in ASL-LEX. Most of the signs in ASL-LEX and SignBank have a 1:1 correspondence, and so can be straightforwardly matched to the ASL-PLAY dataset. Nevertheless, there were some instances in which a sign in the corpus corresponded to two entries in ASL-LEX due to different phonological or inflectional variants (e.g., EAT) with slightly different iconicity ratings; for these cases (n = 29 types), we randomly selected one of the two possible matches from ASL-LEX. (To ensure that this approach did not unduly influence the analysis, we repeated a parallel set of analyses in which we selected the higher of the two iconicity ratings for each item rather than a random selection; the results were qualitatively the same.) The final corpus had 3,782 adult sign tokens representing 371 sign types from 24 participants.

Describing parent productions

In order to determine the extent to which each parent favored iconic signs in their signing, we computed a unique mean iconicity rating for each of the 24 parents based on that parent's sign tokens and types. The total number of tokens per parent ranged from 48 to 318 (M = 157, SD = 65). Average parent token iconicity ranged from 2.7 to 4.0 (M = 3.2, SD = 0.3). Parent token iconicity did not differ significantly by parent hearing status (t(22) = -0.8, p > 0.1). Additionally, there was no relationship between the average iconicity of parent sign tokens and their child's age (rho = 0.03, p > 0.1). The number of parent sign types ranged from 23 to 103 (M = 57, SD = 21), and the average iconicity of those sign types ranged from 2.7 to 3.6 (M = 3.2, SD = 0.2). Across all family tokens, the distribution of parent sign tokens by lexical category (taken from ASL-LEX) was as follows: 1,125 nouns (30%), 1,090 verbs (29%), 778 minor class items (21%), 455 adjectives (12%), 282 adverbs (7%), and 52 numbers (1%). A table summarizing the participant data from all 24 families is included in the Appendix.

Iconicity of child-directed signs relative to ASL-LEX

We first asked whether parents' child-directed signs were more iconic than one might expect by chance. To do this, we compared bootstrapped estimates of the iconicity of the sign types the parents actually used with their children during the session (Parent Vocabularies) to simulated vocabularies of the same number of items randomly drawn from the ASL-LEX database (Simulated Vocabularies), representing the "lexicon" of each parent during the play session. We also conducted a parallel analysis of sign tokens by comparing all individual tokens the parents produced with their children to simulated vocabularies with the same number of items randomly drawn from ASL-LEX, but with replacement, so that the same item could appear more than once to account for individual token productions. To control for lexical frequency in the simulated vocabularies, for both tokens and types, the random samples from ASL-LEX were weighted by frequency. The simulated vocabularies were designed to estimate how iconic a set of signs might be by chance. We bootstrapped Parent Vocabularies by randomly sampling a subset of either tokens or types from each parent's attested items, calculated the mean iconicity rating of each subsample, and repeated this process 1,000 times. We then paired one Simulated Vocabulary with one Parent Vocabulary and calculated the difference in mean iconicity between the two vocabularies. We visualized the distribution of the 1,000 difference scores for each of the 24 parents in Figure 1. If parents' vocabularies were significantly more iconic than chance, we would expect the difference between the bootstrapped Parent Vocabularies and the Simulated Vocabularies to be significantly larger than zero (i.e., 0 should fall below the 95% CI).
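A minimal sketch of this bootstrap procedure is given below, assuming one parent's attested iconicity ratings and the ASL-LEX iconicity and frequency columns as NumPy arrays. The input values are simulated placeholders, and details such as the subsample size may differ from the actual analysis.

import numpy as np

rng = np.random.default_rng(0)

def bootstrap_differences(parent_icon, lex_icon, lex_freq, n_boot=1000):
    """Difference in mean iconicity between resampled Parent Vocabularies
    and frequency-weighted Simulated Vocabularies of the same size."""
    n = len(parent_icon)
    weights = np.asarray(lex_freq, float)
    weights /= weights.sum()                     # frequency weighting
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        parent = rng.choice(parent_icon, size=n, replace=True)
        simulated = rng.choice(lex_icon, size=n, replace=True, p=weights)
        diffs[b] = parent.mean() - simulated.mean()
    return diffs

# Toy example: a 95% CI entirely above zero would indicate a parent
# vocabulary more iconic than chance.
diffs = bootstrap_differences(
    parent_icon=np.array([4.1, 3.5, 5.0, 2.8, 4.4]),
    lex_icon=rng.uniform(1, 7, 2723),
    lex_freq=rng.uniform(1, 7, 2723),
)
print(np.percentile(diffs, [2.5, 97.5]))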
Instead, what we found is that, for both tokens and types, the mean iconicity of the bootstrapped Parent Vocabularies is comparable to that of the Simulated Vocabularies. For sign types, the difference scores for all of the Parent Vocabularies were indistinguishable from zero. The same is largely true of the tokens, though two parents used iconic signs more often than chance (probability < 0.025), suggesting that those two parents may systematically repeat iconic signs (Figure 1). Contrary to our predictions, iconic signs were not overrepresented in child-directed signing.

What factors predict sign duration in parent input?

We next sought to determine whether more-iconic signs were produced with longer durations relative to less-iconic signs. We ran a linear mixed-effects model to determine whether the iconicity of parent sign productions predicted their duration. The dependent variable was token duration. The critical predictor was an interaction between iconicity and age. Two other control variables that may influence duration were drawn from ASL-LEX: (1) repeated movement, since signs with repetition take physically longer to produce, and (2) sign frequency. We included sign frequency because it is often inversely related to phonetic duration, as seen across spoken languages (e.g., Gahl et al., 2012) and in Swedish Sign Language (Börstell et al., 2016). Finally, the model included parent hearing status and random effects for participants (Table 1). In support of the hypothesis, there was an interaction between iconicity and age. Visualization of the model (Figure 2) illustrates that parents of younger children had similar sign durations for iconic and non-iconic signs, but parents of older children had shorter durations for non-iconic signs. Simple slopes analyses confirmed this pattern; the only slope that was marginally different from zero was that of the oldest children (B = 0.02, SE = 0.008, df = 3612.4, p = 0.053). Notably, for the older children, the parents' iconic signs had similar durations to those of the parents of the younger children. This finding provides weak evidence that parents may begin to shorten non-iconic signs as their children get older, but that iconic signs seem to resist shortening.
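To illustrate the structure of the duration model reported above, here is a self-contained sketch fit with statsmodels on synthetic data. The column names, effect sizes, and noise are fabricated, and the original analysis may have used different software and coding choices; the point is only the model formula, an iconicity-by-age interaction plus controls, with a random intercept per dyad.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "participant": rng.integers(0, 24, n),      # dyad id (random intercept)
    "iconicity": rng.uniform(1, 7, n),          # ASL-LEX rating
    "child_age_mo": rng.integers(9, 61, n),
    "repeated_movement": rng.integers(0, 2, n),
    "sign_frequency": rng.uniform(1, 7, n),
    "parent_deaf": rng.integers(0, 2, n),
})
# Synthetic durations with an iconicity-by-age interaction baked in.
df["duration_ms"] = (
    600 - 4 * df["child_age_mo"]
    + 0.8 * df["iconicity"] * df["child_age_mo"]
    + 80 * df["repeated_movement"]
    - 10 * df["sign_frequency"]
    + rng.normal(0, 60, n)
)

model = smf.mixedlm(
    "duration_ms ~ iconicity * child_age_mo + repeated_movement"
    " + sign_frequency + parent_deaf",
    data=df,
    groups=df["participant"],
)
print(model.fit().summary())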
Discussion

We examined a corpus of parent interactions with deaf children to investigate iconicity in child-directed signing. First, we found that the average iconicity of parent productions was largely no different from chance (i.e., from the average iconicity of a random sample of signs drawn from the larger ASL lexicon). Only two of the 24 parents produced sign tokens that were more iconic than expected by chance. This pattern suggests that the frequency of iconic signs in child-directed signing is an unlikely explanation for the previously documented advantage for iconic signs in children's vocabularies. Second, we found patterns in our data suggesting that sign duration in child-directed signing may be systematically different for highly iconic and less iconic signs as a function of age: while parents of younger children had similar sign durations for both low and high iconicity signs, parents of older children had shorter durations for low iconicity signs than for high iconicity signs. If this pattern holds in future studies, we would take it to indicate that the duration of iconic signs stays constant as children grow. That is, while parents shorten the articulation of low iconicity signs, iconic signs resist this reduction, leading to increased salience of iconic signs in the input and a corresponding advantage in the acquisition of these signs.

Figure 1. The distribution of differences in mean iconicity of 1,000 pairs of Parent Vocabularies and Simulated Vocabularies. The upper and lower bounds of the 95% confidence interval are illustrated in blue and red, respectively. Distributions that largely fall above zero (i.e., the lower bound of the 95% CI is above 0) indicate that parents' signs were more iconic than chance. In the left panel, iconicity ratings were averaged over sign types, and in the right panel over sign tokens. With the exception of two parents' tokens (participants 3 and 19), Parent Vocabularies were no more iconic than would be expected by chance.

Table 1. There were significant positive effects of repeated movement, and significant negative effects of sign frequency, child age, and the interaction between iconicity and age.

Prevalence of iconic lexical items in parent input

The fact that parents did not overrepresent iconic signs when signing with their children is somewhat different from previous work on the use of iconic words in child-directed speech; Perry et al. (2018) found that parent-child conversations use highly iconic words more frequently than adult conversations. This difference may be methodological: the children in our sample had a wider age range and were, on average, older than those in Perry et al. (2018), and the toys available for dyads to play with during the present play sessions may not have elicited especially iconic signs. Alternatively, it could be that there are modality differences in child-directed language in signed vs. spoken languages. Sign languages are more iconic overall than spoken English (Dingemanse et al., 2015; Perlman et al., 2018), and so inflating the rates of iconicity may not be natural to parents; since the language already makes use of iconic form-meaning mappings, inflating those iconic mappings further might not be intuitive.

Differential modification of iconic signs

We found that the duration of iconic signs varies systematically in children's input, whereby parents produce iconic signs for longer than less iconic signs, but this effect depends on age. For the youngest children in our sample, parents did not vary their sign duration as a function of degree of iconicity. For the older children in our sample (age four years and up), parents produced iconic signs for longer than less iconic signs. This finding aligns with prior literature on modifications in child-directed signing (Perniss et al., 2018), and with studies showing that the effect of iconicity on children's acquisition is greatest among older hearing children (aged 3+; Namy et al., 2004; Tolar et al., 2008) rather than younger ones (aged 18-24 months; Perry et al., 2021).
However, much of the research concerning iconicity in early sign language acquisition targets children within the first 20 months (10-14 months: Massaro and Perlman, 2017; 21-30 months: Thompson et al., 2012). While the older children in the current study may see iconic signs for longer, they may have already acquired those signs. So, the function of parents' lengthening of iconic signs in their child-directed signing to older children remains unclear. There are two ways to consider the observed interaction between iconicity and age on sign duration: parents may lengthen iconic signs or reduce non-iconic signs. Because the length of iconic signs is similar for parents of younger and older children, our interpretation is that iconic signs resist reduction. Lengthening is a common property of child-directed signing (e.g., Holzrichter and Meier, 2000; Pizer et al., 2011), and as children grow, parents typically produce signs more rapidly. This study suggests that iconic signs resist this shortening of sign duration and remain similar in length to the input much younger children receive. While the present study is not designed to determine whether increased sign duration causes children to more readily learn signs, it suggests that an 'input-centric' mechanism is a viable explanation as to why iconic signs are overrepresented in older children's early vocabularies: iconic signs are perceptually available for longer, which may make them easier for children to learn. Another mutually compatible possibility is that parents lengthen iconic signs in response to children's acquisition, lengthening these signs because they are aware that children are learning them. More work is needed to identify the nature of the relationship between the lengthening of iconic signs in child-directed signing and the acquisition of those signs.

Figure 2. The interaction between sign duration, iconicity, and child age in months. For younger children and children in the middle of the age range, sign duration was similar regardless of the sign's iconicity rating, but for older children sign duration was shorter for non-iconic signs than for iconic signs. The lines indicate the children's mean age and +/- one standard deviation.

The role of visual attention

We speculate that children's ability to monitor and manage their own visual attention may partially explain the influence of child age on parent sign duration. Specifically, older children are better able to control their visual attention, so they are more likely to be looking at their parents when signs are produced. Pizer et al. (2011) found a significant association between child eye gaze and parent sign duration, with parents producing longer signs when they did not have eye contact with their child. It is likely that children in the current study were old enough to skillfully manage their own attention, resulting in parents producing shorter signs overall but maintaining the increased length of iconic signs due to their phonological form or other factors. Future studies that take into account children's eye gaze to the parent during interaction will help shed light on this possibility.

Limitations and future directions

Our analysis looked only at lexicalized signs that had a corresponding entry in ASL-LEX that included an iconicity rating. Depicting signs, which show appearance, location, and/or movement, are often transparently iconic, but were excluded from the analysis here.
In addition to the iconicity of the manual components of depicting signs, signers often produce accompanying mouth movements that are temporally aligned with the production of the sign and depict the referent's size and shape in iconic ways (Lu and Goldin-Meadow, 2018). Importantly, if lexical signs do not map neatly onto their referents, depicting signs may be used instead to better align with an iconic mapping (Lu and Goldin-Meadow, 2018), which may increase the overall iconic properties of child-directed signing, even within our corpus. How iconicity influences parents' production of depicting signs may very well be different from the lexical items in this study, and merits further exploration. In the current study we investigated the hypothesis that the sign duration of iconic signs may be longer than that of non-iconic signs. In addition to lengthening, parents may specifically highlight iconic signs by repeating them, displacing them into the child's view, using an unconventional place of articulation, or even attempting to explain the iconic properties of the sign (e.g., Pizer et al., 2011). Perniss et al. (2018) found that parents modify iconic signs more than non-iconic signs, particularly in non-ostensive naming contexts. While these findings support our work, it is important to note that all our contexts were ostensive, with the toys present throughout the interaction, which may have affected the likelihood of iconic signs being lengthened. Though Perniss et al. do not report the proportion of each kind of modification in their study (enlargement, repetition, and lengthening), Fuks (2020) found that when signs were phonetically modified, they were most likely to be repeated or enlarged, not lengthened. Since our study did not analyze other forms of modification, iconic signs may have been emphasized in other ways within the corpus. Moreover, the kind of modification that parents apply to iconic signs may specifically illustrate the iconicity of the sign. For example, signs referencing large objects might be more likely to be enlarged, signs referencing slow objects might be more likely to be lengthened, and so on. Signs can be iconic of their referent in a myriad of ways, and parents can highlight that iconicity by using many forms of modification. More research is needed to examine these other ways that iconic signs may be modified in child-directed signing, especially in naturalistic contexts.

Conclusion
This study of parent input during naturalistic ASL interactions revealed that parents do not preferentially use iconic signs, but may lengthen their sign productions as a function of iconicity for older children. Increased sign duration may support children's acquisition of iconic signs, but more work is needed to determine whether there is a causal relationship between the length of iconic signs in input and their acquisition. Though we find effects of iconicity in child-directed signing, the effects were subtle. Thus, we await a more nuanced analysis of other types of sign modifications to better understand how input-centered mechanisms might relate to the acquisition of iconic signs. The current study contributes to our understanding of how iconic signs are produced in child-directed signing, and lays the groundwork for investigations of the relationship between child-directed signing and child vocabulary acquisition.

Data availability statement
The data that support the findings of this study are available upon reasonable request from the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by the Boston University IRB. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

Author contributions
PG conceptualized the study, prepared the data, and conducted the analyses. AL helped to collect the data for the ASL-PLAY dataset and contributed to data analysis. NC collected the data for the ASL-LEX database and contributed to data analysis. JP contributed to data analysis. All authors contributed to writing the manuscript and approved the submitted version.

Funding
This research was supported by the National Institute on Deafness and Other Communication Disorders (Award Nos. DC015272 and DC018279), as well as the National Science Foundation (Award Nos. BCS-1918252 and BCS-1625793).
2022-08-25T14:07:04.978Z
2022-08-25T00:00:00.000
{ "year": 2022, "sha1": "8d8555050f0cb811653ac60d270773b8c5b30a4d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "8d8555050f0cb811653ac60d270773b8c5b30a4d", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [ "Medicine" ] }
238420108
pes2o/s2orc
v3-fos-license
Cerebellar mutism following head trauma: A case report and literature review

Background: Cerebellar mutism (CM) is defined as the lack of speech production, despite an intact state of consciousness and cognitive function, that happens secondary to a cerebellar insult. To the best of our knowledge, only five cases have thus far been described in the English literature. In this paper, we report the sixth case overall, which is also the first case of CM associated with penetrating head injury. The relevant literature is reviewed and analyzed, our current knowledge of the neuroanatomical and functional relations is summarized, and potential future research endeavors are indicated.

Case Description: An 8-year-old girl was transferred to our hospital having fallen on a rod that penetrated her neck behind the ear. An urgent computed tomography scan of the head revealed a right cerebellar contusion with surrounding edema. Three days later, she became mute but was still obeying commands. Repeat imaging showed a resolving cerebellar contusion with increased edema and mass effect. By day 9, she had uttered a few words. At 1-month follow-up, the child had regained normal speech.

Conclusion: Posttraumatic CM is a rare and probably underreported condition, with only six documented cases to date. Although it may well be on the same spectrum as postoperative CM, further understanding of the exact mechanism, clinical course, and prognosis of this entity is bound to significantly improve the recovery and quality of life of head trauma patients.

INTRODUCTION
Cerebellar mutism (CM) is defined as the lack of speech production, despite an intact state of consciousness and cognitive function, that happens secondary to a cerebellar insult. Conventionally, the extension to this definition is "the absence of evidence for supra- or nuclear cranial nerve or long tract injury." However, with our current limited understanding of the anatomical substrate for the condition, this extension remains subject to alteration. [9] CM is most commonly a complication of posterior cranial fossa surgery, with an average reported incidence of 2-40% in the pediatric age group. [14] It has also been uncommonly described in conjunction with other etiologies, including infection [11] and vascular events. [5] CM following traumatic brain injury is an exceedingly rare occurrence. To the best of our knowledge, only five cases have thus far been described in the literature. [4,6,8,9,15] In this paper, we report the sixth case overall. Interestingly, all previous cases were reported in association with a closed head injury. This is the first case of CM associated with penetrating cranial trauma. The relevant literature is reviewed and analyzed, our current knowledge of the neuroanatomical and functional relations is summarized, and potential future research endeavors are indicated.

CASE DESCRIPTION
An 8-year-old girl was transferred to our hospital having sustained a penetrating head and neck trauma. According to her parents, she was climbing on the side of a water tank outside when she fell and landed on a steel rod. In the emergency department, the patient was conscious, oriented, and screaming. She had a right-sided retromandibular penetrating wound. Her other findings included right-sided lower motor neuron facial nerve palsy, neck spasm (torticollis), and right-sided cerebellar signs, namely horizontal nystagmus, an impaired finger-to-nose test, and dysdiadochokinesia.
A computed tomography (CT) scan of the head revealed a skull base fracture through the floor of the posterior cranial fossa in association with a right cerebellar contusion and edema. A cervical CT scan showed evidence of right parotid gland injury with surrounding hematoma (images not available). Three days after admission, she became mute but was still obeying commands. Repeat imaging showed a resolving cerebellar contusion with increased edema and mass effect [Figures 1 and 2]. On the 7th admission day, she was ambulatory but with an ataxic gait and still could not speak. By day 9, she had spoken a few words and her neurological examination started to improve, with a stationary facial exam. At her 1-month follow-up visit, the child had regained normal speech. Her facial palsy, which had persisted, was deemed secondary to direct injury to the extraforaminal segment of the facial nerve.

DISCUSSION
We reported a case of a child who was temporarily mute after penetrating trauma to the posterior fossa. The mental status and cognition of the patient remained intact throughout. Typical CM is characterized by intact cranial nerve function; in this case, the patient had an injury to the parotid gland resulting in a coexisting CN VII palsy. CM is characterized by a specific onset and chronology. It is usually diagnosed following a period of latency (1-6 days after the initial insult), with an average duration of 1 day to 4 months. Recovery follows a variable path before the subsequent gradual return of verbalization. In its typical form, CM is a temporary condition. However, the resumption of baseline speech function follows a variable course, with residual neurological deficits such as persistent dysarthria, ataxia, and behavioral changes being infrequently documented. [7] In our case, mutism began on the 4th day and continued for 5 days, and then gradually began to improve until full recovery after 1 month. Notably, our patient demonstrated signs of cerebellar syndrome that were concordant with the onset and resolution of her mutism. As for the neuropathological and anatomical coordinates of CM, a number of studies have been published, with most data coming from postsurgical cases. A wide range of theories has thus been put forward, and most of them focus on specific anatomical areas. For example, the involvement of median and paramedian structures, [3] splitting of the vermis, [2] bilateral cerebellar injury, [12] and transient neuronal dysfunction of the A9 to A10 dopaminergic cells in the mesencephalon have all been proposed as potential mechanisms. [1] More recently, the involvement of particular tracts, regardless of the specific anatomical areas per se, has been identified as the most plausible hypothesis. Specifically, a bilateral disruption of the dentate-thalamocortical tract by ischemia and/or edema and the resultant cerebello-cerebral diaschisis has been cited by multiple authorities as the underlying mechanism. [7,13] The latter explanation is consistent with the literature findings, where the heterogeneous anatomical location of the injuries was apparent. In our case, the involvement of the right cerebellar hemisphere fits the conclusions of functional studies indicating that the right cerebellar hemisphere plays a role in language production.
[10,14] Furthermore, the fact that the CM coincided with the resolution of the contusion and the onset of cerebral edema points to the latter as the cause of a bilateral disruption of the circuit between the cerebrum, the dentate nucleus, and the thalamus. However, to better understand the underlying neuroanatomical basis of CM, more tractography studies are needed. The first case of CM was reported in 1985 by Rekate et al., [12] who described six cases of children with transient muteness following posterior fossa craniotomy for tumor removal. Since then, several similar cases have been identified. In this review, we focused on CM following head trauma, an extremely rare entity of which only five cases have been documented, none of which was associated with a penetrating injury. The first case of CM following head trauma was reported in 1990 by Yokota et al., [15] who described the case of a 6-year-old boy who became mute after a road traffic accident (RTA). The authors attributed the CM in this case to an injury to the cerebellar vermis or left cerebellar hemisphere. Two cases were then documented in 1997: Koh et al. [9] reported a case of cerebellar mutism following an RTA, with the injury located at the left cerebellar hemisphere (small contusion) and left cerebellar peduncle (small focal hemorrhage). In this particular case, a latency period of 44 h was recorded and the patient resumed normal speech after 25 days. Ersahin et al. [4] described a case of cerebellar mutism in a 2.5-year-old boy who had fallen from a height. CT imaging revealed a hematoma in the right paravermian region, and this patient regained normal speech function in 2 months. In 2005, Fujisawa et al. [6] reported a case of CM following the evacuation of an acute subdural hematoma of the posterior fossa in a 7-year-old male who was involved in an RTA. The patient spoke normally after 39 days. Kariyattil et al. [8] have recently reported a similar case: a 6-year-old boy with contusions of the cerebellar vermis and left cerebellar hemisphere following an RTA [Table 1]. Although it is difficult to draw conclusions from such a small number of cases, a number of observations may be made here. First, the scarcity of cases and the unknown time of onset in most instances reflect the possible underreporting of such cases, partly due to the intubated, sedated status of trauma patients. Second, all six cases reported so far have been in the pediatric age group, reflecting the age distribution of surgical CM. Third, there is a lack of long-term follow-up data, with none of the patients undergoing any formal neurocognitive testing. Fourth, the indistinct anatomical location of cerebellar injuries solidifies the current school of thought on the neuroanatomical coordinates of CM. Further anatomical, functional, and clinical studies are needed to better understand the basis and prognosis of CM.
2021-10-08T05:11:39.064Z
2021-09-06T00:00:00.000
{ "year": 2021, "sha1": "81c2cd0668e7e299e229dabd5540746801f351a4", "oa_license": "CCBYNCSA", "oa_url": "https://europepmc.org/articles/pmc8492424?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "81c2cd0668e7e299e229dabd5540746801f351a4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
201276445
pes2o/s2orc
v3-fos-license
Environmental Persistence of Influenza Viruses Is Dependent upon Virus Type and Host Origin

The rapid spread of influenza viruses (IV) from person to person during seasonal epidemics causes acute respiratory infections that can lead to hospitalizations and life-threatening illness. Atmospheric conditions such as relative humidity (RH) can impact the viability of IV released into the air. To understand how different IV are affected by their environment, we compared the levels of stability of human-pathogenic seasonal and avian IV under a range of RH conditions and found that highly transmissible seasonal IV were less sensitive to decay under midrange RH conditions in droplets. We observed that certain RH conditions can support the persistence of infectious viruses on surfaces and in the air for extended periods of time. Together, our findings will facilitate understanding of factors affecting the persistence and spread of IV in our environment.

viruses (3-5). Efficient epidemiological spread of IV in people is dictated by the capacity of the viruses to transmit effectively through the air within respiratory aerosols (noncontact transmission) and droplets (indirect contact, or fomite transmission) expelled from an infected host (6). These modes of transmission require that the viruses maintain viability in the environment for the period of time leading up to contact with an immunologically naive recipient. However, little is known about the maintenance and duration of viral stability in the environment following release from the human airway. Clarifying the relationship between RH and viral persistence in the environment will be critical to understanding the basis for seasonal epidemiology of IV, as well as to designing nonpharmaceutical intervention strategies to limit the spread of these viruses in the human population. Variations in ambient RH have previously been shown to directly affect transmission of IV in animal models (7, 8). Historically, midrange RH conditions have been shown to be detrimental to the viability of expelled IV (4, 9-14). However, our recent work has demonstrated that the presence of airway surface liquid (ASL) collected from human bronchial epithelial (HBE) cells can protect the 2009 H1N1 pandemic (H1N1pdm) virus from RH-dependent decay in suspended aerosols and stationary droplets (15). Primary HBE cells differentiated at an air-liquid interface produce mucus, mimic the surface of the lumen of the human airway, and are highly permissive to IV infection (16, 17). Newly replicated IV collected from the apical surface of HBE cells are, therefore, expected to be very similar to those expelled in physiological respiratory droplets. Comparing the levels of stability of IV under a range of RH conditions in physiologically relevant aerosols and droplets will provide a better representation of how IV respond to environmental stressors following release from the respiratory tract and will improve assessment of the risk of transmission under different environmental conditions during an influenza epidemic. Recurring epidemics are driven by cocirculation of human-pathogenic seasonal H1N1 and H3N2 subtype influenza A viruses (IAV) and influenza B viruses (IBV), which persist in the population despite significant surveillance and intervention efforts (18).
The epidemiological spread of IAV and that of IBV are not uniform, with IAV and IBV infections peaking at different times during a season (https://www.cdc.gov/flu/weekly/index.htm) (19-21), which may reflect differences in their primary modes of transmission (22). The impact of the environment on the seasonality of influenza viruses in temperate climates is still poorly understood. One approach to addressing this critical issue is to compare the environmental persistence of IV that transmit efficiently through the air (i.e., human-pathogenic seasonal viruses) to that of IV that are poorly transmissible in the human population. Low-pathogenicity avian influenza (LPAI) viruses, including H6N1 and H9N2, have caused sporadic infections in humans (23-26) but fail to transmit efficiently in animal models (27-30). Previous pandemic IV have emerged through genetic reassortment of seasonal and zoonotic IAV (31, 32). Therefore, comparisons of the levels of environmental persistence of human IV and LPAI viruses will improve our understanding of the contribution of environmental factors to the spread of IV in the human population and enhance pandemic preparedness programs that assess the phenotypes of emerging zoonotic IAV with pandemic potential. In this study, we examined the environmental persistence of six IV, including human-pathogenic seasonal and LPAI viruses (Table 1; see also Table S1 in the supplemental material), in response to a range of RH conditions. We explored the contributions of virus strain background, droplet composition via various propagation methods, RH, and duration of exposure on the stability of IV in aerosols and large droplets. We found that human-pathogenic seasonal H3N2 and IBV were resistant to decay under most RH conditions for an extended period of time in aerosols containing ASL from HBE cells (HBE ASL). However, the longevities of human-pathogenic seasonal IV stability in droplets differed between subtypes H1N1, H3N2, and IBV in an RH-dependent manner, suggesting a role for virus-specific factors in the environmental persistence of IV. Surprisingly, we observed that LPAI IV were more vulnerable to decay at midrange RH than human-pathogenic seasonal IV. Together, these results confirm that human-pathogenic seasonal IV can remain infectious under a range of RH conditions, in agreement with our previous study (15), but this work clearly demonstrates that RH may be an important factor affecting the stability of expelled IV over time. Overall, our results indicate that the levels of persistence of IV are not uniform and that virus-specific factors can impact the stability and longevity of IV in the environment.

Aerosolized seasonal H3N2 and IBV are stable independently of RH.
Recently, we demonstrated that the presence of airway surface liquid (ASL), collected from HBE cells grown at an air-liquid interface, confers a protective effect against RH-dependent decay on H1N1pdm virus grown in MDCK (Madin-Darby canine kidney) cells in aerosols and droplets (15). H1N1pdm viruses grown in HBE cells are also protected from decay under midrange RH conditions (15). These data suggest that HBE ASL provides a protective microenvironment for H1N1pdm. To determine whether this effect is consistent among other seasonal IV, we compared the levels of stability of aerosolized Perth H3N2 and IBV in the presence of HBE ASL.
To test the stability of other seasonal IV in aerosols, we aerosolized Perth H3N2 or IBV supplemented with a 1:10 dilution of HBE ASL collected from uninfected cells into a rotating, RH-controlled drum. Changes in infectious virus titer were determined by comparing TCID50 (50% tissue culture infectious dose) assay results, determined on MDCK cells, between unaged aerosols and aerosols aged for 1 h in the drum. Log decay was corrected for both physical loss and dilution in the drum at each RH (see Fig. S1 in the supplemental material). The titer of Perth H3N2 did not change between aged and unaged samples in the presence of HBE ASL, and we did not detect any RH-dependent decay of Perth H3N2 in aerosols after 1 h (Fig. 1A). To assess the persistence of aerosolized Perth H3N2, we aged viral aerosols in the presence of HBE ASL for 15 min, 1 h, and 2 h at 43% RH. We found that the virus remained fully infectious in aerosols under those conditions for up to 2 h (Fig. 1B). Similarly, aerosolized IBV remained equally infectious in unaged and aged aerosols at 43%, 55%, 75%, and 95% RH for up to 1 h in the drum (Fig. 1C). These results indicate that aerosolized human-pathogenic seasonal IV can resist RH-dependent decay and are more persistent in the environment than previously suggested with laboratory-adapted strains of IAV (4, 9-14).

Cell propagation method impacts stability of H1N1 but not H3N2 viruses in stationary droplets.
To assess the stability of human-pathogenic seasonal IV in large droplets, we used 1-µl droplets of viruses grown in MDCK or HBE cells under seven sets of RH conditions: 23%, 33%, 43%, 55%, 75%, 85%, and 98% RH for 2 h. After this incubation period, infectious viral titer was determined by TCID50 assay on MDCK cells and compared to the titers determined using an equivalent volume (10 µl total) of virus incubated in a sealed vial outside the chamber. The levels of viral stability determined under each set of RH conditions are presented as the raw viral titer and log decay, which is the change in infectivity between experimental and control samples. Consistent with our previous findings (15), we found that H1N1pdm is highly sensitive to midrange RH when propagated in MDCK cells but is less sensitive to RH following propagation in HBE cells (Fig. 2A). Specifically, we observed a decrease of >2 log10 in virus titer for MDCK cell-grown H1N1pdm at 75% and 85% RH compared to control samples. In contrast, H1N1pdm propagated in HBE cells was resistant to RH-mediated decay. Comparisons between the level of decay determined for H1N1pdm grown in MDCK cells and that grown in HBE cells revealed a statistically significant difference in viral decay levels for HBE cell-propagated H1N1pdm compared to the MDCK cell-grown virus at 75%, 85%, and 98% RH, suggesting that growth in HBE cells enhances the stability of H1N1pdm under these RH conditions. To account for effects of patient-specific variation on IV stability, we propagated the viruses in multiple HBE cell cultures derived from different patient tissue samples for our experiments (see Table S2 in the supplemental material). We found no significant differences in the levels of decay of H1N1pdm in any of the three primary HBE cell cultures tested (Fig. S2). To examine whether H1N1pdm decay was representative of that of other human H1N1 viruses, we assessed the decay of a pre-2009 pandemic seasonal H1N1 (Bris H1N1) strain (Fig. S3).
Similarly to the results seen with H1N1pdm, we found that the levels of stability of Bris H1N1 were statistically significantly different between MDCK and HBE cells at 43%, 55%, and 75% RH. These data indicate that sensitivity of H1N1 viruses to RH in the absence of HBE ASL is not limited to viruses emerging after the 2009 pandemic. In contrast, analysis of the stability of Perth H3N2 IV in stationary droplets showed that the levels were not significantly different in comparisons between cell propagation methods at all RH settings, excluding 98%, where HBE-grown H3N2 decayed more than MDCK-grown H3N2 (Fig. 2B; see also Fig. S4). In both cases, Perth H3N2 showed a trend toward more decay at midrange RH (43% to 75%) than at low or high RH. Our data suggest that viral propagation method impacts the stability of H1N1 viruses more than H3N2 viruses, at least for those isolates analyzed in this study, indicating that IV do not all respond to environmental stimuli in the same manner.

Seasonal IBV is sensitive to RH-mediated decay in stationary droplets.
Seasonal influenza infections are caused by IAV subtypes H1N1 and H3N2 and by IBV (18). However, in contrast to IAV, the onset and spread of IBV within the community tends to occur primarily in children (33), with an estimated efficiency of transmission in households of <40% by the aerosol route (22). To test the effect of RH on IBV in droplets, a strain from the Victoria lineage was grown in MDCK and HBE cells. This virus replicated to high titers in HBE cells (Fig. S5A). MDCK cell-grown IBV stability varied with RH, with the greatest average decay (>2 log10) occurring at 75% RH (Fig. 3). Surprisingly, HBE cell-grown IBV also decayed >2 log10 at a midrange RH (55%). No significant difference in log decay was observed between the two propagation methods, suggesting that HBE ASL does not protect IBV from sensitivity to RH. This phenotype was reproduced in two additional HBE cultures derived from distinct patient samples (Fig. S5B). The decay of IBV did not match its resistance to RH in aerosols, which may suggest a role for aerosolization mechanism or droplet size in resistance to RH-dependent decay, although the duration of the incubation for aerosols was half as long as for droplets. The decay of HBE cell-grown IBV was significantly greater (two-way analysis of variance [ANOVA] with Bonferroni's multiple-comparison test) than that of HBE cell-grown Perth H3N2 at 55% RH. In addition, we did not observe a >2 log10 decay for H1N1pdm or Perth H3N2 viruses grown in HBE cells under any RH condition. These results, as well as our observation that there was not a significant difference in decay between MDCK cell-grown IBV and HBE cell-grown IBV, in contrast to our observations with H1N1pdm, suggest that IAV and IBV isolates may respond differently to RH in droplets. Such differences in vulnerability to RH-mediated decay may be linked to the differential seasonal infection cycles of these viruses in nature.

Longevity of seasonal IV in droplets is dependent upon RH and virus strain.
The risk of infection by indirect contact, or by fomite transmission, of a seasonal IV depends on the ability of the viruses to remain infectious for a long period of time following deposition onto a surface. To understand the persistence of seasonal IV in droplets, we tested the stability of HBE cell-propagated H1N1pdm, Perth H3N2, and IBV after 2, 8, and 16 h under four different sets of RH conditions: 23%, 43%, 75%, and 98%.
We confirmed that the RH chamber was capable of maintaining constant temperature and humidity for the duration of our studies (Fig. S6). The rate of change in virus titer, presented as Δ log decay, was determined as the difference between the decay at each RH and the mean log decay at 2 h for each virus. We performed each study in three independent biological replicates from three different patient cell lines (Table S2). Each replicate is presented to illustrate the variation observed under each set of RH conditions over time (Fig. 4). In general, we anticipated that all viruses would decay over time but that the relative amounts of decay would depend upon both virus strain and RH. Surprisingly, the Δ log decay of H1N1pdm in droplets was highly variable based on RH condition (Fig. 4A). Initial average decay at 2 h was determined on the basis of the data presented in Fig. 2 (H1N1pdm and Perth H3N2) and Fig. 3 (IBV). Most strikingly, the infectious titer of HBE cell-propagated H1N1pdm did not diminish at 43% RH, suggesting that the virus was capable of remaining fully infectious in droplets under those conditions for up to 16 h. At 23% and 98% RH, the Δ log decay of H1N1pdm increased from 2 to 16 h. However, at 75% RH, H1N1pdm Δ log decay was different from 2 to 8 h but not from 2 to 16 h. We observed a more consistent loss of infectious titer over time for Perth H3N2 and IBV than for H1N1pdm. Specifically, the Δ log decay of Perth H3N2 increased significantly between 2 and 16 h under all four RH conditions tested (Fig. 4B). Similarly, Δ log decay of HBE cell-grown IBV also increased significantly from 2 to 16 h under all four RH conditions tested (Fig. 4C). Mixed-effects analyses using Tukey's multiple-comparison test confirmed that Δ log decay of H1N1pdm was significantly different from that of both Perth H3N2 (P < 0.01) and IBV (P < 0.05) at 43% RH, as well as from that of IBV (P < 0.01) at 75% RH. Δ log decay of Perth H3N2 was significantly different from that of IBV (P < 0.05) at 75% RH by the same statistical analysis. Together, these data indicate that the levels of persistence of seasonal IV in droplets differ among virus types and subtypes but also that IV can remain stable and highly infectious for long periods of time under certain RH conditions.

RH-dependent decay of LPAI viruses in droplets.
To this point, our study had focused on the persistence of epidemiologically successful human-pathogenic seasonal IV. However, the host range of IAV is quite broad, resulting in the emergence of IAV pandemics from animal sources (27, 31). LPAI viruses, including H9N2 and H6N1 subtypes, have contributed to zoonotic infections (23-26) but have not yet spread efficiently through the human population (27). To understand how environmental factors may affect the spread of LPAI viruses, we tested the stability of MDCK cell-propagated and HBE cell-propagated avian influenza virus H6N1 (avH6N1) and avH9N2 strains in large droplets under a range of RH conditions. Both of the LPAI virus strains replicated well in HBE cells, with titers exceeding 10^7 TCID50/ml (Fig. S7A). We hypothesized that, like the seasonal IAV reported here, the LPAI IAV would be resistant to RH-dependent decay following propagation in HBE cells. MDCK cell-grown avH6N1 and avH9N2 decayed >2 log10 at midrange RH (Fig. 5).
The severity of decay of these viruses was greater for the MDCK cell-grown stock at 43% and 55% RH than for the HBE cell-grown viruses, but both reached at least 2 log10 decay at midrange RH, suggesting that these viruses are more sensitive to RH in droplets than the human-pathogenic seasonal H1N1pdm and Perth H3N2 viruses tested in this study. The RH sensitivity phenotypes of both avH6N1 and avH9N2 were reproduced in 2 additional HBE cell cultures, although avH6N1 grown in a third patient culture (HBE 0206) decayed significantly less than virus grown in HBE 0195 or HBE 0204 cells (Fig. S7B). These data underscore the need to assess viral stability in samples prepared in, at minimum, three primary cell lines to ensure reproducibility.

avH6N1 is less stable at midrange RH than human-pathogenic seasonal IV.
In comparing the data from the different isolates tested in this study, we noticed that avH6N1 appeared to be highly sensitive to RH, reaching average decay levels above 3 log10 at 55% RH (Fig. 5A). To assess whether this virus decayed differently from the other viruses tested, we compared the levels of decay under each set of RH conditions for HBE cell-grown avH6N1 with H1N1pdm (Fig. 6A), Perth H3N2 (Fig. 6B), and IBV (Fig. 6C), as well as with the other LPAI virus, avH9N2 (Fig. 6D). We detected significantly more decay of avH6N1 than of each of the human-pathogenic seasonal IV for at least one midrange RH condition. In contrast, decay of avH6N1 was not significantly different from decay of avH9N2 under any RH condition, suggesting that this LPAI isolate may be less resistant to RH-dependent decay than the highly transmissible human-pathogenic seasonal viruses. Human and avian IV generally differ in hemagglutinin (HA) receptor specificity, with human IV preferentially binding to α2,6-linked sialic acids and avian IV preferring α2,3-linked sialic acids (34, 35). To test whether receptor specificity impacts virus stability at midrange RH, we compared the decay of wild-type H1N1pdm with that of a mutant H1N1pdm having preferential binding to α2,3-linked sialic acids (Fig. S8) that we had previously characterized (36). We found significantly more decay of the α2,3 HA H1N1pdm mutant under 43% RH conditions than of the wild-type viruses. This result was confirmed in three independent HBE cultures, indicating that viral persistence may be influenced, at least in part, by HA receptor binding preference.

[FIG 6. HBE cell-grown avH6N1 virus is more sensitive to RH than human-pathogenic seasonal IV in stationary droplets. To compare the stability levels of distinct IV isolates following growth in HBE cells, we reanalyzed the decay of avH6N1 compared to the other human-pathogenic seasonal and LPAI isolates used in this study. (A) The level of avH6N1 decay was significantly greater than that of H1N1pdm at 75% RH. (B) The level of avH6N1 decay was significantly greater than that of Perth H3N2 at 43%, 55%, and 85% RH. (C) The level of avH6N1 decay was significantly greater than that of IBV at 85% RH. (D) The decay levels of avH6N1 were not significantly different from those of avH9N2 at any RH tested. Significance was determined at each RH using a two-way ANOVA, and adjusted P values are reported using Bonferroni's correction for multiple comparisons. Data represent means ± standard deviations of results of experiments conducted in triplicate (*, P < 0.05; **, P < 0.01).]

DISCUSSION
With this work, we have produced a comprehensive investigation of the relationship between RH and the stability of seasonal IV and LPAI in the environment. We found that
the stability of airborne IV in aerosols and droplets is not unique to H1N1pdm as previously reported (15) but is a trait shared by other seasonal IV isolates representative of currently circulating epidemic viruses (Fig. 1). However, a discord remains between these findings and historical evidence suggesting that the stability of IV in the environment can be influenced by RH (4, 9-14). We now provide a more refined explanation of the relationship between IV stability and RH that considers other factors, including virus strain background and time. We found that RH-mediated decay of seasonal IV varies with virus strain and propagation method, suggesting a role for virus- and host-specific factors in the maintenance of stability in the environment. To lend further support to this model, we have also found that the persistence of infectious virus in droplets over extended periods of time varies with RH and virus strain background. Among seasonal IV, we identified differences between isolates of IBV and H3N2. Both viruses were stable in fine aerosols (Fig. 1), but IBV decayed significantly more than H3N2 in stationary droplets at 55% RH based on our statistical tests. Important distinctions between these two experiments are that (i) IV aerosolized into the drum were supplemented with ASL from HBE cells rather than being grown in HBE cells and that (ii) the viral aerosols were exposed to each RH for only 1 h in the rotating drum. These results, together with data from our previously published work (15), indicate that seasonal IV have the potential to remain highly stable and infectious while suspended in the atmosphere and also that aerosol or droplet size may be an important determinant of viral stability in the environment. Among the IV exposed to a range of RH conditions in stationary droplets, all viruses responded to RH with generally more stability at low and high RH and the most decay under midrange conditions. However, the degrees of decay at midrange RH differed among the strains. Previous studies exploring the levels of stability of IAV and IBV smeared onto plastic (37) and banknotes (38) indicated that IAV tend to be more stable than IBV at midrange RH, but variations in ambient RH, virus suspension medium, mode of surface deposition, and surface material make direct comparisons difficult. In our study, virus strain and propagation method affected the degree to which each virus decayed under these conditions. Growth in HBE cells protected pre- and post-2009 pandemic H1N1 viruses from decay at midrange RH (Fig. 2A; see also Fig. S3 in the supplemental material). In contrast, we found that HBE cell-grown Perth H3N2 (Fig. 2B) and IBV (Fig. 3) decayed similarly to their MDCK cell-grown counterparts, although IBV may be more sensitive to RH than the seasonal IAV tested here. Much less is known about transmission of IBV than about that of IAV, although its spread may be driven primarily by children through direct contact (33). A recent study identified a correlation between IAV transmission efficiency and persistence of aerosolized virus infectivity in a ferret model (39). Further studies will be required to determine whether there is also a causal relationship between RH sensitivity and seasonal transmission cycles of epidemic IV.
We have shown that, similarly to the RH sensitivity results, the persistence of infectious IV over time varied with virus strain background and with RH. For example, the rate of decay of H1N1pdm in droplets did not increase for up to 16 h after deposition at midrange RH, while both Perth H3N2 and IBV continued to decay over time under all RH conditions tested (Fig. 4). Previous studies explored the persistence of IV, although never with viruses grown in primary human airway cells. Previous work showed that MDCK cell-grown H3N2 viruses can remain stable for up to 2 days following inoculation of a banknote, with persistence extended to days (IBV) or weeks (H3N2) under conditions of supplementation with human nasopharyngeal secretions (38). Importantly, both that previous study and our work demonstrated that infectious IV have the potential to persist on surfaces for extended periods of time. However, we have also shown that this persistence is closely linked to both atmospheric conditions and virus strain. The impact of other factors, such as deposition surface material, will also need to be considered in future studies of viruses in physiological droplets. In contrast to seasonal IV, LPAI viruses, including avH6N1 and avH9N2, have not caused pandemic outbreaks in people (27). As with seasonal IV, infectious avian IAV have been shown to persist for days on various surfaces, although the RH at which these experiments were conducted was not noted (40). In our study, both of these viruses were highly sensitive to 55% RH in droplets following preparation in either MDCK or HBE cells (Fig. 5). Additionally, we showed that HBE cell-grown avH6N1, but not avH9N2, was significantly less stable in droplets at midrange RH than any of the other human-pathogenic seasonal IV tested (Fig. 6), suggesting that host species origin may contribute to the viral determinants of RH sensitivity. The interaction between IV and ASL under this critical midrange RH condition, including investigations of the role of ASL content in the environmental persistence of IV, will be the focus of future studies. The variations in the sensitivities of these seasonal IV and LPAI to midrange RH that we observed hint at a link between viral factors and the maintenance of viral infectivity outside the host, which may impact the transmissibility of H6N1 and H9N2 LPAI viruses (28, 29). We do provide evidence for a link between HA receptor specificity and viral persistence in stationary droplets (Fig. S8), although further studies performed with additional mutants will be required to fully define this relationship. Other known determinants of IV transmissibility include viral surface protein function/stability and genomic background (36, 41-44). Integrative studies of these viral factors under various sets of environmental conditions will be required to provide a framework for clarifying the mechanisms driving airborne transmission and may be useful for assessing the transmissibility or pandemic potential of emerging zoonotic IV without the need for animal model systems. Here, we have clarified the relationship between RH and the stability of seasonal IV and LPAI strains resembling those that would be released into the environment from the airway of an infected person. This report provides a distinction between IV strains and virus stability in response to propagation method and specific environmental parameters.
Aerosolized seasonal IV are highly resistant to RH-dependent decay, which suggests that removing them via increased air exchange rates, filtration, or UV germicidal irradiation may be critical to reducing the transmission of these viruses indoors. We found that RH is important for the stability of IV on surfaces but also that infectious viruses have the potential to persist on surfaces for hours in physiological droplets, reinforcing the need for surface decontamination in high-risk environments.

Cells and viruses. Primary HBE cells derived from human lung tissue were cultured at an air-liquid interface using an institutional review board-approved protocol (16). HBE ASL was collected for RH drum experiments by washing the apical surface of uninfected HBE cells with phosphate-buffered saline (PBS) as previously described (15). Madin-Darby canine kidney (MDCK) cells (ATCC) were cultured in Eagle's minimal essential medium with 10% fetal bovine serum, L-glutamine, and penicillin-streptomycin. Virus stocks were propagated in MDCK cells, harvested following detection of cytopathic effect (CPE), and clarified by low-speed centrifugation. HBE virus stocks were prepared by inoculation of each Transwell with 10^3 TCID50/ml virus as previously described (17). Viruses and associated HBE ASL samples were harvested by pooling collected washes from the apical cell surface with PBS at 48 to 72 h postinfection (hpi), prior to the onset of CPE. LPAI replication kinetics in HBE cells were compared using three Transwells from each of two HBE cell lines at 1, 8, 24, and 48 hpi. IV titers were determined by TCID50 assay on MDCK cells according to the method of Reed and Muench (45).

RH chamber for virus stability in stationary droplets. Virus-laden droplets were exposed to seven RH conditions (23%, 33%, 43%, 55%, 75%, 85%, and 98%) in chambers conditioned for each RH using aqueous saturated salt solutions as previously described (14, 15). Briefly, chambers were housed inside a biosafety cabinet at room temperature, and temperature and RH were recorded during all experiments using an Onset HOBO temperature/RH logger. Control samples in a sealed tube were incubated at ambient temperature within the same biosafety cabinet as the chamber during each experiment. The change in virus infectivity in response to RH is represented as log decay compared to the titer of control samples, as previously described (14). The rate of virus decay over time was quantified as Δ log decay, normalized to the average decay of each virus measured after 2 h under each set of RH conditions, where x is the indicated duration of incubation and n is the number of replicate samples collected at 2 h for each virus at each RH, as follows:

Δ log decay = log decay_{x h} − Σ(log decay_{2 h}) / n_{2 h}

Statistical analysis. Comparisons between levels of decay in MDCK cells versus HBE cells under all RH conditions were performed with a two-way ANOVA, with adjusted P values reported using Bonferroni's multiple-comparison test. Significance was determined using a 95% confidence interval, where P values of <0.05 are denoted by a single asterisk (*) and P values of <0.01 are denoted by double asterisks (**), unless otherwise indicated in the text. One-way ANOVA was used with Tukey's multiple-comparison test to compare levels of decay among three HBE patient cell lines for H1N1pdm, Perth H3N2, and IBV strains. Two-way ANOVA was used with Bonferroni's multiple-comparison test to compare levels of decay in three HBE patient cell lines for avH9N2 and avH6N1. Specific details regarding statistical tests are also provided in the figure legends. Statistical analyses were completed using GraphPad Prism version 8.0.0 software.
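As a concrete illustration, here is a minimal Python sketch of the log decay and Δ log decay metrics defined above; the titer values are hypothetical toy numbers rather than the study's TCID50 measurements.

```python
# Minimal sketch of the decay metrics defined above (hypothetical titers).
import numpy as np

def log_decay(control_titer, exposed_titer):
    """Loss of infectivity: log10(control) - log10(RH-exposed sample)."""
    return np.log10(control_titer) - np.log10(exposed_titer)

def delta_log_decay(decay_at_x_h, decays_at_2_h):
    """Delta log decay: decay at x h minus the mean decay at 2 h."""
    return decay_at_x_h - np.mean(decays_at_2_h)

# Example: a virus dropped from 1e7 to 1e5 TCID50/ml after 16 h at one RH,
# while triplicate 2-h samples showed decays of 0.4, 0.6, and 0.5 log10.
decay_16h = log_decay(1e7, 1e5)                      # 2.0 log10
print(delta_log_decay(decay_16h, [0.4, 0.6, 0.5]))   # 1.5 log10
```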
Rotating RH drum used for analysis of virus stability in suspended aerosols. Aerosolized viruses were exposed to selected RH conditions in a rotating, 27-liter aluminum drum housed within a biosafety cabinet, as previously described (15). Briefly, virus suspensions were aerosolized into the drum using a three-jet Collison nebulizer (BGI MRE-3) at a pressure of 40 lb/in². Target RH was achieved by adjusting the flow rates of the aerosol, dry air, and saturated air into the drum. Once RH reached equilibrium, an aerosol sample representing time zero was collected onto a gelatin filter at a flow rate of 2 liters/min for 15 min. The drum was then sealed, and aerosolized virus was incubated for 1 h, after which another aerosol sample was collected. The filters were dissolved into 3 ml of prewarmed L-15 medium, and IV titers were determined by TCID50 assay. Decay rates were corrected by RH-specific aerosol physical loss rates (15). Quantitative PCR (qPCR) was completed on RNA isolated from samples using a TaqMan RNA-to-CT 1-Step Kit (Thermo Fisher) (CT, threshold cycle) and a StepOnePlus real-time PCR system (Applied Biosystems). Viral RNA was detected using a probe against the IAV M gene segment (5′-FAM-TCAGGCCCCCTCAAAGCCGA-BHQ1-3′) (FAM, 6-carboxyfluorescein; BHQ1, black hole quencher 1).
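Because all titers in this study were determined by TCID50 assay using the method of Reed and Muench, a sketch of one common implementation of that endpoint calculation may be useful. The plate layout, dilution scheme, and inoculum volume below are hypothetical, and the code assumes 10-fold dilution steps whose infection rates bracket the 50% endpoint.

```python
import numpy as np

def reed_muench_tcid50(log10_dilutions, n_infected, n_wells, inoculum_ml=0.1):
    """50% endpoint titer (TCID50/ml) by the Reed-Muench method."""
    n_infected = np.asarray(n_infected)
    n_wells = np.asarray(n_wells)
    # Classic Reed-Muench accumulation: infected wells are summed from the
    # most dilute row upward; uninfected wells from the least dilute row down.
    cum_inf = np.cumsum(n_infected[::-1])[::-1]
    cum_uninf = np.cumsum(n_wells - n_infected)
    pct = cum_inf / (cum_inf + cum_uninf)

    # Proportionate distance between the dilutions straddling 50% infection.
    above = int(np.where(pct >= 0.5)[0][-1])
    pd_50 = (pct[above] - 0.5) / (pct[above] - pct[above + 1])
    log10_endpoint = log10_dilutions[above] - pd_50  # assumes 10-fold steps
    return 10 ** (-log10_endpoint) / inoculum_ml

# Hypothetical titration: 8 wells per 10-fold dilution step.
print(reed_muench_tcid50([-3, -4, -5, -6], [8, 6, 2, 0], [8, 8, 8, 8]))
# -> ~3.16e5 TCID50/ml (50% endpoint at 10^-4.5 with a 0.1-ml inoculum)
```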
2019-08-23T13:03:44.884Z
2019-08-21T00:00:00.000
{ "year": 2019, "sha1": "6e19a7885cdf218c39f996b4e2de146eea2cae87", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1128/msphere.00552-19", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3e1bacf3f5783ffb99d09a7267fa0f0abb607fac", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
158920385
pes2o/s2orc
v3-fos-license
Caste Divide and Social Imbalances Driving Naxalism in Central India

Naxalism has become the major internal security threat in the present-day scenario. The root causes of the spread of Naxalism are poverty, unemployment, famishment, and differences in the status of people in society. Since the dawn of human civilisation, the poor have been ill-treated and harassed by people of the higher classes, and when the level of suffering and harassment reached its pinnacle, people decided to take up guns under the umbrella of Naxalism. There is a need to identify and address the root causes of the spread of Naxalism. The government has taken various steps and launched various schemes with the aim of winning the hearts and minds of the common people. However, it is important to ensure that the benefits of these government initiatives reach the common man.

"Those are more serious than external troubles." - Kautilya

With the advent of new technology and sincere efforts, India has become one of the fastest growing economies in the world, and superpowers like the USA have started noticing the pace of its growth and development. In order to realise its full potential, India needs to overcome its internal and external challenges. The country has developed a strong military to counter any challenge to the security of its citizens from external aggression and to its territorial integrity; however, the internal security situation continues to remain the Achilles heel of the nation. The Naxalite problem is the single largest internal threat to India, having engulfed more than 100 districts in 13 states. The problem has moved beyond rural armed struggle and penetrated the domains of policy makers, the media, human rights groups, youth organisations, and others. The Naxalites are influenced and radicalised by communist ideology. The movement took shape when the Communist Party of India (Marxist) was divided over differences in ideology and the Communist Party of India (Marxist-Leninist) was formed. Initially, the Naxal movement began in West Bengal. In due course of time, however, it engulfed the less developed central and southern parts of the country through the activities of various Naxal groups. Over the last decade, it has deepened its roots in the least developed areas of central and eastern India, where local people, mostly tribals, have taken the law into their own hands after being cheated and exploited by people of the higher sections of society. As of December 2017, 105 districts across 9 states were affected by left-wing extremism, down from 180 districts in 2009. Jharkhand has the maximum number of districts affected by Naxalism: 18 districts of Jharkhand are adversely affected, namely Hazaribagh, Chatra, Gumla, Lohardaga, Palamu, Garhwa, Ranchi, Simdega, Latehar, Giridih, Koderma, Bokaro, Dhanbad, East Singhbhum, West Singhbhum, Saraikela Kharsawan, Khunti, and Ramgarh. Bihar is the second most affected state, with a total of 11 districts adversely affected by this evil: Aurangabad, Gaya, Rohtas, Bhojpur, Kaimur, East Champaran, West Champaran, Sitamarhi, Munger, Nawada, and Jamui. It is followed by Chhattisgarh and Orissa, with 10 and 9 Naxal-affected districts respectively.

CAUSES OF EXPLOITATION
A study of the states where Naxalism has deepened its roots shows that, since the dawn of civilisation, these areas have been severely affected by poverty, unemployment, famishment, and differences in the status of people.
This was the root cause of the exploitation of the poor by the rich, and it drove the poor under the umbrella of the Naxals. When the threshold of suffering reached its pinnacle, people from the weaker sections decided to resist the suppression and bigotry. In the early years, people started the movement with peaceful protests demanding basic rights, but the desired results could not be achieved and many of the protesters were put behind bars. This led to the wastage of their hard-earned money, and entire families had to undergo agony and suffering. People started losing their patience, and that was when the Naxals promised to fight for their cause. A group that lends a helping hand to a person in pain is like a god to him, and he will be ready to do anything in reciprocation. The same thing happened in central India: people left the path of peaceful protest and picked up weapons against the state under the leadership of the Naxals. Landlords and police were frequently targeted by this section of society. In retaliation, the police targeted their villages and subjected ordinary people to harassment that can hardly be put into words. Even the private armies of the higher sections left no stone unturned in harassing the relatives, sympathisers, and common people. Smita Narula, a researcher for the Asia division of Human Rights Watch, carried out research on caste violence in 1998 and reported that the Ranvir Sena, a private army of the higher castes, was responsible for the brutal murder of more than 400 people of the weaker sections between 1995 and 1999. The irony of the situation was that these people were rarely questioned by the law, as they happened to be well connected. Caste-based division and exploitation is one of the major causes aggravating the problem of Naxalism, for the following reasons:

Gender-based Exploitation: Gender-based exploitation was largely seen in the rural areas, including oppression and sexual exploitation of tribal women and exploitation of low-caste women by upper-caste males.

Social Inequalities: Oppression, atrocities, and discriminatory treatment of Dalits and lower-caste peasants by upper-caste landlords existed, and still exist, in many parts of the country. The rich 'Thakurs' and 'Zamindars' treated poor people and tribals with no dignity and exploited them socially. These inequalities in society forced them to take recourse to violence and join the Naxalites.

Land Reforms: The absence of land reforms and aspirations for owning land resulted in a struggle against the rich and powerful landlords. The Naxalites exploited this sentiment, distributed land to the poor and landless, and caused the bloodshed of the Zamindars who opposed them.

Tribal and Forest Policies: Due to the implementation of the Forest Regulatory Act, the tribals have been denied their traditional means of livelihood, which was their only means of survival.

Low Wages to Farm Workers: On the economic front, there is discrimination in access to services and participation in some categories of jobs. These social barriers also exist in the urban labour market in central India. Despite policies to support entrepreneurship among these groups, they account for only 10% and 4.6% of private enterprises, compared to 40% and 45% for the OBCs and higher castes respectively. These issues also flared up the movement, which received widespread support.

Ideological Inspiration:
Motivated by the success of the communist revolutions in Russia under Lenin and in China under Mao in the early twentieth century, the Naxals aimed to bring about a similar revolution in the country to create a classless society providing equal opportunities to one and all.

Inadequate Governance: It is common knowledge that in many of these areas there is a lack of proper governance. The civil administration departments and judicial institutions are not effective. This has allowed the Naxals to run a parallel government in these areas. The practice of holding 'Jan Adalats', land distribution, construction of irrigation facilities, and tax collection by the Naxalite cadres explains the reach and writ of the Naxalites.

Induced Displacement: The establishment of Special Economic Zones is, according to the Naxals, a treacherous policy to snatch the land of the people and hand it over to MNCs. People strongly believed that their lands and villages falling within these zones would be snatched away from them. Once displaced, there would be no place for them to go, and they would ultimately be at the mercy of these MNCs. Naxal leaders added fire to this propaganda, leaving people with the chilling fear of being displaced from their own lands, and gained support in the process.

Unemployment: One of the primary causes of unemployment is lack of education. There is large-scale illiteracy in the Naxal-affected states. Though the government has taken some steps towards compulsory education of children, there are still a number of children who are illiterate or have only primary education. The illiterate population is ideal for Naxal recruitment, as they can be easily brainwashed and taken into the fold of Naxalism.

FACTORS RESPONSIBLE FOR SOCIAL IMBALANCE
The social status of the poor is deeply affected by antagonism and social structures, which have led to a never-ending social imbalance in their communities that has become an integral part of their daily lives. The various factors that create imbalance in society are as follows:

Agrarian Structure: The isolation of tribal land was a major issue, as it handicapped the tribals' economic welfare. Gradually, the tribals started losing their land, which was their only source of income. The browbeaten classes were not only exploited as landless labourers and sharecroppers by the landlords but were also cheated by the moneylenders. When the level of harassment and suffering reached its pinnacle, they formed groups and demanded social justice and equality; however, the zamindars and people of the higher sections did not agree, as they were not ready for social equality and wanted to retain their power over the tribals. This led to disputes within the various sections of society, disturbing the social equilibrium and giving a push to the Naxal movement.

Social Structure: Various policies launched by the government from time to time invariably disturbed the balance in tribal societies. There were a number of tribes whose social structure was more conducive to mass mobilisation. The Rajbhansi, Oraon, and Santhal tribes were the main inhabitants of the Naxalbari, Phansideoa, and Kharibari regions, which were severely affected by the Naxal movement. They were hardest hit by agricultural commercialisation and the government's forest policies. They also bore the brunt of social oppression. Their egalitarian social organisation was very conducive to mass mobilisation. The landless everywhere shared the same woes.
This invariably created a feeling of resentment among the tribes and attracted their youth to join the Naxals. Land Holding: With the passage of time, various land reforms were introduced and legislated for the landless and for people with disputed land cases, but they could not be implemented in the right perspective because of the corrupt and influential people who dominated the local political structure. The poor peasants felt that even the government could do nothing to ameliorate their condition, whereas the rich and influential people of society would always get their way. In the bargain, they would lose more land to the powerful sections of society. This caused resentment among these sections of society, which gave a push to the Naxal movement. Socio-economic Alienation: The economic situation is exploited by the Naxalites and their extreme-left ideologies. On the one hand, India has experienced relatively fast economic growth, which has led to increased levels of national wealth. To facilitate and continue this development, business needs more land and natural resources such as minerals. On the other hand, economic growth has been uneven and has widened the disparity between rich and poor. Proponents of these businesses argue that these regions need economic development if they are to catch up with their richer counterparts. The conflict between economic progress and aboriginal land rights continues to fuel the Naxalites' activities. This is more prominent in the tribal belt, such as West Bengal, Odisha and Andhra Pradesh, where locals experience forced acquisition of their land for the setting up of developmental projects. In response, Bihar's rural livelihood initiatives (described below) set out to: contribute towards improving the lives of the rural poor in the state through fostering strong, self-managed grassroots institutions and empowerment; evolve policies for the empowerment of the people of the weaker sections in the state; and provide social and technical guidance to the poor in their overall social progress and livelihood development. JEEVIKA: With the assistance of the Bihar Rural Livelihood Promotion Society (BRLPS), the state government of Bihar initiated the Bihar Rural Livelihoods Project (BRLP), also known as JEEVIKA. These projects are funded by the World Bank and aim at the economic and social growth of village populations. Subsequently, the Bihar Kosi Flood Recovery Project (BKFRP) also became part of this project. The BRLP aims to augment the economic and social empowerment of the people of the weaker sections in the rural areas of the state. This objective is to be achieved by: augmenting the economic and social empowerment of the rural poor; investing in capacity building of service providers (public/private); and playing an important role in encouraging the development of microfinance and businesses related to the agriculture sector. CONCLUSION Naxalism has affected nearly forty percent of our territory and has become the biggest threat to the internal security of the country. A coordinated effort is required to deal with the menace of the expansion of Naxalism, which is spreading like wildfire and will challenge national sovereignty if not checked in time. As discussed above, the main causes which fuel Naxalism are the caste divide, economic deprivation and social inequalities. It is high time that coordinated efforts are made to address these issues, which will go a long way in overcoming the problem of Naxalism in the country.
Meanwhile, the government needs to ensure that people from every section of society enjoy the equal rights bestowed by the Constitution. There is a need to take strong action against those responsible for harassing the poor and the downtrodden; this in turn will dissuade the tribals and the labour class from joining the Naxals.
2019-05-20T13:05:49.684Z
2018-04-25T00:00:00.000
{ "year": 2018, "sha1": "35a5e31754b03961ced058b7c51a231744e36a3d", "oa_license": "CCBYSA", "oa_url": "https://doi.org/10.30954/2231-458x.01.2018.6", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "127477138c9aad866f51f82a2e2edd8373f8b863", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
56485105
pes2o/s2orc
v3-fos-license
Multifocal multiphoton volumetric imaging approach for high-speed time-resolved Förster resonance energy transfer imaging in vivo In this Letter, we will discuss the development of a multifocal multiphoton fluorescence lifetime imaging system in which four individual fluorescence intensity and lifetime planes are acquired simultaneously, allowing us to obtain volumetric data without the need for sequential scanning at different axial depths. Using a phase-only spatial light modulator (SLM) with an appropriate algorithm to generate a holographic pattern, we project a beamlet array within a sample volume of a size that can be preprogrammed by the user. We demonstrate the capabilities of the system to image live-cell interactions. While only four planes are shown, this technique can be rescaled to a large number of focal planes, enabling full 3D acquisition and reconstruction. Due to its relative independence from absolute intensity values, fluorescence lifetime imaging (FLIM) can overcome issues associated with steady-state fluorescence techniques. The average lifetime of a fluorophore varies according to its local environment and has been used to measure Förster resonance energy transfer (FRET) [1,2], pH [3], protein binding [4] (i.e., NADH), relative ion concentrations, local variations in viscosity [5], aggregation [6], and proximity to metal surfaces [7]. For high-precision FLIM, time-correlated single-photon counting (TCSPC) is unparalleled in its measurement accuracy. However, conventional TCSPC is fundamentally limited with respect to the photon counting rate in current implementations of laser scanning microscopy, with typical acquisition times for conventional laser scanning TCSPC FLIM on the order of minutes. This partly explains why its application is not more widespread in the biomedical community. Until recently, high-speed FLIM could only be performed using modulated or time-gated image intensifier systems [8][9][10]. While such systems offer video-frame-rate acquisitions, they suffer from significant imaging artefacts [11] and an excitation photon flux that may be damaging to cells [12][13][14][15]. We have previously presented a massively parallel, fully addressable time-resolved multifocal multiphoton microscope [16,17] capable of producing fluorescence lifetime images with 55 ps time resolution, giving a factor-of-64 improvement in acquisition speed and bringing the acquisition time down to the order of seconds. Parallelized TCSPC detection was achieved using a specialized 32 × 32 10-bit time-to-digital converter (TDC) array (∼55 ps resolution) with integrated low-dark-count single-photon avalanche photodiodes (SPADs) [18].
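As an aside for readers less familiar with TCSPC FLIM, the following is a minimal sketch of how a fluorescence lifetime can be estimated from a start-stop histogram by fitting a mono-exponential decay. The data are synthetic and the 2 ns lifetime is an arbitrary choice; only the 55 ps bin width echoes the TDC resolution quoted above, and real pipelines typically add instrument-response deconvolution and maximum-likelihood fitting.

```python
# Minimal sketch: estimate a fluorescence lifetime from a TCSPC start-stop
# histogram via a linear fit to the log of the counts (mono-exponential model).
# All numbers are illustrative; only the 55 ps bin width mirrors the TDC above.
import numpy as np

def fit_lifetime(counts, bin_width_ps):
    """Fit counts[i] ~ A * exp(-t_i / tau); returns tau in ps."""
    t = np.arange(len(counts)) * bin_width_ps        # bin start times in ps
    mask = counts > 0                                # log() needs positive counts
    slope, _ = np.polyfit(t[mask], np.log(counts[mask]), 1)
    return -1.0 / slope

rng = np.random.default_rng(0)
arrivals = rng.exponential(2000.0, size=200_000)     # synthetic 2 ns decay
counts, _ = np.histogram(arrivals, bins=np.arange(0, 12_000, 55))
print(f"estimated lifetime: {fit_lifetime(counts, 55):.0f} ps")  # ~2000 ps
```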
The concept of acquiring several axially separated focal planes simultaneously in multiphoton microscopy is not new and has already been presented by a number of groups utilizing techniques such as remote focusing [19,20], spectral encoding [21], and holographic approaches [22]. The benefit of acquiring several planes simultaneously over acquiring a single plane (albeit at high speed) is that volumetric data are acquired without the need for axial translation of the objective or sample; such axial translation is limited in speed and can also perturb the sample. In this Letter, we will discuss a modification of a multifocal multiphoton system (MM-FLIM) which enables simultaneous acquisition of four individual planes of both fluorescence intensity and lifetime information, allowing us to obtain volumetric data without the need for sequential scanning at different axial depths. The MM-FLIM setup and general operation have been described in much greater detail elsewhere [16,17]. In brief, light generated from the Chameleon Ultra II laser (Coherent) is projected onto an HSPDM512 spatial light modulator (SLM) device (Meadowlark Inc.). By applying a suitable phase pattern, a two-dimensional array of 8 × 8 beamlets is produced, which is then raster scanned using a set of galvanometer scanners and projected onto the sample. The fluorescence generated from each beamlet is collected, descanned, and projected onto the Megaframe camera (Photon Force Ltd.). Each beamlet is precisely aligned to its associated detector, matching the spacing and angular orientation of the array to enable high collection efficiency. For multiphoton excitation, fluorescence is only generated within the focal volume where the photon density is sufficiently high. When optically conjugate, the fluorescent beamlet projected onto the detector aperture is significantly smaller (1.8 μm FWHM) than the active area of the SPAD (6 μm dia.) due to the choice of the reimaging objective (Nikon 10×, 0.3 NA). It should be noted that one could apply a small defocus term to the excitation beamlet and still collect the light from the focal volume even if the detector is not completely conjugate to the focal point. Due to the small size of the detector, this will only be effective within a certain z-range. From each beamlet, a subimage is generated from a raster scan; the subimages can then be stitched together to create a complete image. All sample imaging is performed using a 40×, 1.3 NA Plan Fluor oil immersion microscope objective (Nikon Instruments Ltd.). The generation of uniform beamlet arrays in a single plane using a doubly weighted Gerchberg-Saxton algorithm (DWGS) in conjunction with an SLM has been described previously [23]. We use a modified version of this algorithm to generate the desired uniform 3D distribution of diffraction-limited spots (Fig. 1). A known defocus term is applied to each individual beamlet corresponding to its relative plane position, with the four planes centered axially about a zero z-offset position. In the first iteration of the modified DWGS algorithm, the randomly generated phase pattern and the premeasured laser illumination amplitude are coupled, forming a complex incident field, and a Fourier transform is made to determine the amplitude (V) and phase (φ) components at the image plane.
The simulated beamlet pattern is compared with the desired beamlet array, taking the z-offsets of each beamlet into account, and a suitable weighting is applied to produce a new amplitude. This is combined with the phase previously generated at the image plane, and once the inverse Fourier transform is carried out, the first iteration of the algorithm is completed. The phase pattern generated in the previous iteration is then fed back into the next iteration, where it is coupled with the measured laser amplitude, and the process is repeated. After 30 iterations of simulated beamlet generation and feedback, the calculated phase is projected onto the SLM, and the fluorescent beamlet array signal generated from a homogeneous fluorescent sample is detected by the SPAD array. These beamlet array signals are then normalized for the quadratic effect of two-photon excitation. The inverse response is then determined and incorporated as the new desired beamlet output. This process is repeated a number of times until the beamlet uniformity is maximized. For an 8 × 8 array, in order to generate four sequential axial planes of equidistant spacing (z), where each plane significantly overlaps spatially in x and y, a particular z-offset pattern was applied to the beamlet array [see Fig. 2(a)]. Each individual z plane consists of 4 × 4 beamlets, and as each adjacent beamlet corresponds to a different plane, the beamlets must be raster scanned over twice the distance required in x and y for a single beamlet in a single-plane 8 × 8 acquisition in order to generate a complete image. The generated image is composed of four planes, each containing 16 subimages, as shown in Fig. 2(b). In order to test the ability of the system to vary the interplanar z-offset and acquire data sets at multiple planes simultaneously, we measured the fluorescence axial response with an autofluorescent diagnostic slide (Chroma Inc.) under 870 nm excitation. The diagnostic slide provides a homogeneous fluorescent signal and effectively acts as a fluorescent sea. By calculating the differential of this response, one can determine the surface position from the peak. In Fig. 3, the differential of this axial response for each plane is presented for interplanar spacings (z) of 0.5, 1.0, 1.5, and 2.0 μm. To generate complete images in each of the four planes, each beamlet must be overscanned by a factor of 2 in both the x and y axes, as each adjacent beamlet is allocated to another plane. The user simply applies the precalculated phase pattern with the appropriate interplanar spacing and acquires an image composed of 8 × 8 subimages, which are then processed to give 4 × 4 subimages for each plane, as shown in Fig. 4. As seen in Fig. 4, the 3D image is first constructed from a 30-image single-plane z-stack. From this data set, the central z position is chosen, and the precalculated four-planar beamlet array pattern with z-offset (in this case 2 μm) is projected onto the SLM. A raster scan is performed for each beamlet array, and the subimages from the original image acquired are sorted into their associated planes, denoted by their corresponding number.
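To make the feedback loop above concrete, here is a minimal 2D sketch of a weighted Gerchberg-Saxton iteration that equalizes an 8 × 8 spot array. The grid size, spot spacing, Gaussian illumination, and weighting rule are illustrative assumptions; both the per-beamlet defocus (z-offset) terms and the two-photon fluorescence feedback described above are omitted for brevity.

```python
# Minimal sketch of a weighted Gerchberg-Saxton loop for an 8 x 8 spot array.
# Grid size, spot spacing, and Gaussian illumination are assumptions; the
# per-beamlet defocus terms and SPAD-signal feedback are not reproduced.
import numpy as np

N = 256
yy, xx = np.indices((N, N))
illum = np.exp(-((xx - N / 2) ** 2 + (yy - N / 2) ** 2) / (2 * 60.0 ** 2))

target = np.zeros((N, N))
for i in range(8):
    for j in range(8):
        target[N // 2 - 28 + 8 * i, N // 2 - 28 + 8 * j] = 1.0  # desired spots

rng = np.random.default_rng(1)
phase = np.exp(2j * np.pi * rng.random((N, N)))       # random initial SLM phase
weights = np.ones_like(target)
spots = target > 0

for _ in range(30):                                   # 30 iterations, as in the text
    far = np.fft.fftshift(np.fft.fft2(illum * phase))  # SLM plane -> image plane
    V, phi = np.abs(far), np.angle(far)
    weights[spots] *= V[spots].mean() / np.maximum(V[spots], 1e-12)  # boost weak spots
    far = weights * target * np.exp(1j * phi)          # impose weighted amplitude, keep phase
    phase = np.exp(1j * np.angle(np.fft.ifft2(np.fft.ifftshift(far))))  # keep SLM phase only
```

In the full scheme described above, each spot's target additionally carries a defocus phase corresponding to its plane, and the spot weights are further rescaled using the measured SPAD signals.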
To demonstrate the dynamic imaging capability of the system, we imaged live human epithelial cells expressing a RhoA GTPase mTFP/Venus FRET biosensor [24]. Images were acquired at 0.1 Hz before and after the media was exchanged for Ca2+-free media to induce cell-cell dissociation through disengagement of cadherin receptors (Fig. 5). Analysis of spatial changes in RhoA activation revealed a significant increase in fluorescence lifetime at cell-cell junctions from 1.65 ± 0.14 to 1.78 ± 0.18 ns following removal of Ca2+, shown at time point 0 s in Fig. 5, consistent with a reduction in active RhoA and the resulting actomyosin contractility, as previously suggested [25,26]. The control lifetime of mTFP was measured at 2.15 ns, indicating that the resting-cell GTPase activity of the biosensor corresponds to a FRET efficiency of 23% (E = 1 − τ_DA/τ_D = 1 − 1.65/2.15 ≈ 0.23), which is consistent with Fritz et al. [24]. The 3D projection enables us to interrogate the architecture of the cell and has the potential to provide unprecedented spatiotemporal resolution regarding RhoA activity within cells relative to their 3D position. While this is only a proof of principle with a four-plane acquisition, with more beamlets this technique can be rescaled to a large number of focal planes, enabling full 3D acquisition and reconstruction. In the future, the generation of 32 × 32 beamlets will allow 8 × 8 beamlet acquisition of 16 individual planes simultaneously. At present, we are limited in the number of beamlets we can generate due to the laser power requirements for multiphoton excitation and the low optical efficiency of the SLM used (∼25%). Moving to a custom-designed diffractive optical element with high optical efficiency would enable full utilization of the Megaframe camera.
2018-12-15T14:02:38.047Z
2018-12-12T00:00:00.000
{ "year": 2018, "sha1": "0ec69ad994f63185bd7b893caf6b02e82659b56e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/ol.43.006057", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "0ec69ad994f63185bd7b893caf6b02e82659b56e", "s2fieldsofstudy": [ "Engineering", "Medicine", "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
4134739
pes2o/s2orc
v3-fos-license
High Rates of Neurobehavioral Disorder Associated with Prenatal Exposure to Alcohol among African Americans Driven by the Plethora of Liquor Stores in the Community Introduction After nearly half a century of treating low-income, African-American psychiatric patients for a variety of disorders, one single fact has repeatedly proven to be true: "risk factors are not predictive factors because of protective factors" [1]. The protective factors operating in people's lives work to mitigate negative outcomes, such as an adolescent's participation in violent behavior or drug use. We first learned this lesson in 1982, when our research team at the Community Mental Health Council, Inc. began to call the nation's attention to the inordinate number of African-American children at risk for negative outcomes because of exposure to violence [2]. However, we also learned there were protective factors in these children's lives that nullified that risk. These lessons fueled research into childhood trauma for people all over the planet. When the Aban Aya project ran in Chicago Public Schools between 1994 and 1998, protective factors - such as rebuilding students' "village" of social and emotional support, providing opportunities to increase connectedness and self-esteem, and teaching youth social and emotional skills - were placed into the lives of "at risk" middle school students, reducing growth in violent behavior. Historical and Current Perspective As far back as 1966, "familial retardation" or "subcultural retardation" was reported to be the largest category of what was then called "Mental Retardation". The surveillance at the time noted this category of "Mental Retardation" incorporated "25 to 40 percent of the institutionalized retardates and 60 to 75 percent of all mentally retarded in the community". At the time, it was thought that the lack of intellectual stimulation and the inferior environment of low-income communities were important reasons children in the low-income strata of society had a greater prevalence of "familial retardation" or "subcultural retardation" [10]. In fact, the United States "Head Start" program was designed to remedy this lack of intellectual stimulation and inferior environment [11]. As science progressed, a major cause of what was once diagnosed as "sociocultural mental retardation," and had been assigned many other labels and diagnoses, was finally discovered to be Fetal Alcohol Spectrum Disorder (FASD) [12].
Chicago is home to more than 1000 liquor stores [13]. Research has shown that liquor stores are disproportionately located in African-American communities [14][15][16]. This reality, and the fact that more than 50% of pregnancies are unplanned [17], suggests that the plethora of liquor stores in African-American communities created a social determinant of biological health, putting such communities at risk of high rates of FASD - recently coined Neurobehavioral Disorder associated with Prenatal Alcohol Exposure (ND-PAE) by DSM-5 [18]. Thus, the author proposes that ND-PAE has been prevalent in the African-American community for decades. However, due to a lack of research and the presumption that learning and behavioral disorders among "at risk" children stemmed from the lack of intellectual stimulation and inferior environment that being in a low-income community provided, the true problem, which stems from acquired biology, has been hidden in plain sight. Prevalence of Neurobehavioral Disorders Associated with Prenatal Exposure to Alcohol (ND-PAE) Although not unique to low-income African-American populations in the United States [19][20][21], evidence of ND-PAE as a "risk factor" in low-income African-Americans is found in several recent sources; unfortunately, the methodologies and samples in these sources are all different, making direct comparisons impossible. A chart audit that randomly sampled one-third of children in several school clinics for children with behavioral problems found 39% had ND-PAE [9]. Active case ascertainment methodology in Englewood's St. Bernard Psychiatric Unit, located in one of the poorest African-American communities in Chicago, and in Jackson Park Hospital's Family Practice Clinic revealed that 32% and 29% of patients, respectively, met the clinical criteria for Prenatal Alcohol Exposure [9]. Another study looked at youth, 50.6% of whom were African American, referred for severe behavioral disorders by the Department of Children and Family Services; 28.5% of these youth had ND-PAE, and 26.4% had previously been misdiagnosed as having ADHD [22]. In contrast, the rate of ND-PAE in the general population has been found to be 3.6% [23] - Whites comprised 76% of this sample, and African Americans were only 7% of the sample (with the remaining sample encompassing four or more ethnicities, including Hispanic). These findings are far from definitive, as they are burdened by the methodological limitation that the samples are all different, but the more than 20-point difference between the rates of ND-PAE in the predominately White "general population" sample and the lowest previously cited rate among the significantly more African American samples is stark and dismaying.
The prevalence of ND-PAE in African-Americans is more common than previously realized, and, as with other neurodevelopmental disorders, patients do not "outgrow" it, but carry it into adulthood. Bell and Chimata [24] examined 611 predominately African-American patients in a Family Medicine Clinic on Chicago's South Side (the clinic serves a population of 143,000, 96% of whom are African-American, with a median household income of $33,809). Two hundred and thirty-seven (38.8%) had clinical pictures that were consistent with ND-PAE [18]. Our clinical research reveals that patients with ND-PAE from living in "food swamps" have unique newborn medical histories, educational trajectories, and difficulty with employment, and we have found that exploring these issues in youth and adults can provide useful clues that might suggest prenatal alcohol exposure. For example, a medical history that indicates the possibility of ND-PAE would include low birth weight (< 5 pounds, 8 ounces / 2.5 kilograms) or prematurity, heart murmurs, and/or deformities of the hands, joints and bones. Frequently, patients with prenatal alcohol exposure have a distinctive facial appearance - epicanthal folds, a flat mid-face, an indistinct philtrum, and a thin upper lip - as well as evidence of subtle brain damage characterized by central nervous system dysfunction. A childhood educational trajectory might reveal developmental disabilities (intellectual disability, learning disability, attention-deficit/hyperactivity symptoms, speech and language difficulties, and/or explosive behavior). Finally, an adult employment history might reveal chronic poor job performance or repeated tenures of less than 6 months [6,24]. Having outlined the risk factors for ND-PAE and highlighted the concentration of liquor stores in low-income, African-American "food swamps", where the community is, literally, flooded with alcohol, a connection has been made to the disproportionate number of African-Americans who develop ND-PAE. What, then, are the protective factors that make being African-American not a risk factor in this context? Protective Factors that Possibly Prevent ND-PAE from Being Inevitable Currently, research is ongoing to support the efficacy of giving patients choline, folate, Omega-3, and Vitamin A to mitigate alcohol's deleterious effects both pre- and postnatally [25][26][27]. However, because the problem is so rampant in the community the author serves, we have been using this regimen clinically, with noteworthy results. These nutraceuticals are not a cure-all, but there have been improvements in youth and adult patients with ND-PAE - often misdiagnosed with bipolar disorder, schizophrenia, depression, autism and other psychiatric disorders - whose psychotropic medications did not provide symptom relief [6]. The hope is that scientific evidence emerges to support this alternative to the current standard pharmaceutical strategy (which is not very efficacious), so that we can move forward on a larger scale with public policy that would bring vitamin supplement treatment to those who need it in the low-income, African American community.
Public health policy should help African-American communities realize that ND-PAE may be a tremendous current public health threat, and that it increases the risk of negative outcomes in both school and life. Public health policy should also encourage obstetricians to ask all pregnant women when they realized they were pregnant, and whether they were drinking before they realized they were pregnant. If the answer indicates they inadvertently exposed their unborn fetus to alcohol, the woman should increase her choline intake, as research shows the nutrient is safe and can remediate damage done to neurodevelopment in the fetus [28]. Moreover, public health policy should help implement a policy that insists children in juvenile detention, special education, foster care and mental health care be screened for ND-PAE, as diagnosis rates have been shown to be high in these populations. Once the research is finalized and "pans out" (which the author believes it will), children should be offered the chance to benefit from a biotechnical protective factor - one of the Seven Field Principles [5,6] demonstrated to mitigate negative outcomes in youth - and receive Choline 500 mg, Folate 400 mcg, and Omega-3 500 mg twice daily, and Vitamin A 2,000 IU once daily [29].
2018-03-23T22:35:16.808Z
2016-06-30T00:00:00.000
{ "year": 2016, "sha1": "48bf67f5e43d109a2c1460be61accd36c80783c0", "oa_license": "CCBY", "oa_url": "https://doi.org/10.23937/2469-5793/1510033", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "48bf67f5e43d109a2c1460be61accd36c80783c0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259207034
pes2o/s2orc
v3-fos-license
Transnational public and global health education in China Transnational public and global health programs in China have rapidly expanded over the past 20 years, and have the potential to make important contributions to China's global health workforce. However, there has been sparse, if any, literature specific to transnational public and global health higher education in China. In response, this perspective article aims to: (1) outline current transnational public and global health programs in China, and (2) delineate opportunities and challenges for transnational public and global health programs to enhance China's global health workforce. Based on internet searches, eight active transnational public and global health programs in China were identified in September 2022 (one Bachelor's; four Master's; three doctoral). Degree-awarding institutions are located in Australia, Portugal, the United Kingdom, and the United States. Courses for stand-alone transnational programs were co-delivered by faculty from the Chinese and foreign sponsoring institutions. The earliest and latest programs were established in 2001 and 2022, respectively, and the average year of establishment was 2013. The endurance of some programs (three programs operating ≥ 10 years) indicates the potential sustainability of transnational public and global health programs in China. However, opportunities for cross-cultural engagement appear to be constrained by the lack of English (or other language) requirements in some programs, limited recruitment of international students, pandemic travel restrictions, and a dearth of funding for global health research outside China. In addition, students enrolled at transnational universities in China are currently ineligible for China Scholarship Council funding. As China's need for global health capacity grows amid a rapidly shrinking population of younger citizens, strategic investments in transnational public and global health programs may be of increasing value. Introduction Transnational education has been defined as "all types of higher education study programs, or sets of courses of study, or educational services (including those of distance education) in which the learners are located in a country different from the one where the awarding institution is based" [1]. Hence, the primary distinction between transnational education and traditional cross-border education is that the student does not reside in the same country as the degree-awarding institution, at least during the time of their study. Over the past 20 years, transnational public and global health education programs in China have expanded rapidly, driven in part by factors that benefit individual teachers and students alike. By physically teaching in China for extended periods as "flying faculty" or as permanent employees, transnational education universities and programs offer teachers from degree-awarding countries a unique opportunity to glean academic experience in China while maintaining affiliation with the degree-awarding institutions and delivering lectures in English or other non-Chinese languages. This can be especially attractive for faculty interested in conducting research in China who may not be fluent in the Chinese language. For students, the allure of transnational education programs in China is driven by the desire for international experience, as well as other factors.
First, the costs of a transnational education in China will almost invariably be lower than the costs of physically moving to and studying in the degree-awarding country of a transnational program (e.g., the United States or the United Kingdom). Second, even if finances are not of great concern, some non-Chinese citizens from the global South may encounter difficulties securing a visa to study in specific industrialized countries. For example, due to geopolitical concerns, students from some countries may face difficulties trying to secure a student visa to the US or other industrialized countries in the global North [2]. Hence, studying at a transnational program in China may be a viable option that enables non-Chinese citizens to obtain a higher education degree from countries that they may not be able to physically enter. Lastly, some Chinese citizens (and perhaps their parents) may prefer transnational education because it enables them to remain in China. Some students may not have any interest in emigrating, while others may not yet feel prepared to live abroad on their own for an extended period of time. Moreover, physically relocating to a foreign country has profound implications for the development of one's personal and professional network, particularly during college, when many important and long-term social connections are formed. If the student is resolute about remaining in their home country, then they may be reluctant to risk sacrificing existing and potential in-country networks by moving abroad. Transnational public and global health programs in China are well poised to bolster the skillsets of China's global health workforce. Since at least the 1960s, China has provided important inspiration and support to global health initiatives ranging from the Alma-Ata Primary Health Care Initiative to COVID-19 vaccination campaigns [3]. However, recent research has indicated that China's global health workforce remains relatively underdeveloped and requires stronger professional and communication skills training [4]. Due partly to such concerns, in 2017, China's National Health and Family Planning Commission (currently known as the National Health Commission) established a global health talent pool of skilled public health professionals within China who would be able to respond to short- and long-term global health initiatives [5]. However, despite the growth of transnational public and global health programs in China, and the evolving needs of China's global health workforce, to date there has been little if any published analysis specific to this issue. In response, this perspective article aims to: (1) outline current transnational public and global health programs in China, and (2) delineate opportunities and challenges for transnational public and global health programs to enhance China's global health workforce. Strengthening China's global health workforce through transnational global health education Transnational global health education programs can help strengthen China's global health talent pool in two key ways. First, university-based transnational global health programs provide a relatively low-stakes setting for prospective global health professionals to enhance their cross-cultural communication and teamwork skills, given that lectures and curricula are often delivered in English by international faculty.
Opportunities to write reports, deliver presentations, and engage in discussions in English with international interlocutors will invariably help hone non-native English-speaking students' communication skills, which are applicable to global health projects and engagement with international partners. Second, transnational global and public health programs can help strengthen the professional skills needed for an effective global health workforce. Unlike many public health programs in China, which are based within medical schools and have a stronger clinical focus [6], transnational global and public health programs in China tend to provide relatively more interdisciplinary training and program management skills [7,8]. The following section describes a public health undergraduate course and summer research project which I led at a Sino-British transnational university in China. Case example of transnational public health education in China From 2018 to 2022, I taught an upper-division undergraduate public health course entitled "Methods for analyzing public health: surveillance, monitoring, and evaluation." The learning objectives of the course focused on enhancing students' knowledge of public health surveillance and program evaluation, as well as honing students' research skills. In accordance with Bloom's taxonomy of learning [9], the learning objectives for this upper-division course focused on knowledge creation, as opposed to only knowledge retention and understanding. Students received training on evaluating a public health intervention using an interrupted time-series analysis based on publicly available data. Previous student projects include using UNICEF data to evaluate a breastfeeding intervention in Bangladesh and government behavioral surveillance data to evaluate a breast cancer screening intervention in the United States [10]. Assessments were designed sequentially, whereby subsequent assignments build upon previously submitted work, and fellow students and the instructor provide written and face-to-face feedback during class time. Structuring assessments sequentially appears to augment student engagement with instructor feedback [11,12], and this has been reflected in my own experience. In the summer of 2020, I conducted a COVID-19 vaccination preferences study with colleagues and undergraduate public health students studying at a transnational university in China. During the data collection phase, COVID-19 vaccines were still in development, and there was considerable concern worldwide about vaccine hesitancy and suboptimal vaccine uptake. Hence, the goal of the study was to assess the degree to which vaccine mandates that bar access to public spaces could increase the adult general population's willingness to vaccinate for COVID-19. To that end, our team conducted a nationwide online discrete choice experiment in China. Undergraduate team members received training in discrete choice experiments and gained considerable experience in project management, cross-cultural team building, and the academic publication process. Findings of the study were eventually published in an international academic journal with an undergraduate public health student as lead author [13]. I believe these learning experiences strengthened students' professional and cross-cultural communication skills in ways that are applicable to global health programs and projects.
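As an illustration of the interrupted time-series (segmented regression) analysis the students practiced, consider the following sketch; the monthly series, effect sizes, and intervention month are synthetic assumptions, not data from any of the projects described above.

```python
# Illustrative sketch of an interrupted time-series (segmented regression)
# analysis; all data and the intervention point are synthetic assumptions.
import numpy as np

months = np.arange(48, dtype=float)
post = (months >= 24).astype(float)                  # intervention begins at month 24
rng = np.random.default_rng(2)
rate = 30 + 0.1 * months + 5 * post + 0.3 * post * (months - 24) \
       + rng.normal(0, 1, 48)                        # fake monthly outcome series

# Design matrix: intercept, baseline trend, level change, post-intervention slope change
X = np.column_stack([np.ones(48), months, post, post * (months - 24)])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
print(f"level change: {beta[2]:.2f}, slope change: {beta[3]:.2f}")
```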
Current transnational public and global health programs in China Although transnational universities and programs have expanded rapidly in China over the past 20 years, there has been sparse, if any, literature that has specifically surveyed transnational public and global health programs. In response, I sought to identify and summarize the key features of active transnational public and global health programs in China. Programs were identified by searching for the keyword "health" (健康, 卫生) in the Ministry of Education's "Chinese-Foreign Cooperation in Running Schools" (中外合作办学) database (https://www.crs.jsj.edu.cn/) (date of search: September 23, 2022). Programs deemed to be within the scope of public or global health were included in the analysis, and additional information about each program was gleaned from publicly accessible institutional and program websites. As of September 2022, there were eight active transnational public and global health programs that had been approved by the Ministry of Education of the People's Republic of China (Table 1). Foreign degree-awarding institutions are located in Australia, Portugal, the United Kingdom, and the United States. Chinese partner institutions are located throughout China, ranging from Haikou in the south to Harbin in the north. Approved programs include one Bachelor's program, four Master's programs, and three doctoral programs. The earliest and latest programs were established in 2001 and 2022, respectively, and 2013 was the average year of establishment. Of the eight programs, four are delivered at a transnational university or college, and four are delivered at the Chinese university. Target student enrollment for most Master's programs was 80 students per year, and one doctoral program aims to enroll 25 students per year. Another doctoral program, established in 2015, has ceased enrollment of new students. No target enrollment information was found for the Bachelor's program. Four program application websites were only available in Chinese, suggesting that these programs were focused on recruiting students domestically from within China. English language proficiency criteria are not required for admission to at least three of the stand-alone transnational programs. Programs delivered at transnational universities are exclusively taught in English by institutional faculty (e.g., Duke Kunshan University), but at least three stand-alone transnational programs are taught in both Chinese and English by faculty from the Chinese and foreign institutions (e.g., the Dalian Medical University-Benedictine University joint Master of Public Health program). English-to-Chinese interpretation/translation is available in stand-alone programs which have no English language proficiency requirements for admission. One transnational Master's program provides a 5-15 week English prep program for prospective students who are unable to meet the minimum English language requirements for admission. The future of transnational public and global health programs in China Transnational public and global health programs have strong potential to enhance critical communication and professional skills among China's developing global health workforce. However, tapping into this potential will require strategic planning and deliberate effort. Currently, several established transnational programs have little to no English language requirements for admission.
Forgoing such language requirements clearly widens the pool of eligible program applicants from China, but is not without opportunity costs. First, Chinese students may lack meaningful opportunities to hone their English language communication and teamwork skills if the entire program can be completed in Chinese. For Chinese students who have already entered the domestic workforce, the transnational program may represent one of the few opportunities to experience extended professional and academic engagement with foreign colleagues. It is reasonable to expect that as students' English language communication skills sharpen, so too will their ability to contribute effectively to international global health initiatives. For students who are interested in transnational programs but are unable to meet the minimum English language requirements for admission, program administrators can consider offering intensive English language prep programs prior to enrollment in subject matter courses, as one transnational college in Hainan province is currently doing. Second, tailoring program recruitment and curricula to Chinese-speaking students excludes the large population of potential international students who are not fluent in Chinese. International students can enrich the learning experience by increasing student opportunities to strengthen cross-cultural communication, teamwork, and professional networks that will endure well beyond graduation. These prospective international students may also become of increasing interest to higher education program directors in general as China's population rapidly ages and the cohort of young adults shrinks. In 2012, when two transnational public health programs were established, there were approximately 119.6 million Chinese residents between the ages of 20 and 24 [14]. However, this population had dropped to ~80.1 million as of 2022, and is projected to decline to ~66.5 million by 2042 [14]. That said, the effect of demographics on student enrollment numbers may be partially mitigated by broader global trends. Travel restrictions, anti-Asian sentiment [15], and geopolitical tensions appear to have dissuaded some Chinese students from studying overseas [16]. If such trends intensify, then enrollment of Chinese students into transnational programs may be adequate, as more Chinese students opt to obtain foreign educational credentials from within China rather than abroad. Of course, the sustainability of transnational programs rests upon not just economic viability, but sufficient stakeholder consensus and geopolitical stability as well [17]. It would be prudent for transnational program administrators to continue examining the extent to which current enrollment strategies focused on Chinese citizens are sustainable in the face of medium- to long-term demographic and geopolitical trends. In the same vein, it is important to remain clear-eyed about the inherent challenges of transnational public and global health programs. First, limited English proficiency may complicate comprehension of some lectures and reading material, especially during the first year of the program. Students who are not native English speakers may find it more difficult to improve their English language skills given that socializing outside of school may not require the use of English.
In an official national survey of transnational education universities and programs in China, students' English proficiency level was the problem most commonly cited by faculty and staff at transnational institutions of higher education [18]. Second, maintaining the quality of the awarding institution in a foreign country can be daunting at times. To begin with, it may be difficult to recruit high-quality faculty to the transnational education program. Qualified faculty members may be unable or unwilling to live in the host country for extended periods of time, significantly reducing the pool of qualified applicants. One proposed solution has been to use "flying faculty", whereby faculty permanently posted at the degree-awarding institution fly into the host country to deliver several weeks of intense classes. However, compressing a semester's worth of material into several weeks can be taxing on both student and instructor, and may not allow students enough time to thoroughly digest the material and develop well-thought-out written assessments [18]. Moreover, the travel restrictions imposed during the COVID-19 pandemic acutely highlighted the risks of flying faculty and other educational arrangements heavily dependent on unimpeded international travel [19]. Third, conflicting policies and cultural norms can also engender tensions that could potentially compromise the learning experience of students. For example, numerous learning resources may be located on websites (e.g., www.youtube.com) that cannot be accessed due to local government internet restrictions [20]. Hence, students at transnational universities may potentially face more obstacles to accessing learning materials compared to students studying at the degree-awarding institution's home campus. In addition, the dual, parallel degrees (one from the host country and one from the degree-awarding country) common in transnational education programs can become complicated by different assessment standards. For example, in the UK, marks as low as 40 are deemed passing, and faculty often use the full range of the marking scale. In contrast, in traditional Chinese universities, marks below 60 are considered failing, and faculty typically cluster students' marks more closely together [21]. Therefore, when marks for a single course are applied to dual degree programs, there may be a lack of consensus about which marking standard should prevail. Fourth, transnational universities and programs currently encounter some unique funding challenges. For example, the national government's China Scholarship Council enables many students from the global South to study at traditional Chinese universities and programs, but transnational universities and programs have been ineligible for such funding. Thus, transnational universities and programs must be proactive about funding student scholarships through alternative means. Extramural research grants can be a potentially important source of revenue, but international faculty with limited Chinese proficiency may find it challenging to develop compelling research grants in Chinese. International faculty employed at transnational universities can submit English language research proposals to funding programs specifically designed for foreign researchers (e.g., the Research Fund for International Scientists), but global health research funding often cannot be spent outside China [3].
Under the auspices of South-South programs such as the Belt and Road Initiative, China's government can develop global health funding streams explicitly focused on international projects and allow prospective applicants to apply in either English or Chinese. Mitigating language barriers for global health funding opportunities will likely enable a greater pool of international global health researchers at transnational universities (as well as traditional domestic universities) to help build China's global health workforce and spur greater opportunities for student engagement in global health. Conclusion Transnational public and global health programs in China have strong potential to enhance cross-cultural communication and professional skills of China's global health workforce. However, opportunities for cross-cultural engagement are constrained by lack of English (or other language) requirements, limited recruitment of international students, pandemic travel restrictions, and a dearth of funding for global health research outside China. As China's need for global health capacity grows amid a rapidly shrinking population of younger citizens, strategic investments in transnational public and global health programs may be of increasing value.
2023-06-21T14:23:01.485Z
2023-06-21T00:00:00.000
{ "year": 2023, "sha1": "9afc494eae26617fc9ba450a68189576af7e3086", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "9afc494eae26617fc9ba450a68189576af7e3086", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257025883
pes2o/s2orc
v3-fos-license
Discrimination of entangled photon pair from classical photons by de Broglie wavelength Quantum optics largely relies on the fundamental concept that the diffraction and interference patterns of a multi-partite state are determined by its de Broglie wavelength. In this paper we show that this is still true for a mixed state with one sub-system being in a classical coherent state and one being in an entangled state. We demonstrate quantum-classical light discrimination using the de Broglie wavelength for states with all classical parameters being the same. Multiphoton quantum states hold the promise of growing importance in present and future practical applications. This stems from the fact that N entangled photons of wavelength λ propagate as a solitary entity at the de Broglie wavelength λ_N = λ/N. This can be detected provided the detectors are properly arranged in the experimental setup [1,2]. One example is optical lithography with entangled photons [3,4], promising to achieve higher component density in microelectronic devices. Imaging with non-classical photons, allowing one to bypass the Rayleigh resolution limit and the classical shot-noise level, is another notable example [5][6][7]. Practical schemes realizing quantum imaging (or lithography) are expected to operate with sources having high production rates of correlated photons [8][9][10]. However, the non-ideality of sources capable of producing multi-partite photon states, as well as optical losses destroying entanglement, may result in mixed states, where the entangled and classical photons have the same wavelength, polarization and propagation direction. This makes it impossible to discriminate them by using any of the classical variables such as the optical wavelength, polarization, etc. On the other hand, if such a mixed entangled and classical state is incident in the same beam on the optical detector, as will be the case in a quantum imaging setup, this will compromise the fidelity and purity of the detected quantum state or even lead to detector saturation, with just a small fraction of entangled photons being recorded. Thus, one important step towards realizing quantum imaging with a high photon rate in the presence of a spurious light background will be the separation of non-classical photon states from the classical ones. It has already been reported [11,12] that, under certain conditions, the quantum diffraction of bi-photons at a grating manifests a single-point second-order correlation function G^(2)(k,k) with maxima at the diffraction lobes of the first-order (intensity) pattern G^(1)(2k) of classical photons at half the wavelength, λ/2. The entangled photon pair of wavelength λ has the de Broglie wavelength λ_DB = λ/2 that defines such a second-order diffraction pattern and, hence, the physical presence of bi-photons at the respective diffraction angles of the configuration space. This observation indicates that photon pairs are physically present in each of these directions, which is the basis for our concept of discriminating quantum and classical states of the same wavelength by quantum diffraction or quantum interference. In this work we implement this concept using quantum diffraction on an echelle grating and demonstrate quantum-classical light discrimination (QCD) by diffraction at the de Broglie wavelength.
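As a back-of-envelope illustration of this geometry: on a grating of pitch d, two-photon diffraction at the de Broglie wavelength 810/2 = 405 nm produces orders that interleave with the classical 810 nm orders, so the odd 405 nm orders point in classically forbidden directions. The sketch below assumes normal incidence for simplicity (the experiment uses an oblique echelle configuration), with a pitch matching the 31.6 grooves/mm grating specified in the next section.

```python
# Sanity check of the discrimination idea under an assumed normal-incidence
# geometry: odd diffraction orders at 405 nm (the bi-photon de Broglie
# wavelength) have no classical 810 nm counterpart.
import numpy as np

d = 1e-3 / 31.6                            # pitch of a 31.6 grooves/mm grating, in metres

def order_angles(lam, m_max=6):
    m = np.arange(-m_max, m_max + 1)
    s = m * lam / d                        # grating equation: sin(theta_m) = m * lam / d
    return np.degrees(np.arcsin(s[np.abs(s) <= 1]))

th_810 = order_angles(810e-9)              # classical single-photon orders
th_405 = order_angles(405e-9)              # two-photon (de Broglie) orders
forbidden = np.setdiff1d(np.round(th_405, 6), np.round(th_810, 6))
print(forbidden)                           # odd 405 nm orders: bi-photon-only directions
```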
Experimental set-up The setup used for demonstration of the QCD concept is presented in Fig. 1. It consists of four main parts: the mixed-state photon source, the quantum-classical discriminator, and two coincidence detection chains for recording, respectively, the time-difference G^(2)(t-t′) or spatial G^(2)(k, k′) correlation patterns in the far field of the echelle grating. The dedicated source of mixed photon states is built around a traditional entangled photon source based on type-0 spontaneous parametric down-conversion (SPDC) in a periodically poled potassium titanyl phosphate (PPKTP) crystal [13,14]. The crystal is cw pumped by a volume Bragg grating (VBG) stabilized GaN diode laser [15] at 405 nm wavelength and produces signal-idler photon pairs at 810 nm. After the PPKTP crystal, the light is a mixed state of a classical coherent state at 405 nm and a quantum bi-photon state at 810 nm. When needed, two optical filters block the classical light at 405 nm with 120 dB attenuation. On the optical axis of the pump laser and the SPDC source we add a low-noise, single-transverse- and polarization-mode VCSEL laser beam at 795 nm. This VCSEL laser is an independent source of a highly coherent classical state at a wavelength close to the optical wavelength of the bi-photon state. The wavelength of the coherent state source is intentionally detuned from 810 nm to exclude the possibility of parametric amplification in the PPKTP crystal [16,17]. In addition, the power in the coherent state beam is reduced to the power level of the SPDC bi-photon source (~6 nW) to avoid detector saturation. The quantum-classical photon state discriminator (QCD) is implemented using an echelle grating (Thorlabs GE2550-0363 with 31.6 grooves/mm) followed by an adjustable vertical slit located at a specific diffraction order (see below). With the slit sufficiently open we may observe several consecutive diffraction orders. Alternatively, by precisely positioning the slit and adjusting its width, we may select only one diffraction order, which is classically prohibited at the given optical wavelength. We demonstrate QCD operation with the incident probe beam containing classical photons, entangled photons, or both, as desired, by properly adjusting a combination of cut-off and band-pass filters. The spatial four-dimensional (4D) G^(2)(k, k′) correlation patterns at the QCD output are captured with a novel 32 × 32 pixel single-photon avalanche diode (SPAD) array detector, SuperEllen [18][19][20]. It is based on a time-to-digital converter (TDC) architecture and enables 160 ps resolution in the coincidence detection. In order to accommodate possible TDC delays and jitter across the array, the coincidence resolution is set to 480 ps. The photon detection probability (PDP) is >20% at 420 nm and >5% at 800 nm. More details about the detector and its functions, as well as the data processing and representation of correlation patterns, are given in the Supplementary Information. The sensitive surface of the array detector is placed at the back focal plane of the objective lens, thus detecting the far-field (FF) spatial correlation patterns. The imperfections of such SPAD arrays are linked to their relatively high pixel dark count rates and neighboring-pixel crosstalk [21,22], which, respectively, raise the accidental coincidence background due to events that are uncorrelated by nature and produce false positive correlation signatures.
Here we report spatial correlation patterns G^(2)(k, k′) obtained after a correction to reduce such crosstalk and the accidental coincidence background ~G^(1)(k) × G^(1)(k′). (The correction procedure is detailed in the Supplementary Information.) To further avoid possible ambiguities caused by residual crosstalk signatures, we perform the spatial correlation measurements using the non-collinear SPDC regime [14], so that the signal and idler photons hit non-neighboring pixels in the detector array and reveal negative (anti-)correlation traces. The time-difference G^(2)(t-t′) correlation functions are recorded using two standalone single-SPAD detector modules (ID Quantique id100-50) placed at the output ports of a beamsplitter (BS). The SPAD outputs are recorded on a digital oscilloscope (Teledyne LeCroy SDA 8137-B). The same oscilloscope also builds the start-stop histograms of delayed photon arrivals. This coincidence detection channel does not suffer from detector crosstalk. We report on QCD operation for both collinear (frequency non-degenerate) and non-collinear (degenerate) SPDC regimes [14], attesting that non-collinearity of the correlated photon pair has no impact on QCD operation. In the Supplementary Information we experimentally and theoretically show that there is no requirement for the correlated photon pair to be localized within a spot smaller than the feature to be resolved [3,4], in our case the echelle grating pitch used in the QCD. The only requirement concerns the uncertainty of the mutual locations of the partites composing the bi-photon state, namely, that its correlation width be smaller than the grating pitch. Quantum and classical diffraction regimes Fig. 2(a,d) show the intensity diffraction patterns G^(1)(k_x, k_y) for the pure coherent states at 795 nm (from the VCSEL) and 405 nm (from the pump laser), respectively. The first reveals only two successive diffraction orders, while for the half-shorter wavelength five successive orders are visible. The intensity diffraction orders in the G^(1)(k) pattern for the classical coherent state at 795 nm (close to 810 nm) almost coincide with the even orders of the coherent state at 405 nm. At the same time, there are no diffraction lobes of the coherent state at 795 nm in the directions of the odd diffraction orders at 405 nm. The corresponding second-order correlation patterns G^(2)(k, k′) obtained for these two coherent states are shown in Fig. 2(b,e).
Figure 2 caption: First- and second-order correlation patterns measured in the far field of the echelle diffraction grating illuminated with various pure states (coherent and bi-photon states). Top row panels depict two successive diffraction orders of the coherent state at 795 nm from the VCSEL, showing (a) the intensity distribution G^(1)(k_x,k_y), (b) the second-order correlation pattern G^(2)(k,k′) after pixel crosstalk correction and accidental removal, represented as a 2D map spanned over the linearized indexes of the two array pixels, and (c) its classical counterpart G^(1)(k) × G^(1)(k′) due to accidental coincidences. Middle row panels (d-f) show similar patterns for five successive diffraction orders of the 405 nm pumping laser. Bottom row panels (g-i) display similar patterns for five successive two-photon diffraction orders of the bi-photon state produced in the non-collinear SPDC regime at 810 nm wavelength. The residual correlations seen as vertical lines in (b,e) are due to photon detection in one of the diffraction lobes together with spurious light or a dark count at any other pixel of the array.
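For intuition, the following toy sketch shows how a stack of binary detector frames can be turned into such a 2D G^(2)(k, k′) map, using the linearized pixel index k = k_y + N × k_x introduced in the next paragraph, with the accidental background G^(1)(k) × G^(1)(k′) subtracted. The frames are synthetic, and the actual SuperEllen timestamp handling and pixel-crosstalk correction are not reproduced.

```python
# Toy model: flatten per-frame pixel hits into a 2D coincidence map over
# linearized indexes k = ky + N*kx and remove the accidental background.
# Synthetic frames only; real SuperEllen processing is more involved.
import numpy as np

N = 32                                                 # 32 x 32 SPAD array
rng = np.random.default_rng(3)
frames = rng.random((10_000, N, N)) < 0.002            # fake photon-detection frames

# Row-major reshape maps pixel (kx, ky) to linear index k = ky + N*kx
flat = frames.reshape(len(frames), N * N).astype(np.float32)

G1 = flat.mean(axis=0)                                 # singles rate per pixel, G1(k)
G2 = (flat.T @ flat) / len(frames)                     # same-frame pair rate, G2(k, k')
G2_corr = G2 - np.outer(G1, G1)                        # accidental-coincidence removal
```

Anti-correlated signal-idler pairs would then appear as excess probability along the antidiagonal of G2_corr.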
In order to represent G(2)(kx, ky, kx′, ky′) as a 2D map, we introduce linearized pixel indexes k = ky + N × kx (and k′ = ky′ + N × kx′) that continuously number in a line all N × N pixels within the array, first along the columns in the ky direction and then changing between the columns in the kx direction (more details can be found in the Supplementary Information). As a clue to interpreting such G(2)(k, k′) patterns after pixel reshuffling, in Fig. 2(c,f) we provide the corresponding G(1)(k) × G(1)(k′) patterns for accidental coincidences. To ease the comparison, they are also plotted using linearized pixel indexes.

Quantum and classical diffraction regimes
For a coherent state one shall have identical patterns 23, G(2)(k, k′) = G(1)(k) × G(1)(k′), while a difference between the G(2)(k, k′) and G(1)(k) × G(1)(k′) patterns renders non-classical states visible. In agreement with these considerations, for both coherent states the two corresponding patterns are very similar to each other. Even after removal of the accidental coincidences, the diffraction lobes are clearly visible on the main diagonal (directed from the bottom left corner to the top right corner) of the pattern in Fig. 2(b) due to the low rate of events 24 (see Supplementary Information). The off-diagonal lobes are due to the detection of photons from two different diffraction orders. The vertical and horizontal lines appear due to the possibility of separating the variables in the form of a product G(1)(k) × G(1)(k′). Because the diffraction lobes of the 405 nm laser intensity pattern in Fig. 2(d) are just one pixel wide in the ky-axis direction and there is a strong noisy background due to spurious light, only horizontal and vertical lines are visible in the second-order patterns in Fig. 2(e,f). In contrast to this, Fig. 2(g-i) show drastically different G(1), G(2) and G(1) × G(1) patterns measured in the case of the bi-photon state at 810 nm produced in the non-collinear SPDC regime 14. The single-point second-order correlation function G(2)(k, k) taken along the main diagonal of the second-order pattern in Fig. 2(h) reveals five successive two-photon diffraction orders. The directional angles of the two-photon diffraction lobes coincide with the 405 nm diffraction orders of classical photons, thus revealing the effect of quantum diffraction at the de Broglie wavelength of 810 nm/2. In the G(2)(k, k′) map spanned over all pixels of the array, they are represented by five anti-correlation traces directed along the main antidiagonal (from the top-left corner to the bottom-right corner of the pattern), as expected for an anti-correlated signal-idler pair. More details on the experimental results and the modelling of the diffraction of the bi-photon state, as well as on the use of the single-point correlation function, can be found in the Supplementary Information. The G(1)(k) × G(1)(k′) pattern in Fig. 2(i) does not show any signature of anti-correlation traces because the variables k and k′ are not separable, attesting to the detection of an entangled state in each two-photon diffraction order. Panels (a) and (d) in Fig. 3 display, respectively, the G(1) and G(2) patterns for the case when only the coherent state at 795 nm is incident on the grating and the slit is wide open. They are very similar to the patterns in Fig. 2(a,b) discussed above, showing just two successive diffraction orders at 795 nm. The only difference is that the grating incidence angle is changed for higher angular dispersion. The diffraction lobes at 795 nm are very close to the directional diffraction angles at 810 nm wavelength.
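Before continuing with Fig. 3, we note that the pixel-index linearization used throughout these 2D maps amounts to a simple array reshape. The following minimal NumPy sketch is illustrative only; N = 32 matches the SuperEllen array described above, and everything else is assumed for the example.

```python
import numpy as np

N = 32  # array size, matching the 32 x 32 SPAD detector

def linearize(kx, ky, n=N):
    """Map a 2D pixel coordinate to the linearized index k = ky + n * kx."""
    return ky + n * kx

# A 4D correlation tensor G2[kx, ky, kx2, ky2] reshapes (in C order) into
# an (N*N) x (N*N) map whose axes are the linearized indexes k and k'.
G2 = np.zeros((N, N, N, N))
G2_map = G2.reshape(N * N, N * N)

# Sanity check: entry (kx, ky, kx2, ky2) lands at (k, k') in the 2D map.
kx, ky, kx2, ky2 = 3, 7, 12, 20
assert G2_map[linearize(kx, ky), linearize(kx2, ky2)] == G2[kx, ky, kx2, ky2]
```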
The G(2) pattern in panel (d) shows the two diffraction lobes on the main diagonal due to photons being detected simultaneously in the same diffraction lobe. The two off-diagonal lobes are due to photons detected in different diffraction orders. As discussed above, the weak vertical and horizontal lines are artifacts caused by noise events uniformly distributed across the array and detected simultaneously with a photon in one of the diffraction lobes. These features are due to accidental coincidences. The photon state is a classical one because the variables are separable, so that the G(2)(k, k′) pattern is given by a product of the two intensity patterns, G(1)(k) × G(1)(k′) (see the discussion of Fig. 2 above). Panels (b) and (e) in Fig. 3 report on the diffraction of the mixed input state of coherent photons at 795 nm and entangled photons at 810 nm produced in the non-collinear SPDC regime (the slit is kept wide open). As expected, the two successive diffraction orders at 795 nm wavelength remain visible. The G(2) pattern now shows additional features due to the diffraction of anti-correlated signal-idler pairs. These are seen as three anti-correlation traces pointing in the direction of the main antidiagonal. Note that the traces near the main antidiagonal and in the top right quadrant are clearly visible. The third anti-correlation trace, in the bottom left quadrant of the figure, is barely seen because of the inhomogeneous diffraction pattern of the echelle grating, residue detector pixel crosstalk (seen along the main diagonal) and noise events detected simultaneously with the photons (seen as vertical and horizontal lines). The principal diffraction lobes of the classical photons at 795 nm wavelength, seen on the main diagonal of the G(2) pattern, are located near the two anti-correlation traces from the two consecutive even diffraction orders of bi-photons at the de Broglie wavelength of 810 nm/2. The small shift due to the spectral dispersion of the grating has no impact on the generality of the results reported here. Most importantly, the odd G(2)(k, k) diffraction order at the bi-photon de Broglie wavelength, seen as the anti-correlation trace passing near the main antidiagonal of the pattern, does not overlap with any principal diffraction lobe of the 795 nm light. This can also be appreciated from the G(1) intensity pattern in Fig. 3(b). The entangled photon state is seen as a blurred cloud, thus attesting that bi-photons are physically present at the diffraction angles prohibited for classical photons of the same optical wavelength (see also the Supplementary Information). The G(2) pattern can no longer be decomposed as a product of two G(1) patterns, attesting that the diffracted state has non-classical content and is non-separable.

Demonstration of the quantum-classical discrimination by the spatial correlation patterns
The effect of the QCD on this input mixed state is pictured in panels (c) and (f) in Fig. 3. The output slit of the QCD is now narrowed and positioned halfway between the two principal lobes of the classical diffraction pattern, i.e. at one of the odd diffraction orders of bi-photons at their de Broglie wavelength of 810 nm/2. Thus, only the classically prohibited range of directional angles between the two lobes is passed. The photon pairs passing through the slit can be physically seen in the G(1) intensity pattern in Fig. 3(c) as a blurred cloud with no signature of classical photons.
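The selection rule that the slit exploits can be stated schematically; the following is a simplified, normal-incidence form of the grating equation (the actual geometry uses an oblique incidence angle, and the full treatment is in the Supplementary Information). Classical photons at the optical wavelength $\lambda$ diffract into directions

$$ d \sin\theta_m = m\,\lambda , $$

whereas the bi-photon, diffracting as a single object at its de Broglie wavelength $\lambda/2$, populates directions

$$ d \sin\theta_{m'} = m'\,\frac{\lambda}{2} . $$

Even orders $m' = 2m$ reproduce the classical directions, while odd $m'$ fall halfway between them and carry no classical light at $\lambda$; a slit placed at an odd $m'$ therefore transmits only the two-photon component.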
Correspondingly, the G(2) pattern in Fig. 3(f) reveals only one anti-correlation trace if we neglect the residue pixel crosstalk and noise events. The signature of only non-classical correlations in the transmitted photon state testifies that the combination of a diffraction grating and a slit discriminates the quantum and classical components. We may thus refer to it as a 'Quantum-Classical discriminator' (QCD) based on de Broglie wavelength diffraction.

Demonstration of the quantum-classical discrimination with temporal correlation patterns
The spatial correlation patterns G(2) in Fig. 3 are distorted by the residue pixel crosstalk and noise events, which has so far limited the QCD tests to the signal-idler pairs produced in the non-collinear SPDC regime. In order to unambiguously confirm that the transmitted light state of the QCD is a non-classical bi-photon state and that the QCD operates equally well with collinear signal-idler pairs, we measure its temporal correlations by recording the start-stop histogram of the delayed photon arrivals at the output of the QCD, using a beamsplitter (BS), two standalone single-SPAD detector modules and a digital oscilloscope (Fig. 1). Contrary to the spatially resolved measurements, the temporal resolution bin size is 100 ps, which turns out to be insufficient to accommodate the possible jitter and delay time variations in long-duration measurements, as shown below in the discussion of Fig. 4(d). Figure 4 shows the recorded time-difference histograms for four different input states of the QCD operating with the slit selecting the same range of directional angles as in Fig. 3(f). In panel (a) the input state of the QCD contains only coherent photons at 405 nm. This is achieved by introducing a 30 °C temperature offset of the PPKTP crystal from the phase-matching condition for SPDC and temporarily removing one band-pass filter, so that the pump beam is attenuated by only 60 dB, down to a 30 nW level. As the slit is located in an odd diffraction order of 405 nm, each detector reveals counts well above its dark counts. As expected, no photon bunching is observed at zero delay for this coherent state. Panel (b) displays the case where only the radiation of the 795 nm laser is fed to the input of the QCD, while all output beams are blocked by the slit in the odd diffraction order at the de Broglie wavelength 810 nm/2. The count rate is low, mainly defined by the detector dark counts and spurious light, attesting that no classical photons at 795 nm are present at the output of the QCD. Panel (c) presents the time-difference histogram when the QCD input is probed with the mixed state of coherent photons at 795 nm and non-collinear SPDC bi-photons at 810 nm. We see the appearance of a correlation peak at zero delay at the output of the QCD, with a peak-to-background ratio of 2.5:1. This result indicates the presence of a pure bi-photon state and thus provides additional evidence for the conclusion drawn from the results in Fig. 3(b) on the QCD operation. Note that the background is defined by accidental events due to dark counts and the detection of photons from different correlated pairs, while, as shown in Fig. 4(b), the 795 nm classical photons are suppressed by the QCD (see also the Supplementary Information). So far we have reported results obtained with non-collinear degenerate SPDC photons.
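The start-stop histogramming itself is straightforward to reproduce; the sketch below uses randomly generated placeholder timestamps (in the experiment the timestamps come from the two SPAD channels recorded on the oscilloscope), and only the 100 ps bin width is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder arrival times (seconds) on the start and stop SPAD channels.
start = np.sort(rng.uniform(0.0, 1.0, 5000))
stop = np.sort(rng.uniform(0.0, 1.0, 5000))

bin_width = 100e-12  # 100 ps bins, as quoted above
window = 5e-9        # illustrative +/- 5 ns delay range

# Collect all start-stop delays that fall inside the window.
delays = []
for t in start:
    lo = np.searchsorted(stop, t - window)
    hi = np.searchsorted(stop, t + window)
    delays.extend(stop[lo:hi] - t)

# Histogram of delayed photon arrivals; a peak at zero delay would
# signal correlated photon pairs, as in Fig. 4(c,d).
bins = np.arange(-window, window + bin_width, bin_width)
hist, edges = np.histogram(delays, bins=bins)
```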
In order to prove that the observed QCD features are not linked to the angular spread of bi-photons, we repeat the measurement from Fig. 4(c) with the PPKTP crystal temperature increased by a few °C, in the collinear non-degenerate SPDC regime 13,14. Note that with increasing temperature the pair production rate lowers 13,14, and we therefore perform the measurements with a significantly longer integration time (the integration times are quoted in the figure caption). Panel (d) represents the time-difference histogram for such a mixed input state of coherent photons at 795 nm and collinear photon pairs at 810 nm. Once again, the coincidence peak at zero delay in Fig. 4(d) indicates the presence of bi-photons at the QCD output. A careful examination of this histogram reveals excess correlations in the two consecutive bins near zero delay (two black points above the background), indicating that the bin width was not sufficiently large to fit the measured time-difference variations and drift during this long-term measurement. Rebinning the two excess counts into the central bin (shown as a red point at zero delay), we find a peak-to-background ratio of 2.0:1. Within the accuracy of the accidental coincidence background, due to several photon pairs arriving at the detectors in the interval corresponding to their integration time, these experimental results agree with the theoretical interpretation given in the Supplementary Information.

Possible extension of the quantum-classical discriminator design
The concept of quantum-classical discrimination with selection of only one order by a single slit may be extended to a mask containing multiple apertures. Such a mask will provide a higher throughput for non-classical photon states and may eventually make possible the observation of entangled photon states of order higher than 2. This can be achieved by selecting a set of diffracted beams containing only entangled photon states. We investigate a mask with three slits located at successive odd diffraction orders of the two-photon diffraction pattern of bi-photons at the de Broglie wavelength. The mask is placed in the far field of the echelle grating at the location of the adjustable slit shown in Fig. 1. Respectively, the SuperEllen SPAD array detector is now used to analyze the far-field patterns of the mask. The mask is sketched at the top of Fig. 5. It contains apertures corresponding and aligned to the odd orders of the 405 nm diffraction pattern. To simplify mask alignment, we temporarily remove one band-pass filter to reduce the attenuation of the 405 nm beam. This results in the 405 nm classical light being of slightly higher power (~30 nW) compared with the SPDC photon power (~6 nW). The G(1)(kx, ky) intensity pattern in Fig. 5(a) attests that the slit mask is properly aligned. The presence of a weak contribution from SPDC photon pairs is verified by observing the donut-like structures superimposed on the laser diffraction pattern. As expected, the measured G(2)(k, k′) pattern in Fig. 5(b) reveals only the classical component. Up to the suppressed even diffraction orders, it is similar to the initial pattern for the diffraction of the 405 nm coherent state in Fig. 2(e). Figure 5(c,d) show the G(1)(kx, ky) intensity pattern and the second-order correlation pattern G(2)(k, k′) measured after the mask with a pure bi-photon state produced in the non-collinear SPDC regime. For this we restore the 60 dB band-pass filter and reduce the residue 405 nm beam power down to 30 fW. Like in Fig. 2(g), the intensity pattern in Fig. 5(c) shows several donut-like structures superimposed on each other.
Like in Fig. 2(h), the multiple anti-correlation traces in the G(2) pattern in Fig. 5(d) are also clearly visible but, surprisingly, the separation between the anti-correlation traces is half that in Fig. 2 and in the literature case 3,4 of quantum two-photon diffraction at the de Broglie wavelength 810 nm/2, when no mask is applied. In this measurement, a half-period translation of the mask from the locations of the odd diffraction orders at the de Broglie wavelength 810 nm/2 to the locations of the even orders does not change the G(2) pattern. The fact that the G(2) fringe period is reduced by a factor of two provides a hint that a possible theoretical explanation should be linked to a π-phase shift between adjacent slits of the mask for classical photons at 810 nm, yielding a 2π-phase shift for the bi-photon state, in accordance with the de Broglie wavelength concept. Because the (angular) distance between the slits of the mask is twice the distance between the two-photon diffraction orders of the grating before the slit, the two-photon fringe density in the far field of the mask is twice as large as well. In the Supplementary Information we provide a simplified theoretical description of such a quantum diffraction mask experiment. This mask can be used as an alternative QCD with a multiple-beam output. Its detailed experimental examination is left for another study.
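The fringe-density doubling noted above follows from the phase argument just given, which can be restated compactly (a schematic restatement; the simplified theory is in the Supplementary Information). If the path difference between adjacent mask slits towards a given far-field direction is $\Delta$, a single photon at $\lambda$ acquires the relative phase

$$ \varphi_1 = \frac{2\pi\Delta}{\lambda} , $$

while the bi-photon, interfering at its de Broglie wavelength $\lambda/2$, acquires $\varphi_2 = 2\pi\Delta/(\lambda/2) = 2\varphi_1$. A direction where $\varphi_1 = \pi$ (destructive for classical light at 810 nm) thus has $\varphi_2 = 2\pi$ (fully constructive for the pair), so the two-photon fringes in G(2) recur at half the classical period.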
Conclusion
We demonstrated the discrimination of entangled states from classical photons when the photons have the same classical characteristics, such as optical wavelength, propagation direction and polarization. The discrimination is based on the effect of quantum diffraction of the bi-photon state according to its de Broglie wavelength of λpair = λ/2 instead of the optical wavelength λ of the photons composing the pair. It is realized experimentally with the echelle grating and a slit by selecting an odd diffraction order at the de Broglie wavelength, which is prohibited for classical light. The predominance of bi-photons in the output light state after the discriminator is confirmed in two ways, by detection of their spatial and temporal G(2) correlation patterns. In particular, for the detection of the spatial correlation patterns we used a purposely developed SPAD array detector, which itself is also a step forward towards the practical application of entangled photons, e.g. in imaging with resolution beyond the Rayleigh limit. In the reported validation we used an echelle grating as the diffractive element. This element was selected because of the convenience of its multiple orders, making the selection of even and odd order sequences easy. In prospective practical applications where entangled photon state purification is necessary, it may be better to use diffraction gratings with higher efficiency and fewer orders. On the other hand, the echelle grating offers the possibility to select multiple-beam patterns with purified quantum states. This arrangement can provide a higher throughput, but it may also be useful in many other quantum optics setups, replacing complex optical systems with multiple beam splitters. The reported discrimination effect is based on the quantum diffraction of bi-photons, which are physically present in each two-photon diffraction order at the de Broglie wavelength. We may assume that the same effect may also be used for the selection of entangled photon groups of higher order, using the N-photon diffraction patterns at their corresponding de Broglie wavelength of λN = λ/N.
2023-02-20T14:46:25.836Z
2020-04-27T00:00:00.000
{ "year": 2020, "sha1": "98b251ab272831586928e33ef7f90f07b1ad2bbe", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-63833-8.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "98b251ab272831586928e33ef7f90f07b1ad2bbe", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
211544591
pes2o/s2orc
v3-fos-license
Teaching theology from a distance: Experiences of the Institute of Distance Learning at St Tikhon's Orthodox University in Moscow, Russia

This article considers the basic problem of online education owing to the lack of direct contact between all participants in the study process. The experience of distance theological education in Russia as a whole, and the personal experience of two of the authors, who are lecturers at St Tikhon's Orthodox University in Moscow, are used to understand and describe the methods of addressing the challenge of direct communication. Based on 15 years of experience of two of the authors (Egorov and Malanina) in the field of distance theological education in Russia, as well as survey results on preferences for communication in theological distance education training, the article presents research results that confirm the preference for the way the current study process is organised, based on different communicative activities for various levels of learning. The authors also report on the existing, and actively used, teaching staff training system for theological distance education. The results are presented in the form of a pyramid as a framework for theological distance education in a Russian context.

Introduction and theoretical framework
Distance Learning (DL) projects for training both priests and laymen appeared in Russia from the end of 2003 to the beginning of 2004. Further development showed that while DL related to theological training programmes is restricted (available only to seminary students, which seems reasonable), the programmes for lay people were divided into two groups. One group included orthodox educational projects, while the other included programmes aimed at systematic long-term and short-term DL. The first group includes several well-known web portals, where orthodox literature is available for a variety of purposes - from liturgical books to fiction. In addition, there are calendars with life stories of the saints and readings from the Holy Scriptures, applications that offer daily quotes of wisdom, multimedia materials, as well as self-control tests and an opportunity to ask a question to a priest or a moderator. These projects can provide the required large-scale involvement and accessibility, but they cannot be likened to educational ones, because traditional education - and especially theological education - involves interaction between a teaching instructor and learners. Delamarter (2005) and Heinemann (2006, 2007) suggest that face-to-face residential education is considered the standard of excellence for theological education. Palka (2004) supports this and states that formational learning best occurs in an on-campus context. Hockridge (2013) asserts that concerns about the suitability of Distance Education (DE), and particularly online DE, are twofold: whether face-to-face interaction is a necessary part of formational learning and whether web-based technologies can provide a tool for genuine social interaction.
There is an ongoing debate in theological circles around these points, which is supported by Lowe and Lowe (2010:85), who state that '[p]rofound disagreements exist among theological educators regarding the wisdom of delivering theological education at a distance, apart from the salient attributes of a campus community'. At the beginning of the 20th century, only people who were training to become priests could enrol for studies in theology in Russia. In 1990, the institution was established to train lay people as well as priests, and in 2004, St Tikhon's received university status. St Tikhon's developed a short-term one-semester catechetical programme, 'Foundations of Orthodoxy', and a long-term theological education programme, 'Theology', designed for several years of training, which have been successfully implemented for 14 years and presented completely in a DL format. In addition, there are several short-term programmes for individual theological courses, and the first graduation from the master's programme (MSc) in Theology, the course of study of which lasts 2.5 years, will soon occur.

The implementation of DL on the Internet more than 20 years ago was considered a promising means of replenishing an ever acute need for mass education, by eliminating some of the shortcomings of traditional, full-time on-campus education. Among these shortcomings are the lack of infrastructure, financial constraints, as well as a deficiency of qualified teaching staff capable of satisfying the requirements of all those wishing to qualify with a certain degree of education. In 2016, the Russian Federation launched a project called 'Modern Digital Educational Environment' to provide opportunities for many more Russians to further their education. One of its targets was to train 11 million students through online courses and to develop 4000 such courses (Barinova 2017). However, despite the growth of emerging opportunities offered through online education, there has not been any radical breakthrough yet (Roshhina, Roshchin & Rudakov 2018). None of the above obstacles has been significantly overcome, which still raises the question of their causes. Perhaps they are rooted neither in economic nor in technological aspects, but in the very nature of learning. Theological DE is not an exception in this respect.
An analysis of the development of DL theological curricula (Egorov & Melanina 2014) has led to the conclusion that it makes sense to distinguish between two target audiences, namely, students of theological schools (future clergymen) and laymen who want to extend their theological background knowledge. Naidoo (2012) discusses the concept of formational learning in theology education, and particularly highlights the development of ministerial and spiritual maturity that is expected of church ministers. According to Overend (2008) and Percy (2010), theological education should encompass the training of the whole person, which includes spiritual and character formation, not just the transmission of theological content. Thus, the question being asked is whether formational training can take place in an online DE environment. This article is based on the 'Community of Inquiry' model, as put forward by Garrison, Anderson and Archer (2001), which links three presences to successful online learning: cognitive, social and teaching presences. It is the collaborative aspect of these three presences that results in students being able to achieve deep and meaningful learning. Garrison (2009:352) defines social presence as 'the ability of the participants to identify with the community, communicate purposefully in a trusting environment and develop interpersonal relationships'. Anderson et al. (2001:5) see teaching presence as 'the design, facilitation and direction of social and cognitive processes for the purpose of realizing personally meaningful and educationally worthwhile learning outcomes'. Finally, Garrison et al. (2001:11) state that cognitive presence refers to 'the extent to which learners are able to construct and confirm meaning through sustained reflection and discourse'. Computer-based discussion forums play a central role in the Community of Inquiry framework; thus, the pedagogy behind online discussion forums assumes that students and teachers will work together, not independently, as in traditional DE (Swan & Ice 2010). Garrison (2015:27) states that 'as a result of the proliferation of modern communication technologies, Higher Education is no longer solely purposed to provide access to information to the students'. It needs to go much further and enable the learners to develop skills that empower them to critically assess the information that is presented to them. Thus, the research question being asked is: What methods of communication in DE theology training are best suited to both the subject- and content-specific requirements, as well as the formational training of theology students in Russia? The objective of this article is, therefore, to describe the experiences of the leaders of the courses, the lecturers' evaluation of their training, as well as student views on the methods of communication that best suit them. From these results, a distance teaching and learning model for the students is proposed in the conclusion of this article.

Methodology
The methodology for this article draws on a narrative, ethnographical review of the experiences of two lecturers at St. Tikhon's Orthodox University, Moscow, Russia. St. Tikhon's Orthodox University is a theological university for the laity, affiliated with the Russian Orthodox Church.
Narrative inquiry is a way of understanding and inquiring into experience through 'collaboration between researcher and participants, over time, in a place or series of places, and in social interaction with milieus' (Clandinin & Connelly 2000:20). It is a 'genre of analytic frames whereby researchers interpret stories that are told within the context of research and/or are shared in everyday life' (Allen 2017:1069). The role of the narrative in this article is to provide a background story and description of distance education theological teaching in the Russian context. In addition, this article also includes descriptive results from a quantitative survey conducted among students at the university. Additional results emanate from the programme evaluation of lecturers who have completed a course on distance education teaching. The results are reported as a narrative from the two lecturers, as well as descriptive quantitative frequencies from a survey conducted among students from both a long-term programme and a short-term programme. In addition, feedback from the DL programme for lecturers was analysed to identify expressions (semantic units) in which the respondents described their experiences and provided an assessment of the course. The population for the survey consisted of 131 students who were studying for the one-semester short course on the fundamentals of orthodoxy, and 182 students undertaking the long-term professional training programme on theology. A census survey was conducted, and all students were invited to complete the survey. A census study occurs if the whole population is either small or if there is a possibility for the entire population to be sampled. The sampling frame for this study used a complete list of all the members of the organisation, that is, all students on the course, and it is thus considered a census study. The response rate for the one-semester course was 72% (94 respondents), and for the long-term programme the response rate was 78% (142 respondents). The survey was conducted by means of an online questionnaire using the Learning Management System (LMS) Moodle, and participation was voluntary. A questionnaire using a five-point Likert scale was sent to all participants. The questions related to the types and methods of communication between the students and the lecturers, as well as the types of responses, the time frame for responses, the types of assessment communication, discussion forums and the use of emoticons. It also included a question related to the students' preferred method of teaching and communication, and this aspect is reported in this article. In addition, open-ended questions were asked and analysed. Furthermore, results from an evaluation by teachers at the university, of their perceptions of the online training programme on teaching through DE, are presented. The study involved all the teachers who successfully completed the online DE programme. Each participant wrote an evaluation review directly after the end of the programme. For the period 2013-2018, 108 reviews were received. Feedback on the course was presented in free-form writing, but all respondents were asked to pay special attention to the following aspects, namely, general impressions of the course construction and the experience of training on it, errors in the design of the course, lack or excess of content, shortcomings in assignments and discrepancies in the forms of activity. The feedback was analysed in order to identify expressions (semantic units) in which the respondents described their experience and gave an assessment. Then these expressions were combined according to the principle of similarity, and the most frequently used ones are presented in the results section.
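To make this coding step concrete, a minimal sketch of such a frequency count is given below; the listed units are invented examples, and in the study the extraction of semantic units and their grouping by similarity were performed manually by the researchers.

```python
from collections import Counter

# Hypothetical semantic units after manual extraction and similarity
# grouping; the real units came from the 108 free-form reviews.
coded_units = [
    "availability of materials",
    "discovered distance learning",
    "availability of materials",
    "ease of duplication for the instructor",
    "discovered distance learning",
    "availability of materials",
]

# Count each grouped unit and report the most frequent ones,
# mirroring the analysis step described above.
for unit, count in Counter(coded_units).most_common():
    print(f"{unit}: {count}")
```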
Drawing on all the empirical results of this study, a DE framework for teaching theology in Russia is proposed in the form of a programme pyramid in the conclusion of this article.

Review of literature on teaching theology at a distance
Rovai, Baker and Cox (2008) found in their study that on-campus teaching is important for formation, and they questioned whether online DE can fulfil this role. Nichols (2015), however, carried out empirical research and ascertained that statistically there was little difference in the spiritual formation training between distance and on-campus students. In a further qualitative study, based on in-depth interviews with theological DE students, Nichols (2016) confirmed these results. He concluded that both on-campus and DE students experienced similar transformative learning experiences. Egorov and Melanina (the two lecturers who are presenting this ethnographic evidence) suggest that this interaction should be compulsory and, therefore, deliberately organised by the training institution, systemic, and based on methodologically approved approaches. It is this aspect that has determined the emergence and relevance of distance theological education programmes, which we have attributed to the second group. St. Tikhon's Orthodox University is a pioneer in this field, as in 2004 it offered the first DL programme for lay people. However, as follows from numerous publications in Russia, the educational space of the future belongs to Massive Open Online Courses (MOOCs) (Barinova 2017; Gotskaya & Zhuchkov 2016), where the training organisation is opposed to systematically designed curricula with the compulsory direct participation of a teaching instructor. These MOOCs, according to many authors reflecting on this topic, will go a long way in solving the problem of the accessibility of education - including higher education - for anyone wishing to get it. This is in line with the Russian priority project discussed in the previous paragraph. Some authors predict that the rapid growth of online courses portends the potential obsolescence of traditional educational institutions. The implementation of these courses does not require any buildings, classrooms, deans or departments. Motivated Internet users can freely form their study trajectory, selecting the courses they need, which are designed and made available on the Internet, teaching a wide-ranging audience. The quality of the subjects taught is assessed by 'free voting of users'. At the same time, enthusiasts of open education assume that the trainees can build their own 'study trajectory'. Moreover, it is presented as one of the significant advantages over traditional higher education, namely, that one needs to study only 'necessary' courses without wasting time on 'redundant' courses. A valid argument in favour of this standpoint is the notion that the rates of change in the modern world are so great that no one can be considered an expert or a qualified teaching instructor. All of the above-mentioned facts can be considered plausible. However, we also cannot but agree that not all areas of human activity can be represented in the field of mass education because of its specific nature.
The main and intrinsic characteristic of MOOCs is the minimisation, or even complete absence, of direct individual contact of learners with a teaching instructor. Massive open online courses can be an excellent tool for mastering individual areas of knowledge, developing certain skills, expanding horizons, etc., but they cannot replace traditional education, because education is something of a deeper and wider nature, something that is born only from the interaction of learners and teachers. This is especially the case in religious education, where 'formation' is considered an essential component. Formation can be described as the development of character and spiritual maturity (Hockridge 2013). Nichols (2011) refers to the distinction between akademeia and ecclesia, and purports that online DE might be more suited to the academic community (akademeia) rather than the church community (ecclesia). If in engineering training it is possible to minimise the communicative component with acceptable quality losses by controlling the correctness of the learning outcomes produced by students (final or intermediate), this is not feasible in humanitarian education. Humanitarian and theological knowledge belongs to the category of poorly formalised areas of education. A significant negative aspect is the absence of traditional, direct interaction of a teaching instructor with learners in the presentation and acquisition of the teaching content, and the lack of immediate feedback, which is an intrinsic feature of full-time courses of study. Both these aspects are typical for any classroom activity and, more importantly, for out-of-audience communication. The importance of this factor in humanitarian education is conditioned by the need not only to inform the student of the teaching content and to ensure its acquisition and mastering, but also to solve the problem of the interpretation of this content within the framework of authentic scientific and cultural practices. Thus, compared with natural science subject areas, in DL of humanitarian and theological courses the role of the communication component not only essentially increases but becomes the leading one, and the very purpose of communication acquires a different nature, namely, a teaching instructor does not merely accompany the independent learning of students or control their results. It is the task of the teaching instructor to adapt the study material to the personal characteristics of a particular learner, because incomprehensible material always happens to be only individually incomprehensible. Challenging material should be taught and explained to students according to their personal characteristics, and an even more difficult task is to perceive and understand the material within a particular tradition. Bates (2015) supports this and asserts that the main limitation in teacher-student interaction is the time demands that are made on the teacher. As such, discussion forums are not easily scalable. In addition, in theological education the possibility of group work plays an important role, both from the viewpoint of learning and of solving problems associated with personal development. Therefore, the DL methodology applied in St. Tikhon's Orthodox University significantly differs from that used in MOOCs. The core of the method of teaching used is as follows.
The study material of a course is divided into semantic blocks, which are studied consecutively by a group of students within a specific, predetermined period of time (a course schedule). For each block, a system of study tasks, both reproductive and creative, has been developed. In reproductive tasks, students are required to reproduce the key points of a block of the studied material. Reproduction can be made from memory (using automated testing or polling in real-time mode) or using didactic tools (submitting individual answers to questions on the studied material or summarising it). In the latter case, there is not only quick checking of the submitted answer by the teaching instructor (within 24 h of the submission of answers by students) but also its addition or clarification by the learners, if necessary (answering additional questions of the teaching instructor). Creative tasks are, as a rule, group activities, and they require joint solutions by students to problems within the framework of the studied material. The problem task is formulated by the teaching instructor and is offered for joint discussion in a discussion panel (forum) or chat format (or, if possible, a webinar). Problem tasks are formulated in such a way that, in the course of group work, the learners can exhibit an understanding of the studied material and independently express their opinion about it. The level of its complexity and the competence of the learners determine the participation of the teaching instructor in the problem discussion. The teaching instructor can moderate the discussion to achieve the learning outcomes, facilitate the understanding of the problem by individual participants, set a role model for discussion and opinion sharing within the framework of the studied material and illustrate (if necessary) the uncertainty and ambiguity of the offered solutions. As a rule, the discussion ends with a summing up by the teaching instructor or by one of the learners appointed by the teaching instructor. Certain types of creative tasks can be individual or mixed, for example, writing an essay on a given topic followed by a group discussion. In this case, the teaching instructor acts as an expert in the essay area and a moderator of the discussion. The assumption here is that these discussion forums are best handled in a face-to-face environment.
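As an illustration only, the block-and-task structure just described can be modelled as follows; all names are ours and do not reflect the university's actual learning management system configuration.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    kind: str              # "reproductive" or "creative"
    mode: str              # e.g. "auto-test", "written answer", "forum discussion"
    group_work: bool = False

@dataclass
class SemanticBlock:
    title: str
    schedule_days: int     # place of the block in the predetermined course schedule
    tasks: list = field(default_factory=list)

# One block with the task types described in the text: automated testing,
# individually checked written answers (checked within 24 h), and a
# moderated group discussion of a problem task.
block = SemanticBlock(
    title="Block 1",
    schedule_days=14,
    tasks=[
        Task("reproductive", "auto-test"),
        Task("reproductive", "written answer"),
        Task("creative", "forum discussion", group_work=True),
    ],
)
```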
Results
The results are presented in three sections. Firstly, the narrative voice of the two lecturers concerned is presented, in which they provide the background to distance education theological training in Russia and their reflections on the future use of online training. This has been presented in the background, and the literature review section above contextualises the study. This is followed by the descriptive statistical results from the relevant question in the larger survey, that is, students' preferred method of teaching and communication. Finally, results are drawn from the lecturer course evaluation and experiences of the online DE course that they undertook. The effectiveness of this training methodology is confirmed by the results of studies presented by two of the authors of this article at the eLearning Stakeholders and Researchers Summit 2017 conference held in Moscow, Russia, in the report 'Structure and characteristics of communication in distance learning' (Egorov & Melanina 2017). In the study mentioned above, in one of the questions the respondents were asked to express their preference for the model of course design: either active communication with the teaching instructor and groupmates, or a video lecture model supplemented with tests and a final course essay (similar to MOOCs). This question from the study addresses the objective of this research, which is to obtain students' views on the methods of communication that best suit them. Here we will only discuss the most significant conclusions regarding the relevance of communication in DE theological education. The options available for this particular question, regarding the preferred method of communication in the quantitative survey, were:
1. in favour of the existing model - full teacher involvement (face-to-face)
2. in favour of the model of MOOCs - no teacher involvement (MOOC)
3. a blended model, where the second model is supplemented by the first one
4. undecided.
The descriptive results of the questionnaire are presented in Figure 1. As a result, the first and the third options (face-to-face and blended) were chosen, respectively, by 47% and 23% of the students of the short-term programme, 'Foundations of Orthodoxy'. Of the participants of the long-term programme, 'Theology', 52% and 28%, respectively, chose these options. The second option of a MOOC (online) type of presentation was chosen by 17% and 7% of learners, respectively. The results in Figure 1 indicate that the most popular choice of communication method is face-to-face, for students of both the short course on orthodoxy and the longer course on theology, although it is slightly higher for the long-term students. Approximately half the students prefer this method of communication. Around a quarter of the students are in favour of a blended approach, which is partly online and partly face-to-face. When analysing the MOOC (online) option, it can be seen that very few 'Theology' course students (7%) are in favour of online training. This figure increases with the 'Foundations of Orthodoxy' students, where 17% prefer an online means of teaching and learning. When analysing the responses, it should be considered that more than half of the students taking these courses live in Moscow, St. Petersburg or their environs. This means that if there is a choice of how to organise the study process, the distance format is more suitable for them than the full-time format, with preference being given to the communicative (face-to-face) model. The most interesting are the reasons provided by the respondents who prefer MOOCs (with no teacher involvement) instead of the courses offered in our university. In most cases, these respondents pointed out that video lectures with minimal 'feedback' are less labour-intensive than the DL methodology used in St Tikhon's Orthodox University. Most answers (52% of the long-term 'Theology' course students) expressed a clear approval of the experienced training methodology, which deserves attention, despite what was said above about the method of selecting entrants. The reasons for this choice include the most frequent statement that the system of tasks performed under the guidance of the teaching instructor is the most effective method of learning, as it changes the worldview, positively affects moral views, helps to go beyond the scope of available knowledge and broadens the horizon.
An important result of the analysis of the open-ended questions was confirmation of the educational effect of the deliberate organisation of interaction in a 'learner-learner' pair. Not less than a quarter of the respondents who advocated the DL methodology used in St Tikhon's Orthodox University spoke about the importance of intra-group communication on study issues for personal development. Their responses include the following components, namely, discussion skills, the ability to listen to someone else's opinion, expanding one's own outlook and looking at the problem through the analysis of groupmates' opinions, etc. Obviously, this approach to the organisation of study interaction requires from the teaching instructor somewhat different skills and abilities than when working in a classroom or when designing MOOCs. Therefore, an educational institution developing DL courses in the field of humanitarian knowledge should solve one of the central tasks of selecting and training teaching instructors, which includes the following aspects:
1. Provide opportunities to develop new types of study (learning) activities through the new types of activities offered for them to engage in.
2. Allow them to visualise the methodology of the DL process, the technical capabilities of the system and their methodological applicability, without intentional emphasis on this teacher training aspect.
3. Give a teaching instructor an opportunity to assess the labour intensity, advantages and disadvantages of new activities, communication problems, etc.
A real example of a course, in the development and implementation of which an attempt has been made to solve the above-mentioned tasks, is a refresher course for university teachers. This course, called 'Theory and Methodology of Distance Learning', was developed and implemented in St Tikhon's Orthodox University along with the introduction of DL. The classes in this course are arranged in such a way as to enable teachers to participate as learners in all possible forms of study activity, both individual and group. In addition, they assess the advantages and disadvantages both from the perspective of learners and of teaching instructors (relying on their available experience of teaching), after which each participant is expected to design his or her own course and offer several classes from it in the system, acting as a teaching instructor. This study involved all those who successfully completed the online DL programme. Reviews were written by each participant directly after the end of the programme. For the period from 2013 to 2018, 108 reviews were received. Feedback on the course was offered in free form, but all respondents were asked to pay special attention to such aspects as general impressions of the course construction and the experience of training on it; errors in the design of the course; lack or excess of content; shortcomings in assignments; and discrepancies in the forms of activity. The feedback was analysed to identify expressions (semantic units) in which the respondents described their experience and gave an assessment. Then these expressions were combined according to the principle of similarity and the most frequently used ones were singled out. Below is a selection of quotes from the feedback of teachers who completed the course, on learning outcomes: 'I discovered distance learning for myself' - a change in the whole concept of the opportunities offered by distance educational technologies in obtaining a fully-fledged education.
(Semenov Sergey, Orenburg Seminary, Lecturer) '[A]n incomparable advantage of distance learning as compared with traditional correspondence learning (the availability of materials for students, ease of their duplication for the teaching instructor - once published and then available for everyone, etc.).' As the long-term practice of the implementation of this course shows, it allows not only the above-mentioned tasks to be solved but also reveals the pedagogical motivation of the learners, and their ability and readiness to work in a setting of distance interaction. Designing and implementing their own training course in a DL format involving real learners allows the university administration to assess in advance the level of readiness of a future teaching instructor for communication with learners, the flexibility of their methodical thinking and their mastery of the study process 'technology'.

Conclusion
Summarising all that has been discussed above, the authors arrived at the following conclusions:
1. There is a constant need for theological education. There is an obvious shortage of resources in Russia for obtaining it, including competent teaching staff. For most people, DL is the only possible way to satisfy this need. And this is true not only for residents of remote or sparsely populated areas but also for residents of metropolitan areas, who - if given a choice - prefer to study in a distance format.
2. Today, the desire to completely replace formalised, systematic education under the guidance of a teacher with public open courses is not proving to be efficient in theological education, because it is in this part of human experience that the individual transfer of knowledge from one person to another is especially important. In the study of theology, the main emphasis is placed not only on obtaining information but also on the development of certain skills that need to be developed through formational learning. The essence of the study of theology is the operation of meaning, information and skills contained in them and opened by them. The fact is that the transmitted information and skills are in many ways symbols, and therefore evidence of something other than what they portray. Therefore, MOOCs are able to meet the human need for theological education only in part and should be supplemented with the possibility of studying under the guidance of experienced teaching instructors and in a team of like-minded people. An alternative is to follow the suggestion of Nichols (2016) that formational training can take place in a church community environment outside the formal university course.
3. Therefore, to achieve this goal, theological DE should include schemes (methods) that ensure systematic two-way communication between the teaching instructor and students, as well as among the students themselves, in the field of the covered theological subject. This is confirmed by the social and teaching presences depicted in the Community of Inquiry model presented earlier. Study activities should be constructed by the programme designers and carried out under the guidance of the teaching instructor in active interaction, initiated by both the teaching instructor and the learners. It is possible to carry out these interactions in both a face-to-face and an online environment, as confirmed by Nichols (2016).
4. The central problem of the development and implementation of theological DE is the training of teaching instructors.
A teacher of theology working in a DL format must comply with special requirements to establish personal contact with the students in conditions of distantly mediated interaction. It is important that this contact be carried out mainly, and first, in the field of the taught subject, within the limits of the study tasks.
5. In this case, a system of theological DE could take the form of a pyramid based on MOOCs available to a large number of learners (see Figure 2). The middle part of this pyramid should be composed of less massive, but subject-oriented, courses provided by system designers with minimal teacher involvement. On top of the middle part, there should be more extensive, multidisciplinary programmes that are mastered through the participation, and under the guidance, of experienced teaching instructors. These programmes cannot be as massive as those that lie below them in the described pyramid but, as the authors' experience shows, when supported by a well-established methodical and administrative system, they are also able to provide relatively greater coverage for those wishing to obtain a certain level of education, compared with full-time or part-time training. Finally, programmes capable of preparing future teaching instructors of theology, including DL programmes of different levels, and those able to design such programmes, form the upper part of the pyramid. The said pyramid is presented in Figure 2. This pyramid model (Figure 2) is based on the results that were obtained from research carried out at St Tikhon's Orthodox University in Moscow, Russia. It combines results from students' questionnaires on preferred communication types, programme evaluation of teachers who undertook an online course on DE, as well as the narratives of two lecturers. Online training has been identified as the way forward for teaching in Russia because of the large number of students that can be educated and its cost-effectiveness. However, in theological training, the preferred method is still a face-to-face or a blended approach. This pyramid takes all these factors into account and proposes the MOOC format for aspects of training that are content-based, and then moves up the pyramid with more teacher involvement, particularly for preparing teachers and instructors in DE theology teaching. It is recommended that the model be tested in other countries, as well as in different contexts of DE theology training.
2019-11-28T12:36:17.625Z
2019-11-21T00:00:00.000
{ "year": 2019, "sha1": "aabb895a5d446a7563fe2be5f7c8717fa6c64d0b", "oa_license": "CCBY", "oa_url": "https://hts.org.za/index.php/hts/article/download/5343/14016", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e3e32c0e1a45570f6f9d80bc553a76ba656b8c2b", "s2fieldsofstudy": [ "Education", "Philosophy" ], "extfieldsofstudy": [ "Sociology" ] }
54087310
pes2o/s2orc
v3-fos-license
Are dogs that eat quickly more likely to develop a gastric dilatation (+/- volvulus) than dogs that eat slowly?

Clinical bottom line
The available evidence is mixed and of variable quality. Most evidence indicates that rate of eating has no effect on the risk of gastric dilatation-volvulus (GDV). Where significant effects occur, fast eating was implicated as a risk factor. No studies found that slow eating was significantly associated with an increased risk of GDV. Slowing the rate at which a dog consumes a meal will not increase the risk of GDV, but it may possibly reduce the risk. Based on the limitations and unreliability of the current relevant literature, there is not enough evidence to make a conclusion either way.

The evidence
The literature searches uncovered six papers (Glickman et al., 1997; Elwood, 1998; Theyse et al., 1998; Glickman et al., 2000; Raghavan et al., 2004; Pipan et al., 2012) that directly addressed the PICO question. Of these, four studies (Elwood, 1998; Theyse et al., 1998; Raghavan et al., 2004; Pipan et al., 2012) found no significant effect of speed of eating on the risk of having a GDV episode. One paper (Glickman et al., 1997) found that dogs that ate quickly were significantly more likely to present at a clinic with a GDV episode. The final paper (Glickman et al., 2000) found that large breed dogs (but not giant breeds) were significantly more likely to develop a GDV if they ate fast, but the 95% confidence intervals associated with the relative risk value that the authors report suggest that the true risk may not differ from that of slow eaters.

Summary of the evidence
1. Pipan (2012)
Population: Privately owned dogs (any breed or mix, both sexes, neutered and entire) across a wide geographic area. The survey was potentially available to English-speaking dog owners with access to the internet in any country worldwide.
Intervention details: Online survey-based study, with ad hoc convenience sampling of dog owners. The questionnaire was divided into three sections:
1. Demographic information (year of birth, breed, sex, neuter status, and purpose of the dog; country and postcode of respondent), and whether the dog had ever had a GDV that required surgical intervention. This latter question was used to divide dogs into the control group and the GDV group. However, the authors then also included within the GDV group dogs that did not have surgery, or died / were euthanised without surgery but were considered to have had a GDV (whether confirmed or presumed).
2. GDV group: a series of 44 questions divided into 4 categories: (i) dog-specific factors; (ii) management factors; (iii) environmental factors; (iv) personality factors.
3. Control group: The same questions were asked of the control group dog owners as were asked of the GDV group, with the exception that they were not asked any questions about the GDV episode (as the dogs had not experienced a GDV).
They were asked one additional question (had the dog had a prophylactic gastropexy?). These changes reduced the total number of questions asked to 32.
Study design: Cross-sectional study
Outcome studied: The outcome measure was a GDV episode in the dog's history. The study looked for factors that were associated with an increased incidence of GDV in the population studied. Of relevance to this PICO, the authors asked owners to rate, on a scale of 1 to 5, the speed with which their dog consumed its food ration.
Main findings (relevant to PICO question): The findings in relation to speed of eating are not reported within the results section. The authors note in the discussion that speed of eating was not significantly associated with risk of GDV.
Limitations: The primary limitation is the failure to report their findings (along with numerical data) within the results section, as this precludes the reader making an independent assessment of their conclusion. The measurement of speed of eating on a scale of 1 to 5 will be sensitive to owner subjectivity in assessment. The authors do not provide any information on how owners were guided (if at all) to select the number that best represented their dog. Further, the incidents of GDV were retrospective, so owners may have already taken measures to reduce their dog's speed of eating post-surgery, on the basis of veterinary recommendation or lay research. The authors do not outline whether owners were asked to complete the form on the basis of the dog's feeding behaviour at the time of (or preceding) the GDV, or at the present time, or whether no time frame was specified. There was no attempt to match control dogs and GDV dogs across other dimensions that may have been relevant (e.g. breed, size, and/or age). The population of dogs studied is not constrained to types of dogs at high risk of GDV (large and giant breed dogs). Therefore, if any risk factors identified co-vary with the size of the dog, this would represent a confounding variable in interpreting the data that limits any conclusions that may be drawn.

2. Glickman (2000)
Population: Dogs (male, female, neutered and entire) from eleven different large and giant dog breeds (Akita, Bloodhound, Collie, Great Dane, Irish Setter, Irish Wolfhound, Newfoundland, Rottweiler, Saint Bernard, Standard Poodle, and Weimaraner) that were located within the USA. Dogs were required to be at least 6 months old and not to have a medical history that included an episode of GDV before the study commenced.
Sample size: 1637 (large breeds: n = 894; giant breeds: n = 743):
- Dogs that developed a GDV during the course of the study: 98
- Dogs that did not develop a GDV during the course of the study: 1539
NB: This study was derived from a larger prospective cohort study carried out by the authors. 1991 dogs were initially enrolled on that study; however, for inclusion in the current study (which used data from the larger study), certain criteria needed to be met and this reduced the sample size. Further details are outlined in the intervention section.
Intervention details: The study began in June 1994 and ended in February 1999. Therefore, the maximum possible period that a dog could be studied for was 58 months. Dogs were recruited through breed clubs and dog shows. At the start of the study, owners were asked about the presence of GDV in the medical history of the dog or any of its first-degree relatives. The dog was physically assessed for body condition and temperament, and conformational measurements were taken.
Within thirty days of recruitment, owners were provided with a detailed questionnaire to complete that provided data on the dog's GDV history (if positive, the dog was excluded), breeding, medical history, reproductive status, personality and temperament, and dietary factors. Owners were instructed to notify the researchers if any of the following outcomes occurred:
- The dog developed a GDV (if so, the researcher confirmed this with the veterinarian who treated the dog)
- The dog died of another cause
- The ownership of the dog was transferred to another person
Owners were contacted in 1997, 1998, and 1999 to find out if, over the duration of the study, their dog had developed a GDV and, if so, whether the dog died or survived. This was the methodology for the original prospective cohort study by the authors (referred to above). Dogs from that study were included only if:
- The initial questionnaire had been fully completed
- There was at least one follow-up set of data for that dog
86.7% of the owners in this study fulfilled both criteria for inclusion (n = 1660). 2.4% of these owners were excluded because their dog was less than 6 months old at the point of completing the initial questionnaire, leaving 1637 (82.2% of the original sample) available for analysis in this study. Of these dogs, the median duration during which the dogs were followed up was 2.4 years (maximum: 4.8 years; minimum: not stated). The reasons for loss to follow-up (other than death) are not stated. 182 dogs died during this period: 29 died due to the GDV episode, 24 died for unknown reasons, and 55 are reported to have died from other medical problems; 74 dogs are not accounted for in the figures reported. The authors converted the raw data into the number of GDV cases per 1000 dog-years in order to present the data as incidence of GDV (± confidence interval) among the large and giant breed dog population.
Study design: Prospective cohort study
Outcome studied: The outcome measure was whether the dog developed an episode of GDV during the course of the study and whether it survived this episode. The study then looked for non-dietary-related factors that were associated with an increased risk of developing GDV in the population studied. Of relevance to this PICO, the authors asked owners to rate, on a scale of 1 (slow) to 10 (fast), the speed with which their dog consumed its food ration. The authors did not direct the owners further as to what constituted e.g. a rating of '3'; instead, the owner was left to use their judgement and experience. In analysing and presenting the data, the authors merged speed ratings to form three categories:
a. Slow: speed rating of 1-3
b. Average: speed rating of 4-6
c. Fast: speed rating of 7-10
and then further split the data into:
a. Large breed dogs
b. Giant breed dogs
Main findings (relevant to PICO question): Large breeds: large breed dogs that ate quickly were 2.36 times more likely to develop a GDV during the study than dogs that ate slowly.
Limitations: The authors use the proportional hazards model to calculate the risk of having a GDV as a function of speed of eating score, and include an interaction between breed size (large versus giant). However, the population attributable relative risk of GDV that they report for fast eating in large breed dogs (2.36 times more likely) is drawn from their univariate analysis.
The 95% confidence interval for this is 0.91-6.12, which is both a wide range (giving less confidence in the 2.36 value reported) and overlaps an odds ratio of 1.0 (assigned to the slow-eating group to which the other groups are compared), indicating that, based on the univariate analysis, the true relative risk may not differ between slow and fast eaters (this confidence-interval reasoning is illustrated in the sketch at the end of this summary). The authors collected data on eating speed at the start of the study (within 30 days of recruiting dogs). No further attempts were made to collect data on this at regular intervals. Thus, unless speed of eating is an unchanging behaviour of the individual dog that is not influenced by other factors (e.g. diet change, age, etc.), this reduces the ability of the study to detect real effects or meaningfully explain the effects observed. The measurement of speed of eating on a scale of 1 to 10 will be sensitive to owner subjectivity in assessment. The total number of dogs lost to follow-up is not reported. The authors exclude dogs from the original study that were lost to follow-up before at least one follow-up questionnaire was completed. However, this is not the same as saying the remaining dogs were not lost to follow-up: for example, if the study was still in operation, why were further questionnaires not completed by these owners? The authors do not report how many dogs recruited to the study remained with the study until it ended. Instead they report only the median duration of follow-up; this cannot be used to assess the number of dogs that remained with the study from recruitment to study end, as the dogs were signed up to the study at different time points. If the dogs that developed GDV and the dogs that did not develop a GDV differentially dropped out prematurely, this could introduce a 'loss to follow-up bias'. We know, by the nature of the recruitment process for the subset of dogs included in this study, that 13.3% of dogs had already been excluded due to lack of follow-up data (no follow-up questionnaires completed). Therefore, it seems likely that total losses to follow-up before the study ended would be higher (and possibly considerably so), but the authors fail to give us the information needed to assess this. There is no attempt by the authors to assess whether participant drop-out before the study ended was random or whether particular risk factors or characteristics were associated with an increased risk of drop-out. Furthermore, there are a lot of dogs lost to death that remain unaccounted for in the authors' reporting.

3. Theyse (1998)
Population: Great Danes. Owners of both groups of Great Danes were asked to complete a questionnaire about their feeding and exercise regime. Demographic information was also recorded (age, sex, neuter status, and, for GDV dogs only, type of food eaten before the GDV episode).
Study design: A cross-sectional study (based on the RCVS Knowledge's Knowledge Summary guide); the authors describe it as a case-control study
Outcome studied: Of relevance to this PICO, the authors assessed speed of eating by asking owners whether their dog took more than, or less than, five minutes to consume its feed ration.
Main findings (relevant to PICO question): A significant association between food intake time and development of a GDV was not observed. No further information is provided.
Limitations: The use of a binary measure (less than, or more than, five minutes to consume a feed ration) is a crude assessment tool:
- Owners were not specifically asked to measure the length of time taken, so it probably represents a variable and subjective assessment.
- The authors also ask owners whether they feed their dog once, twice, or more often per day. Dogs that consume several meals will have smaller portions per feeding session and so be more likely to consume a ration within five minutes. Therefore, there may be a partial confound in the findings between size of portion and time taken to consume the ration. This weakens its use as a measure to assess speed of eating.
- There is no evidence that the authors tried to control for this statistically (e.g. by only analysing the dogs that were fed once daily).
The authors provide no numerical data to support their assertion that there was no significant effect of food intake time. The authors retrospectively searched clinic records for GDV cases in Great Danes between 1981 and 1994, and owners of affected dogs were contacted to complete a questionnaire that retrospectively assessed feeding and exercise regime.
- Thus, owners were often being asked to recall information about their dog's exercise and diet regime many years after the GDV episode and/or the likely death of their dog. Factual recall is likely to be poor under these circumstances. In contrast, control group owners were probably being asked about an existing, current dog that they owned (not enough information is provided to say this for certain).
- Alternatively, dietary and exercise regime modifications may have been implemented after the acute GDV episode (in the dogs that survived) and these reported by the owner as their regime. This would prevent accurate measurement of risk factors associated with GDV development. There is not enough detail provided by the authors to allow this possibility to be evaluated.

4. Glickman (1997)
Population: Owned dogs within the USA.
Intervention details: Several veterinary practices were contacted to complete a clinical data sheet for dogs that were presented at the clinic, diagnosed with GDV, and whose owners were willing to be contacted by researchers. Vets were asked to also identify a similar dog (matched for age and breed if pure-bred, or age and weight if cross-bred). Researchers provided their own case-control dog through the university veterinary hospital if vets were unable to. All owners (GDV, and case-matched control) were interviewed by phone. Data on the following areas were collected:
- The owner of the animal
- Environmental factors
- Clinical history
- Physical activities
- Dietary factors
- Personality and temperament
Two types of questions were asked:
1. Those designed to evaluate the dog in the 8 hours preceding the GDV episode (GDV dogs) or the telephone interview (case-control dogs)
2. Those designed to evaluate the dog's behaviour, diet, etc. more generally over the preceding year.
Of particular importance to this PICO was rate of eating. This is mentioned in the abstract and results section, but the authors fail to mention in the methods section either rate of eating per se or how this was assessed by the owners or quantified by the researchers. In the results section, the authors refer to slow, moderate, and fast groupings in relation to rate of eating, but how dogs were allocated to these groupings remains unclear.
Study design: A cross-sectional study (based on the RCVS Knowledge's Knowledge Summary guide); the authors describe it as a case-control study
Outcome studied: Of relevance to this PICO, owners were asked about rate of eating. However, the authors fail to provide any information about whether this was objectively quantified or represented a subjective impression of the dog's feeding behaviour.
Main findings (relevant to PICO question): Dogs with a moderately fast (P = 0.05) or fast (P = 0.005) rate of eating were significantly more likely to have presented at the clinic with a GDV than dogs that ate slowly. Compared with slow eaters, dogs that ate moderately fast were 2.59 (95% CI: 1.01-6.79) times more likely to have developed a GDV. Fast eaters were 4.72 (95% CI: 1.57-14.24) times more likely to have developed a GDV than slow eaters.
Limitations: Failure to report any information about how speed of eating was assessed represents a failing in this study, as it is difficult to critique the approach used or draw any conclusions as to the validity or otherwise of the method. It is not clear how many dogs (total; GDV; case-controlled pairs) were represented within each of the groups (slow, moderate, and fast). It is not clear whether the GDV group included dogs that were deceased as a consequence of the first GDV episode. It is possible that referring veterinarians would not approach owners of dogs that died or, alternatively, that owners whose dogs died were more or less willing to be interviewed. This may have introduced bias into the data set if survival rate from a GDV is associated with speed of eating.

5. Elwood (1998)
Population: Irish Setter dogs (both sexes, neutered and entire) owned by members of UK Irish Setter Breed Clubs
Sample size: 669 dogs:
- 75 dogs that had had an episode of gastric dilatation and/or volvulus
- 594 control dogs
Intervention details: A questionnaire was sent to members of UK Irish Setter breed clubs, and owners were requested to complete one form per Irish Setter that they had owned in the last ten years. Demographic information included age, sex, neuter status, and whether the dog had ever had an episode of bloat/GDV. Owners were requested to answer by providing data that applied at the time of the first GDV episode (GDV dogs) or current data (control dogs). A range of questions were asked about potential risk factors. These included a range of dietary, environmental, temperament-related, and exercise-related factors.
Study design: Cross-sectional study
Outcome studied: Of specific relevance to this PICO, the owners were asked to rate their dog's speed of eating from 1 to 10 (1 = very slow, 10 = very fast). In the statistical analysis of this, the authors gender- and age-matched control dogs to those of the GDV group. There is insufficient information as to whether this was objectively quantified (i.e. whether the authors provided a descriptor for how fast each dog should eat in order to be awarded a given score) or whether it represented a subjective impression of the dog's feeding behaviour.
Main findings (relevant to PICO question): Speed of eating was not identified as a risk factor for GDV.
Limitations: The questionnaire asked owners to complete a form for every Irish Setter dog that they had owned in the previous ten years. This poses a number of related issues for the data:
- The dogs may not still be alive.
Thus, it is not clear how the owners of the dogs not affected (the control dogs) could complete the questionnaire as per the instructions, as the dogs may have been dead at the time of completion. Do the owners then complete the form based on the management, exercise, feeding, etc. routines of the dog shortly before it died, or when it was younger/fitter/healthier? If the owners all elect for the former (as the closest point to 'current'), this could introduce significant biases into the data set.
- Up to ten years is a long time to expect owners to accurately reflect back and recall their dog's feeding, exercise, housing regime, and so on. As the GDV group owners were asked to recall this information from the time the dog had its first episode of GDV, the length of time the owners were required to reflect back could be even longer.
There is not enough information supplied regarding the speed of eating score to allow further criticism of it.

6. Raghavan (2004)
Intervention details: This study used dogs drawn from a larger study. The methodology for that study is detailed above (Glickman et al., 2000). At the end of that prospective study, there was sufficient information on diet and 'vital status' (not defined by the authors; presumed to be GDV development and other demographic information matched for in the current study) for 1634 dogs to be potentially included in this study. Of these, all dogs that developed a GDV (n = 106) were included. A nested case-control study design was used, so dogs that developed a GDV were placed in one of six groups (corresponding to the year, 1994-2000, in which they experienced the episode of GDV). The dogs that made up the control group (n = 212) were placed into one of six groups according to the year they joined the study (i.e. the year they completed the detailed questionnaire about diet, etc.). From each of these year groups, for every GDV case that occurred in that year group, two dogs were randomly selected to act as controls. This was done to ensure that the diet-related information (including the estimation of how fast the dogs ate) was collected at a similar time for both GDV dogs and control dogs.
Study design: Case-control study
Outcome studied: The outcome measure was whether the dog developed an episode of GDV during the course of the study and whether it survived this episode. The study then looked for breed-related factors that were associated with an increased risk of developing GDV in the population studied. Of relevance to this PICO, the authors asked owners to rate, on a scale of 1 (slow) to 10 (fast), the speed with which their dog consumed its food ration. The authors did not direct the owners further as to what constituted e.g. a rating of '3'; instead, the owner was left to use their judgement and experience. To analyse the data, dogs from each group were split into three groups: slow eaters (score: 1-3), moderate-speed eaters (4-6), and fast eaters (7-10), and odds ratios were calculated based on difference from moderate eating. Thus, both slow and fast rates of eating were evaluated as a risk factor for GDV.
Main findings (relevant to PICO question): There was no significant effect of how quickly (or slowly) a dog ate on the risk of developing a GDV.
Limitations: The measurement of speed of eating on a scale of 1 to 10 will be sensitive to owner subjectivity in assessment. The authors collected data on eating speed at the start of the study (within 30 days of recruiting dogs). No further attempts were made to collect data on this at regular intervals.
Thus, unless speed of eating is an unchanging behaviour of the individual dog that is not influenced by other factors (e.g. diet change, age, etc.), this reduces the ability of the study to detect real effects or meaningfully explain the effects observed.

Appraisal, application and reflection
This Knowledge Summary aimed to identify whether eating quickly increased the risk of GDV in dogs. It was concerned with being able to advise clients, who wish to use a device to slow down their dog's rate of eating in order to reduce the GDV risk, whether the use of these devices was warranted. Four of the six studies (Elwood, 1998; Theyse et al., 1998; Raghavan et al., 2004; Pipan et al., 2012) found no significant effect of speed of eating on the risk of having a GDV episode. One paper (Glickman et al., 1997) found that dogs that ate quickly were significantly more likely to present at a clinic with a GDV episode. The final paper (Glickman et al., 2000) found that large breed dogs (but not giant breeds) were significantly more likely to develop a GDV if they ate fast, but the 95% confidence interval associated with the relative risk value that the authors report suggests that the true risk may not differ from that of slow eaters.
The approaches used to assess speed of eating varied in both type and quality. The weakest of these studies in relation to the PICO question was the study by Theyse et al. (1998). The questionnaire asked owners to identify whether their dog consumed its meal in less than five minutes or more than five minutes. This apparently arbitrary cut-off point may have been prone to ceiling effects (unpublished data by the author of this Knowledge Summary suggests most dogs will consume their ration within five minutes). Furthermore, it seems likely (based on the study methodology) that the authors were asking the owners to estimate 'time taken to consume meal' on the basis of recall of a dog that may have had a GDV episode 10 years or more previously. Finally, the authors fail to provide any numerical data to support their finding, so further examination of the results is impossible.
Where the studies provided information on the methodology used to assess speed of eating, most authors (Elwood, 1998; Glickman et al., 2000; Raghavan et al., 2004) asked owners to rate the dog's speed of eating on a numerical scale from 1 to 10, while Pipan et al. (2012) used a scale of 1 to 5. None of the authors report providing descriptors to accompany each rating score, but Glickman et al. (2000) report leaving it to the owner's own judgement and experience (as Raghavan et al. (2004) used a sub-section of this data, this point will also apply to that study; they are not truly independent studies). This may have reduced the ability to find a true effect, as owner judgement may be subjective and partially dependent upon other dogs owned and utilised as a comparator. It is not clear how, in the absence of a definition/descriptor to accompany each rating, the 1-10 scale was any more useful than a 1-5 scale.
The Pipan et al. (2012) authors fail to report the speed of eating findings within the results section. The authors then note in the discussion that there was no significant effect. However, this failure to report their finding adequately reduces the clinical and research value of this study. A failure to case-match against potentially relevant dimensions (e.g. breed, size) may also have reduced the ability of this study to identify significant effects in at-risk breeds, as there may be many low-risk breeds or sizes of dog that also eat fast.
Conversely, this may also have reduced the risk that other causal factors, ones that correlate with speed of eating in high-risk breeds and do increase the risk of GDV, might wrongly lead to speed of eating being implicated as a risk factor.
The Elwood (1998) study into risk factors for GDV specifically focused on Irish Setters and found no significant effect. However, the methodology employed in this study limited its ability to detect meaningful differences. Control dog owners were asked to provide current data for speed of eating; GDV dog owners were asked to provide speed of eating data that pertained to when the dog had its first episode of GDV. Thus, there is likely to be a difference in how long ago owners were being asked to reflect back and remember accurately their dog's speed of eating. This may be reflected in the data: median values did not differ between the two groups, but the variation around this median was much wider for control dogs, and much less (clustered relatively tightly round the median) for GDV dogs.
The Glickman et al. (2000) study focused on 11 large and giant breed dogs known to be at high risk of GDV. This study reported that eating fast significantly increased risk in large, but not giant, breeds. However, the 95% confidence interval for the relative risk that they report for eating fast overlaps the odds ratio of 1.0 for slow eaters, indicating that the relative risk may not differ between the two groups. The giant breeds appeared to show the converse relationship when plotted graphically, but this was not statistically significant. It cannot be discounted that the study was underpowered to detect this effect statistically, as the confidence interval associated with each speed of eating parameter (both large and giant dogs) was wide; however, given the number of giant breed dogs included (n = 738), this seems unlikely for any biologically important effect. Another issue surrounded dogs lost to follow-up, as the authors failed to report this figure; from the information they do report, it seems likely that the loss to follow-up was high enough to severely risk invalidating the findings if participant losses were not random. The authors do not evaluate whether losses were random or systematically related to one or more of the participant characteristics or potential risk/protective factors for a GDV episode. The other main issue with this study was that it asked owners to rate their dog's speed of eating at the start of the study and then followed the dogs' outcomes for up to 58 months. However, there is no evidence that speed of eating is a fixed behaviour trait that is unchanging over time.
Finally, Glickman et al. (1997) used dogs that presented at participating veterinary clinics with a GDV and case-matched them with dogs of a similar age and breed (or size) that did not have an episode of GDV. This study found that, compared with slow eaters, eating at a moderate or fast speed both significantly increased the risk of GDV occurrence. However, the authors fail to mention in the methods section anything about collecting data on speed of eating; thus it is impossible to evaluate their methodology further in relation to this specific issue. This was a definite study weakness. Furthermore, the case-matched dogs were, where needed, drawn from the authors' own university veterinary hospital. This was a source of bias in one other dimension (rural living), but is potentially a source of bias in other areas.
It is not clear how rural living might affect speed of eating; however, it cannot be excluded as a potential risk for bias.
In conclusion, the evidence that eating fast is associated with an increased risk of GDV is mixed and inconclusive. The current studies that address this question are of variable quality and sometimes fail to report sufficient detail about either their methodology or results to facilitate adequate interpretation of the findings. However, it is worth noting that none of the studies found that eating slowly significantly increased the risk of GDV; where a significant effect was found, the increased risk of GDV was always associated with a faster rate of eating. Thus, if owners wish to slow down the rate at which their dog consumes its meal, the veterinary practitioner may advise that there is no evidence that this will increase the risk of GDV (though it may have no effect at all anyway).
Search terms: (dogs OR dog OR canine OR bitch) AND ("gastric dilatation" OR "gastric dilation" OR "gastric dilatation volvulus" OR GDV OR "gastric torsion" OR "stomach volvulus") AND (feed* OR diet* OR food*)
Dates searches performed: 28th September 2016
Exclusion / Inclusion Criteria
Exclusion: Pre-defined exclusion criteria: non-English language, popular press articles
Inclusion: Any comparative (control group utilised) study in which the effect of rate of feed intake on development of a gastric dilatation (+/- volvulus) was investigated.
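Several of the appraised papers hinge on the same piece of quantitative reasoning: a relative risk or odds ratio is only informative alongside its confidence interval, and an interval that spans 1.0 (as with the 2.36 value, 95% CI 0.91-6.12, reported by Glickman et al., 2000) is compatible with no true difference between fast and slow eaters. The minimal Python sketch below illustrates that logic, together with the dog-years incidence conversion used by Glickman et al. (2000); all counts in the example are hypothetical and are not drawn from any of the studies.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table and its Woolf (log-based) 95% CI.

    a: exposed cases (e.g. fast eaters with GDV)
    b: exposed non-cases (fast eaters without GDV)
    c: unexposed cases (slow eaters with GDV)
    d: unexposed non-cases (slow eaters without GDV)
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

def incidence_per_1000_dog_years(cases, total_dog_years):
    """Incidence rate expressed per 1000 dog-years at risk."""
    return 1000 * cases / total_dog_years

# Hypothetical counts, for illustration only (not taken from any of the studies):
or_, (lo, hi) = odds_ratio_ci(a=18, b=82, c=9, d=91)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# If the CI spans 1.0, the data are compatible with no difference in risk
# between fast and slow eaters -- the reasoning applied above to the 2.36
# value (95% CI 0.91-6.12) reported by Glickman et al. (2000).
print("CI includes 1.0:", lo <= 1.0 <= hi)

# e.g. 98 GDV cases over a hypothetical 3900 dog-years of follow-up:
print(f"{incidence_per_1000_dog_years(98, 3900):.1f} cases per 1000 dog-years")

With these example counts, the odds ratio is about 2.2 but the interval runs from roughly 0.94 to 5.2, so, exactly as in the large-breed analysis discussed above, an apparently elevated point estimate does not establish a real difference in risk.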
2018-11-30T20:32:25.976Z
2016-12-13T00:00:00.000
{ "year": 2016, "sha1": "c5013e0a7095da5af1d682b16afd346819417cbc", "oa_license": "CCBY", "oa_url": "https://www.veterinaryevidence.org/index.php/ve/article/download/53/115", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e35bd43e8cd5c64324fe80aa104e9281a91dbb55", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
225997864
pes2o/s2orc
v3-fos-license
A Method for Creating a Genealogical Tree of Khom Script Manuscripts: A Case Study from the Mahāvagga of the Saṃyuttanikāya

Background
Khom script palm-leaf manuscripts are found in central and southern Thailand. Conventionally, a bundle of palm-leaf manuscript was separated into several small fascicles (phūk), and a scribe would copy text on both sides of a palm leaf, i.e. front and back. A page number, or Anga number, was inscribed on the back side. The manuscripts can be divided into two categories by their source or creation circumstances and repositories: royal manuscripts, which relate to a king, and non-royal manuscripts, which do not relate to a king. Most royal palm-leaf manuscripts were made from the best quality palm leaf and inscribed by skillful experts in Khom script. Normally, we can know the year of creation by reading a manuscript's colophon 1). However, more than 80% of Khom manuscripts of the Pāli canon in Thailand have no colophon, which creates difficulty in finding information about their creation. It is therefore challenging to determine the age of a manuscript. Under the circumstances, information about the age and creation of a manuscript can be determined by studying its physical appearance, such as the decorations on the first or last page, a king's emblem, the edition it belongs to, and the style of handwriting, which differs by period and royal reign. To create a genealogical tree of Khom script manuscripts, a scholar is required to appraise each manuscript from multiple aspects and conduct a comparative study of the various sets of information.

Physical appearance
This paper makes use of eight copies of the Mahāvagga of the Saṃyuttanikāya of Khom script manuscripts from the National Library, Bangkok, Thailand. The information regarding the physical appearance of each manuscript is as follows: most manuscripts are in good condition with complete text. They can be divided into two groups by the number of small fascicles.
The manuscripts N1, N2, N3, N4, N7, and N8 have 17 fascicles (around 800-900 pages), while N5 and N6 have only 14 fascicles (around 700 pages). Only three manuscripts have a colophon that clearly shows information about the manuscript's creation. However, detailed study is needed to determine the age of the rest of the manuscripts.

Information from a colophon
The colophons of the manuscripts N5, N6, and N7 are available, and the year of creation is described clearly. The manuscript N6 was created in the Ayutthaya period, and N5 and N7 were created in the reigns of King Rama II and IV in the Rattanakosin period. For the manuscripts N1, N2, and N8, there are some traces of the scribe or donor which hint at the manuscript's age. In the manuscript N1, the name Phraya Srisahadeva is described as the donor. He was a well-known person in the reign of King Rama III in the Rattanakosin period. Therefore, it can be assumed that N1 was created in the reign of King Rama III.

Start point of each chapter
In each manuscript, the start points of each chapter of the Mahāvagga of the Saṃyuttanikāya were compared. The results show that the start points of each chapter of the manuscripts N1, N2, N3, N4, N7, and N8 are mostly identical, and that N5 and N6 are clearly different from the others.

Uddāna
An uddāna is a summary list added after each chapter of the text, but it is not counted as content of the Pāli canon. Therefore, the details of an uddāna are more likely to differ by manuscript lineage than the text itself. Thus, uddānas provide more information to determine the age of manuscripts. The comparison shows that the uddānas of the manuscripts N1, N2, N3, N4, N7, and N8 are similar. It also seems that the manuscripts N1, N2, and N8 are occasionally in the same sub-group. But N5 and N6 are different from the others.

Selected passages
Results of comparing selected passages found that the manuscripts N1, N2, N3, N4, N7, and N8 are similar, and that N2 and N7 are occasionally in the same sub-group. It is interesting that in fascicle no. 8 of N1 and N3, both manuscripts skip page ṇaṃ but have page ṇaḥ twice, which is different from N2, N4, N7, and N8. It seems that the manuscripts N1 and N3 are also in the same sub-group. On the other hand, the manuscripts N5 and N6 are partially similar to each other and distinct from the above group.

Conclusion
Based on the acquired information and comparison results, the eight copies of the manuscripts can be divided into two groups: N1, N2, N3, N4, N7, and N8 in one group, and N5 and N6 in the other. Finally, a genealogical tree of the selected Khom script manuscripts of the Mahāvagga of the Saṃyuttanikāya can be drawn on this basis.

Notes
1) A colophon is a scribe's statement at the beginning or end of a manuscript. Normally, it provides information about the creation of the manuscript.
2) Painting style with golden and black colors.
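The grouping step described in the conclusion can be pictured as a simple feature-agreement computation. The Python sketch below is purely illustrative: the feature encodings are hypothetical stand-ins for the observations reported above (fascicle counts, chapter start points, uddāna variants, and readings of selected passages), and the greedy single-link grouping is only one of several reasonable ways to form the two groups.

from itertools import combinations

# Each manuscript is encoded by a few observed features. The values below are
# simplified stand-ins, not transcriptions of the actual manuscripts.
manuscripts = {
    "N1": {"fascicles": 17, "starts": "A", "uddana": "a1", "passages": "p1"},
    "N2": {"fascicles": 17, "starts": "A", "uddana": "a1", "passages": "p1"},
    "N3": {"fascicles": 17, "starts": "A", "uddana": "a2", "passages": "p1"},
    "N4": {"fascicles": 17, "starts": "A", "uddana": "a2", "passages": "p2"},
    "N5": {"fascicles": 14, "starts": "B", "uddana": "b1", "passages": "q1"},
    "N6": {"fascicles": 14, "starts": "B", "uddana": "b1", "passages": "q2"},
    "N7": {"fascicles": 17, "starts": "A", "uddana": "a1", "passages": "p2"},
    "N8": {"fascicles": 17, "starts": "A", "uddana": "a1", "passages": "p1"},
}

def agreement(x, y):
    """Fraction of features on which two manuscripts agree."""
    keys = x.keys()
    return sum(x[k] == y[k] for k in keys) / len(keys)

def group(mss, threshold=0.5):
    """Greedy single-link grouping: a manuscript joins a group if it agrees
    with any existing member on more than `threshold` of the features."""
    groups = []
    for name in mss:
        for g in groups:
            if any(agreement(mss[name], mss[m]) > threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

# Pairwise agreement for a few illustrative pairs:
for a, b in combinations(["N1", "N3", "N5"], 2):
    print(a, b, f"{agreement(manuscripts[a], manuscripts[b]):.2f}")
print(group(manuscripts))  # expected: [[N1, N2, N3, N4, N7, N8], [N5, N6]]

Run on these stand-in encodings, the grouping reproduces the two groups reported in the conclusion; in practice the scholar's judgement on each aspect, not a fixed threshold, decides the final tree.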
2020-10-28T17:58:52.182Z
2020-03-25T00:00:00.000
{ "year": 2020, "sha1": "a49c73e47bbb7c450745cbc440f0f2de363798a0", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/ibk/68/3/68_1160/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "d7c8e30a19a2c6a4ac1d4ac48617904a9113d20b", "s2fieldsofstudy": [], "extfieldsofstudy": [ "History" ] }
196515779
pes2o/s2orc
v3-fos-license
Emerging Evidence of Mast Cell Involvement in Oral Squamous Cell Carcinoma

Conclusion: Mast cells can promote tumour proliferation and aggressiveness via a plethora of secreted molecules, and raised levels of such secretions, as well as mast cell aggregation at Oral Squamous Cell Carcinoma (OSCC) sites, are suggestive of their involvement in progression of the pathology. There is, however, a lack of correlative data in the literature between mast cells and clinicopathological features such as tumour size, regional nodal involvement, or metastasis. The role of MC in modulating tumour growth, both positively and negatively, has become a focus.

Tissue Distribution, Activation and Migration of Mast Cells in Oral Cancer
The altered tumour microenvironment induces changes in mast cell migration, activation, and tissue distribution. Increased mast cell numbers in oral cancer have been demonstrated in a number of studies [5-12], and a correlation between Mast Cell Density (MCD) and disease progression has also been reported [13-15]. In a slightly different vein, Aromando et al. [16] noted no change in total mast cell numbers in hamster cheek pouch tissue during experimental carcinogenesis, but rather a decrease in MC in the adventitious tissue and an accumulation of MC in peritumoural and intratumoural stroma, as well as a reversion of the ratios of active/inactive MC in favour of the former [16]. Furthermore, differences in MCD between intratumoural and peritumoural stroma were evaluated, showing significantly higher MCD values in the peritumoural stroma than the intratumoural, probably reflecting the migration of the cells from adventitious tissue as well as functional roles in Extracellular Matrix (ECM) degradation and induction of cell proliferation [16]. It has also been shown that, while both MCTC and MCT counts are significantly increased throughout the tumoural stroma, the MCTC type predominates in the peritumoural (PT) stroma, while the MCT subtype predominates in the intratumoural stroma [17]. The authors hypothesise that the distribution of subpopulations reflects functional requirements: MCTC contain chymase, which plays a role in the activation of Pro-Matrix Metalloproteinase-2 (MMP-2) and Pro-Matrix Metalloproteinase-9 (MMP-9) to their active MMP-2 and MMP-9 forms, respectively [18]. Both MMP-2 and MMP-9 possess the capacity to degrade type IV collagen [19], a significant component of the basement membrane and a barrier to tumour invasion. Hence, the localisation of MCTC at tumour peripheries suggests an ECM remodelling role for these cells. Similarly, MCT predominance in the intratumoural (IT) stroma suggests a role for these cells and their potent angiogenic mediator, tryptase [20], in angiogenesis.
Transforming growth factor-beta (TGF-β) is synthesised and released by MC, and is increased in OSCC [5]. Its local roles are pleiotropic, including: its initially cytotoxic, but eventually cytokinetic, role in tumourigenesis [25]; its action as a potent chemotactic factor for MC [28]; its role in angiogenesis; and its supposed role in mediating a phenotypical change in tumours from CD34+ fibrocytes to alpha-smooth muscle antigen-positive (α-SMA+) myofibroblasts [5]. A phenotypic shift away from CD34+ fibrocytes, as they differentiate into α-SMA+ myofibroblasts, decreases repression of CD117 expression and, consequently, allows MC migration and infiltration [5].
Mast Cells in Oral Cancer and Angiogenesis
Angiogenesis and neoangiogenesis are the processes of formation of new blood vessels from pre-existing blood vessels, and formation de novo, respectively. Tumour proliferation is limited by oxygen perfusion, and tumour growth beyond approximately 2 mm has been reported to be prohibited without an adequate blood supply.

Mast Cells in Extracellular Matrix Remodelling in OSCC
An important feature of cancer progression is the ability to degrade and remodel the ECM. MC tryptase itself has also been shown to directly exert gelatinase-like activity [56], and tryptase is also involved in the processing and activation of MMP-3 and MMP-1, the latter being dependent on the activation of the former [57,58]. Chymase is also capable of directly activating MMP-1 and MMP-3 [59]. Further, MC chymase, but not tryptase, may directly cleave procollagen to fibril-forming collagen [60]. Hence MC contribute both directly and indirectly to processes which degrade the ECM. In the context of oral cancer, MMP-9 expression has been shown to be upregulated in OSCC compared with healthy tissues, and significantly correlated with MCD [61]. Another study showed that lip SCC samples that expressed higher MC counts also showed increased collagen degradation, assayed by picro-sirius staining [7]. MMP-9 has been associated with aggressive tumour growth, proteolytic processing of the ECM, and activation of cytokines (such as TGF-β) [10]. MMP-9 is capable of processing type IV collagen of the basement membrane [62] and other ECM components, which are key events in tumour invasion and metastasis (see Fig. 1). However, evidence supports a fluctuating role for MMP-9 in OSCC. High MMP-9 expression has been shown to correlate with nodal involvement, metastasis, and poor prognosis in OSCC [63]. Meanwhile, Guttman et al. [64] reported no correlation between MMP-9 and tumour size or nodal involvement. Similarly, other authors reported that MMP-9 expression was not associated with clinical variables, such as tumour stage, recurrence rate, etc. [65]. Other data suggest that MMP-2 and MMP-9 expression significantly correlates with collagen degradation and local invasiveness, though this was not related to the metastatic potential of the disease [66]. Meanwhile, it has been suggested that although MMP-2 and MMP-9 expression is high in OSCC, the ratio of active/inactive MMP-9 is low, suggesting MMP-2 is the gelatinase of greater importance in OSCC [67].
Conversely, MCs have also been implicated in collagen deposition. Vidal et al. [10] observed the accumulation of MC in areas of fibrosis surrounding malignant minor salivary gland tumours and proposed the hypothesis that ECM remodelling, specifically collagen synthesis, may be mediated by MC. Similar hypotheses have been made regarding odontogenic tumours [68] and breast cancers, in which it was suggested that tryptase played a role in collagen deposition [69]. Additionally, an association between MC and fibroblasts in the potentially malignant condition, oral submucous fibrosis, has been inferred [70,71].

Mast Cells and Tumour Proliferation, Invasion and Dissemination
Mast cells can precipitate mitogenicity in tumour cells directly [16,77]. The proliferative consequence of tryptase-mediated PAR-2 activation has been reported in lung tissue, colon cancer, and breast cancer [76,78], but few studies exist correlating MC with tumour cell proliferation in OSCC, and those that do fail to demonstrate a significant correlation [7].
A study pertaining to the potentially malignant oral condition actinic cheilitis has, however, quantified COX-2, PAR-2, MC, and tryptase in human actinic cheilitis tissues. COX-2 is responsible for eicosanoid biosynthesis from arachidonic acid, and among the metabolites is Prostaglandin E2 (PGE2), which is also capable of promoting tumour proliferation [79]. The authors reported a significant correlation between tryptase-positive MC and PAR-2 expression, as well as COX-2 overexpression, inferring a role for tryptase in PAR-2 activation and COX-2 overexpression. Increased MC counts have also been associated with higher levels of DNA synthesis in an experimental hamster oral carcinogenesis model, again implicating tryptase-mediated PAR-2 activation [16].

Conclusion
Mast cells are influenced by, and influence, malignant tumours.
2019-07-15T22:29:28.088Z
2019-03-22T00:00:00.000
{ "year": 2019, "sha1": "3f81fbded0f36f903d37a80889b93e9d29570904", "oa_license": "CCBY", "oa_url": "https://biomedres.us/pdfs/BJSTR.MS.ID.002832.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "df1f2968fe100068204103e4ab48a7fd55af4f4b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }