Toward an indoor lighting solution for social jet lag

There is growing interest in developing artificial lighting that stimulates intrinsically photosensitive retinal ganglion cells (ipRGCs) to entrain circadian rhythms to improve mood, sleep, and health. Efforts have focused on stimulating the intrinsic photopigment, melanopsin; however, specialized color vision circuits have recently been elucidated in the primate retina that transmit blue-yellow cone-opponent signals to ipRGCs. We designed a light that stimulates color-opponent inputs to ipRGCs by temporally alternating short and longer wavelength components that strongly modulate short-wavelength sensitive (S) cones. Two-hour exposure to this S-cone modulating light produced an average circadian phase advance of one hour and twenty minutes in 6 subjects (mean age = 30 years), compared to no phase advance for the subjects after exposure to a 500-lux white light equated for melanopsin effectiveness. These results are promising for developing artificial lighting that is highly effective in controlling circadian rhythms by invisibly modulating cone-opponent circuits.

People who spend most of their time under artificial light often suffer a phase delayed circadian rhythm 1-3. The discrepancy between an individual's delayed biological rhythm and the daily timing determined by social constraints like school and work schedules causes "social jet lag" 4, which is associated with disturbed sleep, daytime fatigue, reduced cognitive function, and a general feeling of unwellness. A potential solution to social jet lag is to develop artificial lighting that is capable of stimulating ipRGCs in the morning during times when such stimuli produce phase advances 5 (Figure 1A).

Figure 1. A. Human phase response curve from Khalsa (2003) 5, aligned with earth time so that the beginning of the internal biological night occurs at sunset and the end of the internal biological night occurs before wake time just after sunrise, as indicated below the x-axis of the curve. B. (left) Illustration of the color vision circuitry for S-ON and S-OFF types of primate ipRGCs. (right) Illustration of the spectrally opponent response of an S-ON ipRGC with S − (L+M) cone inputs. C. Image of a sunset in Seattle, Washington, illustrating how contrasting short and long wavelength light near the horizon produces a stimulus capable of driving spectrally opponent inputs to ipRGCs, making them act as sunrise/sunset detectors. D. Spectral distributions of the experimental light stimuli and their predicted effects on the color-opponent inputs to ipRGCs. (Top left) Spectrum of the experimental white light with chromaticity coordinates 0.333, 0.333. (Top middle) Spectrum of the LED-derived experimental "blue" light with a spectral peak at 476 nm. (Bottom left and middle) The product of wavelength-by-wavelength multiplication of the spectral distribution of the white light (bottom left) or the blue light (bottom middle) with the spectrally opponent response of an ipRGC; integration of each curve across wavelength yields the predicted very small relative response of the ipRGC to the white light and the predicted large relative response to the blue light. (Right) The two spectra that are alternated to produce the S-cone modulating light.
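The prediction illustrated in Figure 1D can be sketched numerically: the net cone-opponent drive to an ipRGC is approximated by multiplying a light's spectral distribution, wavelength by wavelength, with an S − (L+M) opponent sensitivity and integrating across wavelength. The following minimal Python sketch uses crude Gaussian stand-ins for the cone sensitivities and stimuli; it only illustrates the logic and does not reproduce the paper's calibrated calculations.

```python
import numpy as np

wl = np.arange(380.0, 701.0)          # 1-nm wavelength grid

def bump(peak, width):
    """Placeholder Gaussian spectral curve, normalized to unit area on this grid."""
    g = np.exp(-0.5 * ((wl - peak) / width) ** 2)
    return g / g.sum()

# Rough stand-ins for cone spectral sensitivities (not the photopigment template
# used in the study).
S, M, L = bump(420, 30), bump(530, 40), bump(559, 40)
opponent = S - 0.5 * (L + M)          # S - (L+M) opponent sensitivity

def drive(spectrum):
    """Wavelength-by-wavelength product with the opponent curve, summed across the
    1-nm grid (a discrete approximation of the integral)."""
    return float(np.sum(spectrum * opponent))

white = np.ones_like(wl)                               # equal-energy "white"
short_phase = np.exp(-0.5 * ((wl - 427) / 15) ** 2)    # S-rich phase of the modulating light
long_phase  = np.exp(-0.5 * ((wl - 545) / 15) ** 2)    # L/M-rich phase

print(drive(white))        # ~0: the excitatory and inhibitory lobes cancel
print(drive(short_phase))  # > 0: falls on the excitatory S lobe
print(drive(long_phase))   # < 0: falls on the inhibitory L+M lobe
```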
With regard to circadian rhythms, there has been an emphasis on the effects of light on the intrinsic photopigment, melanopsin; however, ipRGCs can also be activated by light absorption in cone photoreceptors whose signals are carried by color opponent circuitry (Figure 1B) in which short (S) and long (L) plus middle (M) wavelength cones have opposite signs 6-8. The color opponent input to ipRGCs may have evolved so that changes in the color of the sky at dawn and dusk (Figure 1C) can contribute to synchronization of the internal circadian clock such that the internal biological night begins at sunset and ends before wake time just after sunrise. Previous experiments have provided evidence for a role for color opponency in circadian phototransduction 9 and clear evidence for an S-cone contribution in humans 10,11.

Compared to melanopsin, cone-opponent circuits activate ipRGCs at much lower thresholds 12. Thus, at common indoor low illumination levels, lights optimized to stimulate the color-opponent circuits could be much more effective in producing circadian phase advances than typical white artificial lighting. Color opponent circuitry in humans is normalized through experience to null to white 13. Thus, even though artificial white light stimulates S-cones, because the excitatory and inhibitory cone components of the S vs. (L+M) circuitry are balanced by white light, it is predicted to have little net effect (Figure 1D). Narrowband lights that primarily stimulate one side of the opponent circuit are predicted to be much more effective (Figure 1D). Finally, the circuitry carrying cone signals has relatively transient response properties, so under laboratory conditions using narrow band lights that primarily stimulate S-cones, their contributions decay upon extended light exposure 10,11. Thus, the intensity, spectral and temporal characteristics of the light must all be considered when developing indoor illumination capable of combating social jet lag.

We designed a light that stimulates color-opponent inputs to ipRGCs by temporally alternating short and longer wavelength components that strongly modulate short-wavelength sensitive (S) cones. We determined the ability of a morning exposure to this light to produce a phase advance capable of combatting social jet lag compared to a static white light and a static narrow band blue light. Our goal is to evaluate the most effective dynamic lighting approach for circadian photoentrainment at the comparatively low general lighting lux levels typical for homes, offices, schools, and health care facilities. We hypothesize that practical lighting solutions that drive cone-based color-opponent inputs to ipRGCs in the early morning can mediate circadian phase advances that will promote improved mood and cognitive function and combat social jet lag and other circadian problems such as seasonal affective disorder.

Participants' circadian phase relative to solar time

When humans are exposed only to natural light, the internal circadian clock synchronizes to solar time such that the internal biological night begins at sunset and ends before wake time just after sunrise 1 (Figure 1A). We used dim light salivary melatonin onset (DLMO) as a measure of circadian phase. Figure 2A shows the rise in evening melatonin levels assayed from saliva samples for the six subjects who participated in this study (each subject is represented by a different color).
Compared to being synchronized to solar time (shown by the dashed gray curve; Figure 2A), the subjects were, on average, phase delayed by 2.8 hours. The white light stimulates the excitatory and inhibitory sides of the color-opponent response approximately equally, thus producing little net drive to the ipRGCs from cones. In contrast, almost all wavelengths in the blue light stimulate the S-cones on the excitatory side of the response of the color-opponent system. Thus, the white light is expected to produce a null response, and the blue light is predicted to be many times more effective at driving the color-opponent pathways upstream of the ipRGCs (Figure 1D).

Figure 2. Curves showing the nighttime rise in salivary melatonin levels under dim light for conditions equated for melanopsin effectiveness. A. Rise in evening melatonin levels for the six subjects who participated in this study (each is shown in a different color). The dashed gray curve shows the predicted rise if the subjects were aligned to earth time, where the beginning of internal biological night occurs at sunset. On average, subjects were phase delayed 2.8 h. B. Average rise in evening melatonin after two-hour exposure to the static white light (gray curve) of Figure 1A compared to a baseline (dashed curve) measured on day one of the 3-day protocol. There was a slight, nonsignificant phase delay associated with the white light exposure (n=3 subjects). C. Average rise in evening melatonin (blue curve) after a two-hour exposure to the 476-nm blue light of Figure 1B compared to baseline (dashed curve) (n=6 subjects). The 476-nm light produced a phase advance of 40 minutes. D. The rise in evening melatonin (orange curve) after two-hour exposure to the 19 Hz S-cone modulated light compared to baseline (dashed curve) (n=6 subjects). This light produced a phase advance of 1 hour and 20 minutes.

To evaluate the ability of lights with different spectral and temporal characteristics to advance circadian phase, we followed a 3-day protocol for each light condition. On the evening of the first day, subjects collected saliva samples every hour starting at 6 PM and ending at 2 AM. The following day, the samples were analyzed to measure the rise in melatonin the evening before, and the time of DLMO was determined for each subject, defined as the time the melatonin levels reached 20% of maximum 14. On the morning of the third day of the protocol, each subject viewed a test light for two hours centered 10.5 hours after their individual DLMO. This corresponds to the time of the circadian cycle expected to produce the maximum light-induced phase advance (Figure 1A) 5. On the evening of the same day, subjects again collected saliva samples that were used to evaluate whether the light exposure produced a phase advance.

Figure 2B shows the results for the static white light. After exposure to the static white light, the average rise in evening salivary melatonin levels did not differ significantly from the baseline measured before exposure. The slight phase delay after the exposure is within experimental error (p > 0.05; paired t-test). In contrast, the 476 nm blue light that was equated in melanopsin effectiveness to the static white light produced a phase advance of 40 minutes (Figure 2C).

Our goal is to develop lighting that can replace standard indoor white lighting and give people control of their circadian phase.
A static blue light (like Figure 1D; top left) is not an acceptable substitute for standard lighting because it must be pure blue to drive the color vision circuitry; any added long-wavelength components that make the light whiter cancel its effectiveness. As an alternative, we tested a temporally modulated light because, unlike the melanopsin drive to ipRGCs, which is quite sustained, the cone inputs have transient responses. There are two types of color-opponent ipRGCs in primates, S-ON and S-OFF, but both are ON-OFF cells, responding both to the onset of one colored light and to the offset of light of the opposing complementary color 6.

Thus, theoretically, the best stimulus is a light that alternates between short and long-wavelength components such that the color-opponent cells are stimulated by the simultaneous offset of one spectral component and the onset of the opposing component. It is possible to produce lights that, when temporally alternated, appear white but strongly modulate S-cones. The S-cone inputs to ipRGCs are tuned to respond to higher temporal frequencies than those serving hue perception, making it possible to modulate the S-cone input to ipRGCs strongly while minimizing (and ideally eliminating) the percept of flicker. The S-cone modulating light tested here consisted of a 19 Hz alternating pulse train designed to modulate the quantal catch of S-cones with a differential of 100X between the two phases. This was done by alternating the intensities of LEDs peaking at 427 nm vs. 545 nm, and the addition of light from a 638 nm LED made the S-cone modulated pulse train appear nominally white. The intensity of this light was adjusted to produce a time-averaged quantal catch in melanopsin matched to the 500-lux static white light of Figure 1D. As shown in Figure 2D, the S-cone modulated "white light" elicited a striking 1 hr 20 min phase advance.
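One way to see how such a stimulus can be specified is as a small linear system: with three LED primaries and three cone classes, the LED drive levels of the second phase can be solved so that the L- and M-cone catches stay constant while the S-cone catch changes by a factor of 100. The sketch below illustrates the idea with placeholder Gaussian LED and cone curves; the study itself used a photopigment template and measured spectra, so all numbers here are illustrative assumptions.

```python
import numpy as np

wl = np.arange(380.0, 701.0)                        # 1-nm wavelength grid

def bump(peak, width):
    """Placeholder Gaussian spectral curve on the wavelength grid."""
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# Placeholder cone sensitivities (rows: S, L, M) and LED spectra (columns: 427/545/638 nm).
cones = np.vstack([bump(420, 30), bump(559, 40), bump(530, 40)])
leds = np.vstack([bump(427, 10), bump(545, 15), bump(638, 10)]).T

A = cones @ leds                  # A[c, k]: catch of cone class c per unit drive of LED k

w_phase1 = np.array([1.0, 0.2, 0.3])                # arbitrary LED drive levels, phase 1
catch1 = A @ w_phase1                               # resulting S, L, M cone catches

# Phase 2 target: identical L and M catches, S catch reduced 100-fold.
target = catch1 * np.array([0.01, 1.0, 1.0])
w_phase2 = np.linalg.solve(A, target)               # LED drive levels, phase 2

print("phase-2 LED drives:", w_phase2)
print("S-cone catch ratio:", catch1[0] / (A @ w_phase2)[0])   # 100 by construction
```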
Blue lights are particularly effective in driving ipRGCs 15,16, and it is often assumed that this is mediated by melanopsin. However, one novel aspect of the experiments here is that the blue and white lights were equated for melanopsin effectiveness; thus, the large effect of blue compared to white cannot be attributed to activation of melanopsin. Since the white condition nulls the color-opponent response (Figure 1D; left), it effectively isolated the melanopsin drive to the ipRGCs. We conclude that under the relatively low light conditions and two-hour exposure duration used here, melanopsin activation is insufficient to produce any significant circadian phase advance. Moreover, it follows that the substantial phase advance produced by the blue light equated in melanopsin effectiveness to the white light is the result of activation of the color-opponent circuitry, not melanopsin, as commonly assumed. The implication of the result reported here is that, since modest illumination level (ca. 500 lux) white lights presented for relatively short duration exposures (≤ 2 hours) are ineffective in stimulating melanopsin sufficiently to produce a phase advance, any practical indoor lighting solution to social jet lag and other problems associated with a delayed circadian clock should focus on stimulating the color opponent inputs to ipRGCs.

Previously, one hour of bright white (~10,000 lux) light produced a 40 minute advance in circadian phase 17. When white lights are sufficiently bright, they can produce a phase advance by activating the much less sensitive melanopsin expressed in human ipRGCs, in contrast to the 500-lux static white light that was ineffective here (Figure 2B). However, light that strongly modulates the S-cones for two hours (500 lux × 2 hr vs. ~10,000 lux × 1 hr) amounts to 10X fewer lux-hours but produced a circadian phase advance per exposure hour that was twice as great. Thus, the S-cone modulating light is twice as effective as very bright white light at 1/20th the intensity.

As a different alternative to static illumination, Zeitzer et al. administered 60 2-msec pulses of 473 lux broad spectral band light over an hour and produced a phase change nearly half that of 1-hour 10,000 lux static white light 18. We assume that the increased effectiveness is due to the involvement of cone circuits, as in the experiments reported here, since transient white flashes drive spectrally opponent cone inputs to ipRGCs by virtue of differences in the temporal properties of their components. However, because of the spectrally opponent nature of the cone inputs to ipRGCs, modulating S- vs. LM cones is superior to non-spectrally selective cone modulation. The S-cone modulating light is 4 times more effective, and the exchange between long and short wavelength components can be invisible, whereas bright flashes every minute are not a practical alternative to traditional illumination.

Earlier, Spitschan and colleagues 19 measured melatonin suppression using two light stimuli which differed exclusively in the amount of S-cone excitation, by almost two orders of magnitude, but not in the excitation of L and M cones, rods, and melanopsin. Since the light with stronger S-cone excitation did not differentially suppress melatonin, it might be interpreted to suggest a lack of support for a role for S-cone signals in circadian phototransduction. However, the Spitschan et al. experiment relies on an assumption of additivity which does not apply to color opponent systems. Static white lights can produce strong S-cone excitation but provide zero drive to ipRGCs because of the opponent nature of the cone inputs. The "S-cone isolating light" used by Spitschan was a pinkish color, compared to the "S−" light, which was orangish. This is because, to equate the two lights for L and M effectiveness, the S+ light had to include about equal amounts of long and short wavelength light, nulling the color opponent response much as occurs with the white light, as illustrated in Figure 1D. Thus, the Spitschan et al. result does not contradict the experiments showing that color opponent circuitry is involved in circadian phototransduction 10,11.

The color of the sky at sunrise and sunset (Figure 1C) is the ideal cue for synchronizing one's internal body clock to solar time. The intensity of light overhead can vary greatly for many reasons, making it an unreliable indicator of the time of day, but the orange color of the sky at the horizon always indicates that it is sunrise or sunset. Retinal ganglion cells act as feature detectors. The color opponent inputs to ipRGCs confer the ability to act as sunrise/sunset detectors. The orange color of the horizon that characterizes the rising and setting sun produces a color contrast with the blue sky (Figure 1C).
The blue and orange parts of the image on the retina produced by the sunset, moving across the receptive field of an ipRGC, activate the transient color-opponent response very strongly. As shown in Figure 1A, when our internal clock is aligned with solar time, sunrise occurs after the peak of the phase advance portion of the phase response curve and sunset occurs before the peak of the phase delay portion. When the ipRGCs are strongly stimulated at both dawn and dusk, the human phase response curve is perfectly tuned to keep the phase of our internal pacemaker precisely aligned with solar time.

Color opponent mechanisms are associated with sensory systems that regulate circadian activity throughout the animal kingdom, including in fish and reptiles 20,21. Ancient single-celled organisms exhibit color sensitivity that they use to regulate their circadian activity 22. It appears that the capacity to sense colors originally evolved to serve circadian rhythms, not hue perception 23. The fact that primates have evolved multiple independent circuits that provide color-opponent inputs to ipRGCs is a testament to the importance of these sunrise and sunset detectors to our evolutionary survival. Thus, it makes perfect sense to develop lighting that uses these color vision circuits to take control of our circadian wellbeing.

Our goal is to take control of our circadian rhythms by adding light exposures that strongly modulate S-cone opponency in the morning in the context of the light experience of people's regular daily lives. Thus, here, each subject was exposed to the experimental lights on a background of their regular daily lives as academics at the University of Washington. In this context, exposure to a 500-lux static white light produced no significant phase advance, but a light with the same melanopsin effectiveness that temporally modulated S-cone color opponent circuitry produced phase advances that, if administered in the context of a person's normal lighting routine, would be capable of offsetting the average 2.8-hour delay, therefore eliminating social jet lag.

The discoveries of color vision circuitry inputs to primate ipRGCs 7,8, together with the evidence that has accumulated showing the role of that circuitry in circadian phototransduction, indicate a complete paradigm shift in the strategy to develop healthy circadian lighting, away from focusing on melanopsin and toward emphasizing the cone inputs. Melanopsin might have been emphasized over the powerful effects of the color-opponent inputs to ipRGCs because ideas about resetting of phase in humans have been extrapolated from experiments on rodents that have emphasized melanopsin. While it has been recognized that ipRGCs could be activated by classic photoreceptor input in the absence of melanopsin in mice 24, neither M1 nor M2 ipRGCs in mice were reported to have inputs from the color-opponent circuitry observed in primates 25,26; however, more recently, differential input between S and M cones was shown to produce responses in the suprachiasmatic nucleus of mice, highlighting the importance of cone inputs for circadian entrainment, especially in cone dominated species 27.
Here, we demonstrate that, rather than focusing on melanopsin, under the constraints of making lights that appear white with intensities like standard artificial lighting used indoors, stimulating ipRGCs by modulating S-cones has promise to give people control of their circadian rhythms to improve mood, sleep, and health.

All methods were performed in accordance with the relevant guidelines and regulations. Data collected and used in this study are available upon request.

Miniature, programmable, and portable ganzfeld design

Light output was measured with a Minolta instrument positioned 1 meter behind each goggle. The two spectra that were alternated temporally to drive high S-cone modulation were calculated theoretically using retinal sensitivities for S-cones, melanopsin, M-cones, and L-cones given by a photopigment template 28 with peaks set at 420 nm, 480 nm, 530 nm, and 559 nm, respectively, corrected for absorption by the lens 29. For the S-cone modulating light, the ratio of S-cone activation between the temporally alternated spectra was 100:1, while L- and M-cone activations were held constant between the two temporal phases. The alternating spectra (Figure 1D right; top and bottom) were programmed onto the goggles and modulated at 19 Hz, presented as a square wave with 50% duty cycle. The radiance of these lights measured at the back of the goggles was 150.5 μW/cm². The alternation of the two spectra produced approximately 500 lux at the subject's pupil plane, as measured with a lux meter (Digital Light Meter, LX 1330B). Melanopsin activation was determined by integrating the measured time-averaged spectrum with the corneal sensitivity for melanopsin. The two other conditions, the static white light spectrum (Figure 1A), which produced a radiance measured at the back of the goggles of 72.9 μW/cm², and the static blue spectrum from the 476 nm LED (Figure 1B), which produced a radiance measured at the back of the goggles of 31.6 μW/cm², were adjusted in intensity to produce the same time-averaged melanopsin activation as the S-cone modulated light.

The Institutional Review Board at the University of Washington approved the human subjects research. Research involving human subjects was performed in accordance with local and federal regulations. Human subjects research adhered to the principles embodied in the Declaration of Helsinki. Informed consent was obtained from all participants. The subjects were adult volunteers from the University of Washington community in Seattle.

Six healthy adult subjects (2 male and 4 female; mean age = 30; range 23-43) continued with their daily academic lives during the winter months (December to February) in Seattle, WA, over the course of the experiments. The purpose of the experiments was to determine the effects on circadian phase of three different lighting paradigms, which were viewed for two hours centered 10.5 hours after each subject's individual DLMO. Lights administered at this time should produce the maximum circadian phase advance (Figure 1A). Circadian phase was determined from the rise in evening melatonin levels assayed from saliva samples. To measure phase accurately it was important to identify subjects with a robust, reliable evening rise in salivary melatonin.
In addition, it was important that our participants were stably entrained to the 24-hour environmental cycle, even though we expect most members of the University of Washington community to suffer from some amount of phase delay. New recruits collected baseline evening salivary melatonin samples every hour from 6 PM until 2 AM. During this period, they were instructed to generally keep illumination levels, as measured by an illuminometer, below 10 lux. Short periods of higher illumination were allowed when necessary, but illumination was always kept below 30 lux. Subjects also confirmed that they were keeping a regular sleep-wake schedule in the days surrounding the experiment. After the first baseline salivary melatonin measurement, the only participants who continued with the experiment were those who showed a robust rise in salivary melatonin between 6 PM and 2 AM. Four of the original recruits did not meet this requirement. Failure may occur because subjects' internal clocks are free running, or they may be arrhythmic. This high number of failures may be a consequence of the large number of gray and short winter days in Seattle.

Of the six subjects who met the inclusion criteria, all are graduate students, post-docs, or faculty (one assistant professor) involved in studies related to circadian rhythms, and five of them are co-authors on this manuscript. As such, they were all very motivated to adhere to the somewhat grueling demands of the protocol. These included adhering to the strict evening lighting regimen, collecting saliva on a strict schedule, proper handling of the saliva samples, and viewing the lights at the times and durations specified. We believe that having motivated, compliant participants was a key to obtaining precise and reliable results. Salivary melatonin measurements are objective, so the fact that participants were not naïve to the objectives of the experiment could not bias the results.

Experimental protocol for viewing light stimuli

The experiment was conducted during the COVID-19 pandemic. Safety protocols prevented participants from coming to the laboratory for experimental procedures; thus, all experimental procedures were conducted in participants' homes. Saliva samples were collected by the subjects at one-hour intervals starting at 6 PM PST and placed on dry ice immediately after collection. Two separate saliva samples were collected at each time point, which were analyzed separately and averaged to minimize noise for each individual timepoint. Since the experiments were done in the winter in Seattle, saliva collection was done well after sunset, so there was no possibility of exposure to sunlight during saliva collection, and subjects stayed in their homes with the illumination generally kept below 10 lux and always below 30 lux. Circadian timing was measured by the dim light salivary melatonin onset (DLMO; Salimetrics melatonin ELISA). DLMO20% was calculated as the time point at which melatonin levels reached 20% of the fitted peak-to-trough amplitude of each person's data. The data were fitted to an integrated Gaussian (error function) by minimizing the sum of squared errors. Maximum phase advances were assumed to occur 10.5 hours after DLMO20%. Administrations of a 2-hour pulse of the therapeutic lights were therefore centered around 10.5 hours after DLMO20%.
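As a rough illustration of the DLMO20% computation described above, the following sketch fits an integrated Gaussian (error function) to hourly melatonin samples by least squares, reads off the time at which the fit reaches 20% of the fitted amplitude, and compares before/after onsets with a paired t-test. The sample values and starting parameters are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf
from scipy.stats import ttest_rel

def melatonin_rise(t, baseline, amplitude, onset_center, width):
    """Integrated Gaussian (error function) model of the evening melatonin rise."""
    return baseline + 0.5 * amplitude * (1.0 + erf((t - onset_center) / width))

# Hypothetical hourly samples (hours after 18:00) for one subject.
t = np.arange(0.0, 9.0)                                               # 18:00 .. 02:00
y = np.array([1.0, 1.2, 1.1, 2.0, 5.5, 9.8, 12.0, 12.8, 13.1])        # pg/mL, made up

params, _ = curve_fit(melatonin_rise, t, y, p0=[1.0, 12.0, 4.0, 1.0])
baseline, amplitude, onset_center, width = params

# DLMO20%: first time at which the fit reaches 20% of the peak-to-trough amplitude.
fine_t = np.linspace(t[0], t[-1], 2000)
fit = melatonin_rise(fine_t, *params)
dlmo20 = fine_t[np.argmax(fit >= baseline + 0.2 * amplitude)]
print(f"DLMO20% at {18 + dlmo20:.2f} h clock time")

# Phase advance per subject = baseline DLMO20% minus post-exposure DLMO20%,
# compared with a paired t-test (the study's statistical test).
dlmo_before = np.array([22.9, 23.4, 23.1, 22.7, 23.8, 23.2])          # invented values
dlmo_after  = dlmo_before - np.array([1.4, 1.1, 1.5, 1.2, 1.3, 1.5])
print(ttest_rel(dlmo_before, dlmo_after))
```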
Lights were administered in the subjects' homes the morning after the baseline internal circadian timing was measured. To determine the phase advance caused by each light, circadian timing was remeasured on the evening of the day the light was administered. Phase advances were calculated as the difference between the DLMO20% after light administration and the baseline DLMO20%. Differences in phase produced by the light treatments were evaluated using a paired t-test, using each person's DLMO measurement before and after treatment as a pair.

Data Availability. Contact J.A.K. to request the data from this study.
RFID-Based Vehicle Positioning and Its Applications in Connected Vehicles

This paper proposes an RFID-based vehicle positioning approach to facilitate connected vehicle applications. When a vehicle passes over an RFID tag, the vehicle position is given by the accurate position stored in the tag. At locations without RFID coverage, the vehicle position is estimated from the most recent tag location using a kinematics integration algorithm until it is updated by the next tag. The accuracy of RFID positioning is verified empirically in two independent ways, one using radar and the other a photoelectric switch. The former is designed to verify whether the dynamic position obtained from RFID tags matches the position measured by radar, which is regarded as accurate. The latter aims to verify whether the position estimated from the kinematics integration matches the position obtained from RFID tags. Both methods support the accuracy of RFID-based positioning. As a supplement to GPS, which suffers from issues such as inaccuracy and loss of signal, RFID positioning is promising in facilitating connected vehicle applications. Two conceptual applications are provided here, one in vehicle operational control and the other in Level IV intersection control.

Introduction

On 3 February 2014, the U.S. Department of Transportation (USDOT) officially announced its decision to move forward with vehicle-to-vehicle communication technology for light vehicles. After decade-long research and experimentation, this decision signifies USDOT's resolution to transform transportation safety and mobility by allowing cars to "talk" with each other. A long list of innovative applications has been tested or is under way, including cooperative collision warning [1-3], intersection safety support [4], intersection movement assist, etc. In these applications, real-time vehicle positioning is assumed in their algorithms and protocols for motion guidance, operational control, and interaction with other vehicles. This is a reasonable assumption since Global Positioning System (GPS) technology has become widely available and affordable. As a matter of fact, many vehicles have already been equipped with GPS for navigation and tracking purposes. However, GPS-based vehicle positioning begins to show its limitations as connected vehicles advance toward real-world implementation, especially when the success of these applications depends heavily on the accuracy of vehicle positioning. These limitations include poor or no signals in certain areas, especially urban canyons, and limited positioning accuracy in a dynamic environment. To address the above limitations, this paper proposes a supplementary yet independent approach, i.e., radio-frequency identification (RFID)-based vehicle positioning, to facilitate connected vehicle applications at critical locations where GPS service is unavailable or unreliable. This paper is arranged as follows: the next section identifies research gaps based on a survey of the literature on vehicle positioning technologies. Following that, Section 3 proposes the RFID-based vehicle positioning approach with design details. Section 4 verifies the accuracy of the RFID-based vehicle positioning in two empirical ways. Section 5 provides two conceptual examples to illustrate the application of RFID-based vehicle positioning in connected vehicles. Lastly, conclusions are drawn in Section 6.
Research Gap in Vehicle Positioning

Due to its wide coverage and availability, GPS seems to be ideal for connected vehicle applications [5,6]. Stand-alone GPS has the capability of achieving an accuracy of about 20-30 m, which can be narrowed down to 8-12 m after the removal of selective availability [7]. Differential GPS can enhance accuracy further, to 1-2 m. However, it relies on ground-based reference stations which only cover limited areas and, thus, significantly drive up the cost [8]. In connected vehicles, the requirement on vehicle positioning varies with the nature of the application. In general, applications involving a large spatial and temporal scope, such as roadway incident assistance and dynamic routing, do not require accurate positioning updated at a high frequency. For example, an accuracy of 5-10 m is required to warn drivers of a hazard at a fixed location (e.g., an accident site), for which general-purpose GPS receivers meet the need. In contrast, applications in small spatial and temporal areas, such as motion control and especially crash avoidance, require accurate positioning in real time. For example, most safety applications require one to two meters [9]. Shladover et al. [10] pointed out that assigning vehicles to the correct lanes would require a standard deviation of about 1 m, but 50 cm accuracy is likely to produce significantly better performance, especially for blind spot warning. Combined with adverse locations such as urban canyons, these situations pose a great challenge to GPS-based positioning. As such, further enhancements of GPS or alternative vehicle positioning technologies are called for. Consequently, a number of approaches have been proposed, including inertial systems, dead reckoning, information fusion, and map matching. To improve positioning performance with GPS/DGPS, a common choice is to integrate it with inertial systems. For example, Farrell et al. [11] implemented a real-time carrier phase DGPS aided inertial navigation system which is able to achieve an accuracy at the centimeter level. Huang and Tan [12] used a Kalman filter to incorporate in-vehicle motion sensors in the refinement of vehicle position. However, Jiménez et al. [13] pointed out that this approach is valid only if inertial measurements are used before DGPS signals begin to degrade. In addition, the fine accuracy relies on DGPS, which is a costly solution only available at limited locations. Dead reckoning [14] advances a vehicle's position from its last known position by integrating its speed over elapsed time and course. However, this approach is only good for a short period of time and is subject to cumulative errors. Closely related to the use of inertial systems is the fusion of information [15]. For example, GPS signals can be combined with inertial sensors and digital maps to infer the best estimate of vehicle location [16]; Edelmayer et al. [17] used a cooperative federated filtering approach to enhance position estimation based on a variety of position measurements, e.g., from the on-board vehicle positioning system, from other cooperating vehicles in the vicinity, as well as from the immediate roadside environment via communication. Bevly [18] attempted to correct inertial sensor errors by using a kinematic Kalman filter estimator to integrate GPS signals, accelerometers, and rate gyroscopes. Islam et al. [19] implemented a multi-sensor system consisting of a single-axis gyroscope and an odometer integrated with a GPS receiver.
Though information fusion can achieve high accuracy in some cases, the resultant position is inevitably an estimate that depends on multiple sources of information. An error in, or the loss of, one component would degrade estimation quality. Map matching determines the position of a vehicle by constructing a trajectory from a few reliable locations that the vehicle has recently passed and then matching this trajectory to a digital map to find the best fit among multiple likely arcs [20,21]. This approach is best suited for applications that rely on a GPS receiver as the sole means of positioning. However, the uncertainty introduced by inferences in the underlying algorithm limits its use in safety-related applications. Therefore, in order to obtain high reliability, low cost, and sufficient accuracy under all operational conditions, there exists a great demand for alternative approaches that are readily available, do not rely on GPS, and minimize the need for estimation and fusion. In this context, the approach of using radio sensors such as infrared, microwave, and radio frequency devices has received increasing attention [22,23]. Capable of tracking moving objects [24], these devices can be mounted at the roadside to transmit data to and receive data from vehicles passing in close proximity if they are equipped with transceivers. These systems have been employed in several research projects [25-27] and have already been used in transportation applications such as vehicle speed control [28], real-time bus recognition [29], group location management [30], and electronic toll collection. Due to its low cost and reasonable accuracy, radio-frequency identification (RFID) is promising as a supplement to GPS in connected vehicle applications at critical locations where GPS is unavailable or unreliable but the demand for real-time positioning is high. In the next section, we present an RFID-based vehicle positioning approach; two conceptual applications of the above nature are provided in Section 5.

RFID-Based Vehicle Positioning

The RFID tags are passive tags fastened to the road surface that contain position information, e.g., the distance to a reference point, lane number, and direction of travel. When a vehicle passes above an RFID tag, the RFID reader and antenna carried by the vehicle activate the tag and read in the position information. The layout of the RFID positioning system is illustrated in Figure 1 and the hardware installation is pictured in Figure 2. An example design of the format of the position information is provided in Table 1. In this setup, we used an XCAF-12L panel antenna (Invengo Information Technology Co. Ltd., Shenzhen, China), which is a rugged UHF directional antenna with a central frequency of 915 MHz and circular polarization. The RFID reader was an Invengo XCRF-502E with a working frequency of 902-928 MHz and a working range of up to 10 meters. The RFID tags were ZT-T80s with an effective range of 2-100 m and an identification speed of up to 200 km/h. To facilitate the communication between the reader and the tags, an electronic control unit (ECU) was developed. As indicated in Figure 2, the ECU was used to control the reader by RS232 and to transfer data to other modules by CAN bus. The connection of the RFID reader, ECU and CAN bus is shown in Figure 3. The ECU includes a power module and a CPU based on the Motorola 9S08DZ16 chip. The serial port transceiver is a MAX232 and the CAN transceiver is a TJA1050.
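For illustration only, the following Python sketch mimics the data path just described on a host computer: tag records are read from the reader over a serial (RS232) link and rebroadcast on a CAN bus. The record format, CAN identifier, and field packing are assumptions made for the example; the actual ECU is embedded firmware on the 9S08 chip, not Python.

```python
import struct

import can        # python-can
import serial     # pyserial

# Hypothetical ASCII record emitted by the reader over RS232, e.g.
#   "TAG,1523.4,2,EB\n"  ->  distance to reference point (m), lane number, direction.
def parse_tag_record(line: bytes):
    fields = line.decode("ascii", errors="ignore").strip().split(",")
    if len(fields) != 4 or fields[0] != "TAG":
        return None
    return float(fields[1]), int(fields[2]), fields[3]

def forward_tag_positions(port="/dev/ttyUSB0", channel="can0"):
    """Read tag records from the RFID reader and rebroadcast them on the CAN bus."""
    reader = serial.Serial(port, baudrate=115200, timeout=1.0)
    bus = can.interface.Bus(channel=channel, bustype="socketcan")
    while True:
        record = parse_tag_record(reader.readline())
        if record is None:
            continue                      # no tag in range or malformed record
        distance_m, lane, direction = record
        # Pack distance (cm, uint32), lane (uint8) and a direction code (uint8)
        # into an 8-byte classic CAN frame; the ID 0x120 is an arbitrary choice.
        payload = struct.pack(">IBB2x", int(distance_m * 100), lane,
                              0 if direction == "EB" else 1)
        bus.send(can.Message(arbitration_id=0x120, data=payload,
                             is_extended_id=False))

if __name__ == "__main__":
    forward_tag_positions()
```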
Since each RFID tag contains static position information at a fixed location, a need arises for a vehicle in motion to acquire its accurate position in a continuous fashion in order to support connected vehicle applications. As such, a kinematics integration algorithm has been devised and added to the RFID positioning system (see Figure 4):

p = p_0 + (1 − F) · Δd,  with  Δd = Σ_{i=1}^{k} (v_i Δt_i + ½ a_i Δt_i²)

where p is the current position; p_0 is the position stored in the RFID tag read most recently; Δd is the driving distance estimated by integrating speed; F is a flag whose value is 1 when the system is able to read information from an RFID tag and 0 otherwise; k is the data sequence number, which starts counting when the system fails to read a tag and resets to 0 when reading resumes; v and a are the vehicle speed and acceleration, respectively; and τ = Σ_i Δt_i is the time elapsed since the last successful reading from an RFID tag.
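The update rule above amounts to a few lines of code: snap to the tag position whenever a tag is read, otherwise dead-reckon from the last tag using speed and acceleration samples. The sketch below is a minimal illustration under assumed symbol names and a fixed sampling interval; it is not the authors' on-board implementation.

```python
from dataclasses import dataclass

@dataclass
class RfidPositionEstimator:
    """Dead-reckon from the last tag position using speed/acceleration samples."""
    tag_position: float = 0.0   # p_0: position stored in the most recently read tag (m)
    delta_d: float = 0.0        # distance integrated since that tag was read (m)
    position: float = 0.0       # current position estimate p (m)

    def update(self, tag_reading, speed, accel, dt):
        """tag_reading is the absolute position from a tag (m), or None if no tag was
        read this cycle; speed (m/s), accel (m/s^2) and dt (s) come from vehicle sensors."""
        if tag_reading is not None:          # F = 1: snap to the tag position
            self.tag_position = tag_reading
            self.delta_d = 0.0
        else:                                # F = 0: integrate the kinematics
            self.delta_d += speed * dt + 0.5 * accel * dt * dt
        self.position = self.tag_position + self.delta_d
        return self.position

# Example: a tag at 100 m, then 2 s of travel at a steady 20 m/s with no tag in range.
est = RfidPositionEstimator()
print(est.update(tag_reading=100.0, speed=20.0, accel=0.0, dt=0.1))   # 100.0
for _ in range(20):
    est.update(tag_reading=None, speed=20.0, accel=0.0, dt=0.1)
print(est.position)                                                   # 140.0
```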
Experimental Verification

The accuracy of the RFID positioning system can be affected by the RFID communication range and the distance between tags. Since RFID only communicates within a few meters, reading from a tag only occurs when a vehicle moves over the tag, which ensures accuracy. If the vehicle fails to obtain position updates from tags, its position has to be estimated. The longer the kinematics integration runs, the larger the estimation error. Therefore, it is necessary to avoid long gaps between tags to ensure accuracy. In order to verify the feasibility and the accuracy of the positioning approach, this paper proposes two test methods, one based on radar and the other on a photoelectric switch.

Experimental Verification Based on Radar

The objective is to verify whether the dynamic position obtained from the tags matches the "true" position of the vehicle measured by the radar. The experiment is set up as shown in Figure 5. The experimental vehicle is equipped with radar and an RFID reader and its antenna. The radar is installed on the vehicle's front fender guard. The radar wave beam is oriented forward in the direction of travel. The antenna is installed below the fender guard, and the surface of the antenna faces the ground. The tags are installed on the test road, at the end of which is a fixed target to help the radar measure distance. The radar features a millimeter wave with a frequency of 76-77 GHz, a range of up to 180 m, and a resolution of 0.7 m. In the experiment, the vehicle passes over each tag consecutively while accelerating and decelerating several times. The computer on board calculates the distance between the vehicle and the last tag using the proposed approach. The radar measures the distance between the vehicle and the fixed target independently. The results obtained from these two methods are all transferred to the CAN bus and logged in the computer. The comparison of the test results is shown in Figure 6. Note that the estimated distance is zero at the beginning since there is no tag reading and hence nothing to estimate. Starting from the 5th second, tag readings become available and vehicle position estimation begins. The result shows that positions from radar, tags, and estimation match very well.

Experimental Verification Based on Photoelectric Switch

The objective focuses on verifying whether the position estimated from kinematics integration matches the position obtained from the tags. The experiment is set up as shown in Figure 7. The photoelectric switch consists of a transmitter which is fixed at the roadside and a receiver which is fixed on the outside of the vehicle. The transmitter is placed in the same cross section as a tag, while the receiver is in the same cross section as the RFID antenna. As such, when the receiver moves with the vehicle and is aligned with the transmitter, both the RFID and the switch are triggered simultaneously. Starting from this instant, the on-board computer begins to estimate vehicle position using kinematics integration. Meanwhile, another source of position information is obtained from the RFID tags. Figure 8 shows the result of one of the tests. In this test, the error of position is about 5.4% in the first 30 m, probably due to acceleration; when the speed is relatively stable, the error drops to around 2.5%. It is also noticeable that, as the estimation goes on, the accumulated error increases. Further tests with a lower maximum speed (e.g., 36 km/h) reduced the above errors to 3.1% and 1.8%, respectively. The error in position is mainly derived from the accumulation error caused by velocity inaccuracy, especially when the vehicle is accelerating or decelerating. Accordingly, a calibration algorithm is derived using the least squares method: the position error e is modeled as a linear function of the vehicle acceleration a and is subtracted from the integrated position to obtain the calibrated position. The two coefficients are estimated as −1.79 and 0.0613. After calibration, the errors in the first test drop to 0.07% and 0.66%, respectively. Limited by time and resources, this research only conducted the above simple, straightforward tests. Nevertheless, the test results revealed that the proposed RFID approach is promising in providing accurate vehicle positioning in a dynamic process. Before large-scale applications, it is suggested that further tests be performed in more realistic environments (e.g., involving multiple lanes and mixed traffic) with better knowledge of ground truth.

Example Applications in Connected Vehicles

Allowing vehicle-to-vehicle and vehicle-to-infrastructure communication, connected vehicle technology opens the door to many innovative applications, such as intelligent cruise control [28], that transform safety and throughput. Presented below are two conceptual paradigms in which the above RFID positioning approach helps achieve the goals of connected vehicle technology.

Vehicle Operational Control

With accurate information about the positions and speeds of connected vehicles, it is feasible to synchronize these vehicles on one or more special, managed lanes at high speeds without compromising safety. Such a paradigm is illustrated in Figure 9. RFID tags on the ground pinpoint the location of each vehicle, which is equipped with a cooperative driving assistance system. To ensure safety in the lateral direction, the deviation y of vehicle i from the lane center is translated to a potential field U(y) [31,32] that the vehicle needs to overcome. This potential field is imagined in the lateral direction as bumps along the lane lines, road edge, and center line. By taking the first derivative of U with respect to y, one obtains the correction force F = −∂U/∂y that is necessary to steer the vehicle back on track; this force can be imagined as a spring between the vehicle and the lane-line bump, and it can be implemented in the actuator that controls vehicle steering. Still in the lateral direction, a vehicle j in the vicinity poses a safety hazard. As a result, driver i may choose to "shy" away, and this effect becomes more remarkable when j is a heavy truck. Similar to the treatment of lane deviation, the mechanism to avoid parallel running can be created by imagining a repulsive force, which is illustrated as a spring between vehicles i and j.
Such a repulsive force can be derived from the potential field of vehicle j as perceived by vehicle i. To ensure safety in the longitudinal direction, a mechanism to maintain safe car following is essential. Hence, the safety hazard in the longitudinal direction can be represented as a potential field of the leading vehicle as perceived by the following vehicle. Therefore, the repulsive force that the leading vehicle imposes on the following vehicle to keep a safe distance can be generically derived from this potential field as above; a more concrete form, expressing the operational control as a function a_i = f(A_i, v_i, V_i, s_ij, s*_ij), can be found in [33], where a_i is the operational control (acceleration or deceleration) of vehicle i, A_i is the maximum acceleration desired by driver i when starting from standing still, v_i is the speed of vehicle i, V_i the desired speed of driver i, s_ij is the actual spacing between vehicle i and its leading vehicle j, and s*_ij is the desired value of s_ij.

Level IV Intersection Control

An intersection is a point in transportation systems where two or more streams of traffic meet and share roadway capacity. To ensure traffic safety, three levels of intersection control are used conventionally. Level I control does not use any physical device to assign priority to traffic; rather, it relies on each driver understanding and observing basic rules specified in Driver's Manuals, such as yielding to vehicles on the right and to vehicles already in the intersection. If a safety hazard poses an issue (typically identified through intersection sight triangle analysis [34,35]), Level II control may be considered, which implements YIELD and/or STOP signs to resolve conflict [36]. Currently, the ultimate form of intersection control is Level III, i.e., intersection signalization [36], which alternately assigns right-of-way to specific movements through signal indications such as Green, Yellow, and Red. Though potentially capable of reducing certain types of crash, Level III control may give rise to other types of collision and negatively impact efficiency. For example, pre-timed signal control ignores the dynamics of approaching traffic, so green time may be wasted on approaches with light or no traffic; even though actuated control is made traffic-aware, it is not flexible enough to accommodate demands with varying patterns, especially issues caused by unnecessary calls, mandatory minimum green, and arbitrary max out. Interestingly, the above three levels of control seem to be no match in many aspects for old-fashioned traffic control by a police officer. For example, the officer is able to watch a vehicle clearing an intersection before releasing traffic from a conflicting approach. By clearing before releasing, conflicting vehicles are well protected. In addition, waste of time is minimized since right-of-way is switched right after clearance. For another example, the officer has full flexibility to assign a relatively long green time to an approach to match its demand, or to skip an approach if there is no demand. Moreover, the officer may optimize traffic heuristically on a cycle-by-cycle basis to achieve the overall success of competing objectives such as safety, throughput, and reducing delay. The only drawback of this officer-directing-traffic paradigm is that it requires the presence of a trained officer around the clock, which is impractical. Fortunately, the advent of connected vehicles, combined with sound positioning technology, makes it possible to reproduce this safe yet efficient paradigm electronically, which can be called Level IV control.
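As a toy illustration of the "electronic police officer" idea, the sketch below mimics a roadside unit that serves one approach at a time, skips approaches with no demand, and withholds a conflicting release until the previously released vehicle reports that it has cleared the intersection. The vehicle records, approach names, and service rule are all invented for this example; the paper only describes the concept.

```python
from collections import deque

APPROACHES = ("NB", "SB", "EB", "WB")   # hypothetical four-leg intersection

class ToyRse:
    """Release one vehicle at a time; clear before releasing; skip empty approaches."""
    def __init__(self):
        self.queues = {a: deque() for a in APPROACHES}   # waiting vehicles per approach
        self.in_intersection = None                      # vehicle currently released

    def report_position(self, vehicle_id, approach, distance_m, speed_mps):
        """Called as RFID-derived position/speed updates arrive from approaching vehicles."""
        self.queues[approach].append((vehicle_id, distance_m, speed_mps))

    def vehicle_cleared(self, vehicle_id):
        """Called when the released vehicle reports it has left the conflict area."""
        if self.in_intersection == vehicle_id:
            self.in_intersection = None

    def next_instruction(self):
        """Issue GO to one waiting vehicle, HOLD while the intersection is occupied."""
        if self.in_intersection is not None:
            return ("HOLD", self.in_intersection)
        busiest = max(APPROACHES, key=lambda a: len(self.queues[a]))
        if not self.queues[busiest]:
            return ("IDLE", None)                        # no demand on any approach
        vehicle_id, _, _ = self.queues[busiest].popleft()
        self.in_intersection = vehicle_id
        return ("GO", vehicle_id)

rse = ToyRse()
rse.report_position("car-1", "NB", 40.0, 12.0)
rse.report_position("car-2", "EB", 55.0, 10.0)
print(rse.next_instruction())   # ('GO', 'car-1')
print(rse.next_instruction())   # ('HOLD', 'car-1') until car-1 clears
rse.vehicle_cleared("car-1")
print(rse.next_instruction())   # ('GO', 'car-2')
```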
Figure 10 illustrates such a paradigm, where each vehicle is able to talk to other vehicles through on-board equipment (OBE) and communicate with the roadside equipment (RSE) at the intersection. RFID positioning can help by providing real-time, accurate vehicle positions and speeds, with which the RSE can serve as an "electronic police officer" to direct traffic. More specifically, the RSE can send individualized instructions to each driver regarding stop/go and travel speed. Within the RSE, the internal logic dynamically optimizes traffic based on current demands and vehicle positions, resolves conflict, issues customized commands to each driver, monitors vehicle status, and updates instructions accordingly. Note that the above discussion concerns only the technical feasibility of Level IV control without complicating the problem with legal and moral issues.

Conclusions

This paper proposes an RFID approach as a helpful alternative for positioning in connected vehicle applications where GPS is not available or is of poor quality. This approach installs RFID tags on the road surface and on-board tag readers in vehicles. When a reader passes over a tag, the reader receives the position information stored in the tag. To fill gaps between tags, estimates have to be made based on the latest position update from the tags. As such, a kinematics integration method is proposed to serve this purpose. When vehicles accelerate or decelerate, their speeds are changing, which affects the accuracy of the estimation method. Error of this nature can be diminished by applying the proposed calibration algorithm. Road experiments were carried out to validate the RFID-based positioning approach. One type of experiment involves both radar and an RFID reader on board. The radar is used to provide "true" positions of the test vehicle, against which estimates from RFID-based positioning are compared. The result shows a good match between the two sources of vehicle positions. The other type of experiment focuses on verifying whether the position estimated from the kinematics integration matches the position obtained from the tags. A photoelectric switch is used to trigger the estimation of vehicle position based on the latest tag update. The results indicate that the error of position is about 5.4% during the acceleration or deceleration process and around 2.5% when the speed is relatively stable. With the help of the calibration algorithm, the errors drop to 0.07% and 0.66%, respectively. Before large-scale applications, further tests are recommended in more realistic environments with better knowledge of ground truth. RFID-based positioning appears promising in connected vehicle applications due to its low cost and reasonable accuracy. Two conceptual applications are conceived in this paper. One application deals with vehicle operational control, where RFID positioning provides accurate vehicle positions to enable the prediction of safety hazards. The other application conceives a Level IV intersection control in which RFID positioning makes it possible to conduct traffic with an "electronic police officer".
Cistanche Species Mitogenomes Suggest Diversity and Complexity in Lamiales-Order Mitogenomes

The extreme diversity and complexity of angiosperms is well known. Despite the fact that parasitic plants are angiosperms, little is known about parasitic plant mitogenomic diversity, complexity, and evolution. In this study, we obtained and characterized the mitogenomes of three Cistanche species (holoparasitic plants) from China to compare the repeats, segment duplication and multi-copy protein-coding genes (PCGs), to clarify the phylogenetic and evolutionary relationships within the Lamiales order, and to identify the mitochondrial plastid insertions (MTPT) in Cistanche mitogenomes. The results showed that the mitogenome sizes of the three Cistanche species ranged from 1,708,661 to 3,978,341 bp. The Cistanche mitogenomes encode 75–126 genes, including 37–65 PCGs, 31–58 tRNA genes and 3–5 rRNA genes. Compared with other Lamiales and parasitic species, the Cistanche species showed extremely high rates of multi-copy PCGs, ranging from 13 to 58 percent of the total number of PCGs. In addition, 37–133 simple sequence repeats (SSRs) were found in these three mitogenomes, the majority of which were the mononucleotides adenine/thymine. The interspersed repeats mainly contained forward and palindromic repeats. Furthermore, the segment-duplication sequence size ranged from 199,584 to 2,142,551 bp, accounting for 24.9%, 11.7% and 53.9% of the Cistanche deserticola, Cistanche salsa and Cistanche tubulosa mitogenomes, respectively. The Ka/Ks analysis suggested that atp4, ccmB, ccmFc and matR were probably positively selected during Lamiales evolution. Comparison with the Cistanche plastomes suggested the presence of MTPTs. Moreover, 6–12 tRNA gene fragments, 9–15 PCG fragments and 3 rRNA gene fragments in the Cistanche mitogenomes were found in the MTPT regions. This work reports the Cistanche species mitogenomes for the first time, which will be invaluable for studying the mitogenome evolution of the Orobanchaceae family.

Comparison of Multi-Copy Protein-Coding Genes (PCGs) in the Three Cistanche Species and Eight Other Lamiales and Six Parasitic Species Mitogenomes

We compared the mitogenome size, GC content and PCG copy numbers of the Cistanche species and other Lamiales species with published mitogenomes. The published Lamiales species included Boea hygrometrica, Mimulus guttatus, Ajuga reptans, Salvia miltiorrhiza, Hesperelaea palmeri, Castilleja paramensis, Utricularia reniformis and Rotheca serrata (Table S2). Their GC contents ranged from 43.27% to 45.5%, which were fairly similar (Tables 1 and S2). However, the sizes of these mitogenomes were extremely variable (Figure 1, Tables 1 and S2). The C. tubulosa mitogenome (3,978,341 bp) was the largest; its size was 11.3 times that of the smallest mitogenome (A. reptans, 352,069 bp) (Tables 1 and S2). The Cistanche mitogenomes were the largest among all the mitogenomes found in the Lamiales order. To determine whether there were any correlations between the PCG copy numbers and the mitogenome size, the PCG copy numbers from those 11 mitogenomes were compared. The duplication of PCGs was observed in all the Lamiales mitogenomes. The degree of duplication was especially high in the Cistanche genus (Figure 1). The mitochondrial PCGs were divided into two categories, core genes and variable genes, according to a previous study [9]. Among the Cistanche mitogenomes, the proportion of duplicated core genes ranged from 13% to 58%, in the following order: C. tubulosa (58%), C.
salsa (25%) and C. deserticola (13%) (Figure 1 and Table S4). The proportion of duplicated variable genes ranged from 0% to 35%, in the following order: C. tubulosa (35%), C. deserticola (6%) and C. salsa (0%) (Figure 1 and Table S4). Among the other Lamiales species, the duplication of core genes was only present in the H. palmeri and U. reniformis mitogenomes (Figure 1). Furthermore, the duplication of variable genes was present in the E. guttata, H. palmeri and U. reniformis mitogenomes (Figure 1).

We also compared the size, GC content and number of PCG copies of the mitogenomes from the Cistanche species and several parasitic plants, including C. paramensis, Cynomorium coccineum, Epirxanthes elongata, Lophophytum mirabile, Viscum album and V. scurruloideum (Table S5). Their GC contents ranged from 43.52% to 47.4%, which were fairly similar (Tables 1 and S5). However, the sizes of these mitogenomes varied greatly (Figure S4, Tables 1 and S5). The smallest mitogenome was from V. scurruloideum (65,873 bp). The Cistanche mitogenomes remained the largest among the parasitic plants (Figure S4, Tables 1 and S5). It is worth noting that the mitogenome size of C. coccineum, a holoparasitic plant, was also over 1 Mb (Table S5). In addition to C. paramensis, V. scurruloideum and V. album, the duplication of PCGs was also present in the mitogenomes of the parasitic plants (Figure S4 and Table S6). Among the Cistanche mitogenomes, the proportion of duplicated core genes ranged from 13% to 63%, in the following order: C. tubulosa (63%), C. salsa (25%) and C. deserticola (13%) (Figure S4 and Table S6). Furthermore, the proportion of duplicated variable genes ranged from 0% to 35%, in the following order: C. tubulosa (35%), C. deserticola (6%) and C. salsa (0%) (Figure S4 and Table S6).

Repeats and Segment Duplication Analysis

The types and numbers of repeats varied among the three mitogenomes. SSRs are sequences composed of repeats with motifs 1 to 6 bp in length. Among the Cistanche mitogenomes, the number of SSRs ranged from 37 to 133, in the following order: C. tubulosa (133), C.
C. deserticola (44) and C. salsa (37). Polyadenine or polythymine repeats were the most prevalent mononucleotide SSRs (Figure 2A and Table S10). This result was in agreement with the fact that the AT content (55.41-55.43%) was higher than the GC content (44.57-44.59%) in the Cistanche mitogenomes (Table 1). Next, we detected the interspersed repeats with REPuter. The interspersed repeats were divided into four types: forward, palindromic, reverse and complement repeats. In the Cistanche mitogenomes, the forward and palindromic repeats were the main types of interspersed repeats (Figure 2B and Tables S11-S13). Only 353 interspersed repeats were detected in the C. salsa mitogenome (Table S12), whereas more than 1300 were detected in C. tubulosa (Table S13). In addition to the SSRs and interspersed repeats, we also detected tandem repeats with lengths >30 bp and similarities >90%. The number of tandem repeats ranged from 4 to 26 in the Cistanche mitogenomes (Figure 2C). The number of repeat units ranged from 1.8 to 2.5 copies per tandem repeat, and the repeat sizes ranged from 21 to 127 bp (Tables S14-S16). The segment-duplication identification results showed that the total length of duplicated segments ranged from 199,584 bp to 2,142,551 bp per mitogenome (Table 1), accounting for 24.9%, 11.7% and 53.9% of the lengths of the C. deserticola, C. salsa and C. tubulosa mitogenomes, respectively (Figure 3). In the C. deserticola mitogenome, 39 alignments were identified, with lengths ranging from 5078 bp to 38,025 bp (Table S17). By contrast, only 14 alignments were identified in the C. salsa mitogenome, with lengths ranging from 5385 bp to 23,085 bp (Table S18). It is worth noting that 168 alignments were found in the C. tubulosa mitogenome, with lengths ranging from 5169 bp to 64,106 bp (Table S19). These repeats and segment duplications might have promoted genome rearrangement and contributed to the variations in genome size.
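As a concrete illustration of how a segment-duplication proportion of this kind can be derived from a self-alignment, a minimal sketch is given below. The file name, the assumption of standard BLASTN tabular output (outfmt 6), the identity threshold and the absence of overlap merging are illustrative assumptions, not the exact pipeline used in this study (the actual tools and cut-offs are described in the Methods section).

# Minimal sketch (illustrative only): estimate the segment-duplicated fraction of a
# mitogenome from a BLASTN self-alignment written in tabular format (-outfmt 6).
def duplicated_fraction(blast_tab_path, genome_length, min_len=5000, min_identity=90.0):
    """Sum the lengths of alignments passing the filters and divide by genome length."""
    total = 0
    with open(blast_tab_path) as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            # outfmt 6 columns: qseqid sseqid pident length mismatch gapopen
            #                   qstart qend sstart send evalue bitscore
            pident = float(fields[2])
            length = int(fields[3])
            qstart, qend = int(fields[6]), int(fields[7])
            sstart, send = int(fields[8]), int(fields[9])
            # skip the trivial full-length match of the genome against itself
            if (qstart, qend) == (min(sstart, send), max(sstart, send)):
                continue
            if length > min_len and pident >= min_identity:
                total += length
    return total / genome_length

# Example call with an assumed file name and the reported C. tubulosa genome size.
print(duplicated_fraction("c_tubulosa_self_blast.tsv", 3_978_341))

Dividing the summed alignment length by the genome length yields a percentage comparable to those reported above, although the published values come from the authors' own pipeline, whose handling of overlapping and reciprocal hits is not detailed here.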
Phylogenetic Analysis by Mitogenome Sequences The phylogeny was reconstructed from the shared mitochondrial PCGs of 11 Lamiales mitogenomes using the maximum-likelihood (ML) method. The sister genus of Cistanche was Castilleja, with a bootstrap score (BS) of 100 (Figure 4). These two genera belong to the Orobanchaceae family. The Cistanche species were distributed in two main clades. The first clade (BS: 100) was formed by C. deserticola and C. salsa, which have similar mitogenome sizes. The second clade contained C. tubulosa (BS: 100). These two clades were subsequently clustered together (BS: 100) (Figure 4). The bootstrap scores were high for all the branches, indicating the high reliability of the phylogenetic tree. Moreover, the phylogenetic relationships of the Cistanche species reconstructed from the mitogenomes were congruent with those obtained from the plastid genomes, as shown in our previous studies. The Substitution Rate of Mitochondrial PCGs The shared mitochondrial PCGs were used to estimate the nucleotide-substitution rates of the mitochondrial PCGs in Lamiales. For each of the 28 PCGs, the pairwise Ka/Ks ratios were calculated. We found that the Ka/Ks ratios of four PCGs were over 1.0 in most of the species (Figure 5 and Table S20). These four PCGs were atp4, ccmB, ccmFc and matR, suggesting potential positive selection. However, most of the mitochondrial PCGs showed low Ka/Ks ratios, indicating possible purifying selection. In particular, the Ka/Ks ratios of atp9, cox1, cox3 and nad4L were relatively low (Figures S8-S31). (Part of the Figure 5 caption: rpl23, rpl36, rps1, rps3, rps4, rps7, rps10, rps11, rps12, rps13, rps14, rps19, sdh3 and sdh4; the species in the red box are the three Cistanche species.) Genome Expansion in C. tubulosa A substantial portion of plant mitogenomes may be made up of small repetitive sequences [26]. For example, low-complexity repetitive DNA made up 5-10% of the sequences in the Citrullus and Cucurbita mitogenomes [27]. Furthermore, similar proportions of low-complexity repetitive DNA sequences were observed in other plants [9,28]. In our study, the repeat sequences of C. tubulosa, C. salsa and C. deserticola were 174,618 bp, 32,375 bp and 149,945 bp long, accounting for 4.4%, 1.89% and 8.06% of the mitogenome sizes, respectively. These results were consistent with previous studies, which showed that repeat sequences can cause changes in mitogenome size [9,26-28]. C. deserticola possessed the largest proportion of repeat sequences among the three Cistanche mitogenomes in our study. However, the mitogenome size of C. deserticola was close to that of C. salsa, which means that factors other than small repetitive sequences may also play important roles in determining mitogenome size. Segment duplication and multi-copy protein-coding genes were two other factors causing mitogenome expansion. In the C. tubulosa mitogenome, the duplicated segments were 2,142,511 bp in total length, accounting for 53.9% of the whole mitogenome size.
Similarly, Sloan et al. reported that 4.6 Mb of repeats were identified in the mitogenome of Silene conica, accounting for 40.8% of the whole mitogenome [13]. Furthermore, multi-copy protein-coding genes might result in mitogenome expansion. In the C. tubulosa mitogenome, the proportions of duplicated core genes and variable genes were 58% and 35%, respectively. In other typical large mitogenomes, such as those of cucurbits, some protein-coding genes (e.g., rps19) are also present in two copies in both Citrullus and Cucurbita [14]. In C. tubulosa, protein-coding genes of complex I (nad4, nad4L, nad6 and nad7), complex IV (cox1 and cox2), complex V (atp4, atp8 and atp9), cytochrome c biogenesis (ccmFc and ccmFn), the ribosomal protein small subunit (rps3, rps4 and rps14), intron maturase (matR) and SecY-independent transport (mttB) all had multiple copies. We speculated that this was related to the fact that holoparasitic plants do not conduct photosynthesis. In addition, environmental stress might up-regulate the expression of some related genes. Previous studies suggested that the plant mitochondrial electron-transport chain could improve plant performance under stressful environmental conditions [29]. Unlike C. deserticola and C. salsa, C. tubulosa experiences salt stress and cold stress rather than drought stress [30-32]. This might lead to the duplication of genes in C. tubulosa. In summary, small repeats, segment duplication, and multi-copy genes were the main causes of the mitogenome expansion. The Presence of MTPTs In angiosperm plants, MTPTs are almost always present [33]. In our study, we found 158, 128 and 139 MTPT sequences in C. deserticola, C. salsa and C. tubulosa, respectively. They were 35,165 bp, 28,963 bp and 26,911 bp in total length, accounting for 1.89%, 1.64% and 0.68% of the C. deserticola, C. salsa and C. tubulosa mitogenomes, respectively. Cheng et al. reported that 26.87 kb of MTPT fragments were found in Suaeda glauca, accounting for 5.18% of the mitogenome [34]. In addition, the MTPTs discovered in Salix suchowensis account for 11.3% (17.5 kb) of the plastome and 2.8% (18.1 kb) of the mitogenome [35]. Interestingly, the proportion of MTPTs in the Cistanche mitogenomes was relatively low. In addition, our results showed that, in the MTPT sequences, the plastid PCGs and ribosomal RNA genes were present only as partial sequences. By contrast, the plastid tRNA genes had complete sequences in the MTPT fragments. The partial loss of plastid PCGs and ribosomal RNAs suggests that they might no longer function in the MTPT sequences. This supports the theory that fragments of DNA transferred from plastomes usually become nonfunctional pseudogenes, while some tRNA genes still perform normal functions [36]. Sampling, DNA Extraction, and Genome Sequencing Fresh samples of C. deserticola, C. salsa and C. tubulosa were collected from the Alxa League (Inner Mongolia Autonomous Region), Tacheng City (Xinjiang Uygur Autonomous Region) and Hotan Prefecture (Xinjiang Uygur Autonomous Region), China (Table S1). The samples were identified by Professor Yulin Lin and stored at the Herbarium of the Chinese Academy of Medical Sciences and Peking Union Medical College (under specimen registry numbers CMPB13484, CMPB13485 and CMPB13487). Total DNA extraction was carried out using a plant genomic DNA extraction kit (Tiangen Biotech, Beijing, China). Using the NEBNext® library construction kit [37], a DNA library with an insert size of 400 bp was constructed. Subsequently, the Illumina HiSeq 4000 sequencing platform was used for sequencing.
The sequencing produced 5.44, 5.16 and 4.34 Gb of raw data, respectively (Table S1). Trimmomatic was used to filter the raw data to obtain clean data [38]. In total, 4.88 Gb, 4.62 Gb and 3.92 Gb of clean data were obtained, respectively. The plant samples were also used for Oxford Nanopore sequencing. Library construction, quality detection and sequencing were conducted following the manufacturer's standard protocol. Consequently, 78.96 Gb, 39.19 Gb and 59.52 Gb of raw data were obtained, and 48 Gb, 31.96 Gb and 54.56 Gb remained after filtering and quality control (Table S1). Mitogenome Assembly and Annotation Eight Lamiales mitogenomes were downloaded as references from NCBI (Table S2). We initially enriched the mitogenome-related clean reads from the Oxford Nanopore data using USEARCH [39]. The filtered Nanopore reads were assembled into contigs using NextDenovo v2.4.0 (available online: https://github.com/Nextomics/NextDenovo (20 December 2020)) with the default parameters. The obtained contigs were then used as references, named the Cistanche Structure Contigs. Illumina paired-end reads were mapped back to the Cistanche Structure Contigs using Minimap2 [40] and SAMtools [41]. We extracted the filtered Illumina paired-end reads and assembled them into contigs using SPAdes v3.10.1 [42]. By comparing the short-read and long-read assemblies using Minimap2 [40], we preliminarily determined which contig was the putative mitochondrial molecule. The assembled contigs obtained above were corrected with the Illumina paired-end reads using NextPolish v1.3.1 [43]. Draft mitochondrial contigs were processed further following the steps below. First, we compared the sequences with those in GenBank using the BLASTn program to determine whether they were mitochondrial reads [44]. Second, we annotated the mitogenomes using MITOFY to determine whether the sequences contained mitochondrial genes [14]. Analysis of Simple Sequence Repeats (SSRs), Tandem Repeats, Interspersed Repeats, and Segment Duplication The online tool MISA (available online: http://webblast.ipk-gatersleben.de/misa/ (15 January 2021)) was used to identify the SSRs in the mitochondrial genomes. These SSRs included mono-, di-, tri-, tetra-, penta- and hexanucleotide motifs with minimum repeat numbers of 10, 6, 5, 5, 5 and 5, respectively. With the default parameters, Tandem Repeats Finder [47] was used to identify tandem repeats. In addition, REPuter was used to identify forward, reverse, palindromic and complementary repeat sequences [48]. The minimum repeat size was set to 30 bp and the identity of the repeat units was ≥90%. Segment duplications were identified by comparing the mitochondrial genome to itself using BLASTN with an e-value threshold of 1 × 10⁻⁵. All alignments with lengths > 5000 bp and scores > 90% were considered segment duplications when calculating the segment-duplication number. TBtools was used to visualize the BLASTn results [46]. Phylogenetic Analyses and Estimation of Nucleotide-Substitution Rates For the phylogenetic analyses, the DNA sequences of the shared mitochondrial PCGs from 11 Lamiales species, including the Cistanche species in this work (Table S2), were used for tree construction. The mitogenomes of the other 8 Lamiales species were downloaded from the GenBank Organelle Genome Resources database. PhyloSuite (v1.2.1) was used to extract the shared mitochondrial PCGs from the Lamiales species [49]. MAFFT (v7.450) was used to align the corresponding amino-acid sequences [50].
The aligned amino-acid sequences were concatenated and used to construct the phylogenetic trees through the maximum-likelihood (ML) method, using Solanum lycopersicum (MF034193) and Nicotiana tabacum (NC_006581.1) as outgroups. The bootstrap analysis was performed with 1000 replicates. We used the yn00 program in PAML v4.9 [51] to calculate the nonsynonymous substitution rate (dN) and the synonymous substitution rate (dS) for the PCGs with the F3×4 codon model. Conclusions In conclusion, this study provides a first insight into the structural diversity and complexity of Cistanche mitogenomes. Our results answered the three scientific questions that were posed in the introduction. First, the complete mitogenomes of C. deserticola, C. salsa and C. tubulosa were successfully assembled, which is a significant step forward in the study of Cistanche mitogenomes. Second, the C. tubulosa mitogenome was close to 4 Mb in size, indicating a significant expansion. Furthermore, the C. tubulosa mitogenome differed significantly from those of C. deserticola and C. salsa in terms of the numbers of duplicated PCGs and segment duplications. The three Cistanche species formed one clade, close to the other Orobanchaceae species. Additionally, the topology of the Lamiales in the present study was highly similar to that in the APG IV system. Third, MTPT sequences were identified in the three Cistanche mitogenomes, containing partial PCG and rRNA fragments and complete tRNA genes from the plastomes. The results of this study therefore reveal many fascinating aspects of mitogenome diversity and complexity. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/genes13101791/s1. Figure S1: Mitochondrial genome map of Cistanche deserticola. The outside circle shows the GC content. The inside circle represents the protein-coding genes, tRNAs and rRNAs; Figure S2: Mitochondrial genome map of Cistanche salsa. The outside circle shows the GC content. The inside circle represents the protein-coding genes, tRNAs and rRNAs; Figure S3: Mitochondrial genome map of Cistanche tubulosa. The outside circle shows the GC content. The inside circle represents the protein-coding genes, tRNAs and rRNAs; Figure S4: Genome size and protein-coding gene content of 9 parasitic plant mitochondrial genomes; Figure S5: Collinearity analysis between the C. deserticola mitochondrial genome and plastid genome; Figure S6: Collinearity analysis between the C. salsa mitochondrial genome and plastid genome; Figure S7: Collinearity analysis between the C. tubulosa mitochondrial genome and plastid genome; Table S1: Sampling and sequencing data information; Table S2: Mitogenome information of 8 Lamiales species and 2 outgroups; Table S3: Mitochondrial genome information of Chenopodiaceae species used in HGT events prediction; Table S4: Comparison of mitochondrial protein-coding gene copies among 11 Lamiales mitogenomes; Table S5: Mitogenome information of parasitic plants; Table S6: Comparison of mitochondrial protein-coding gene copies among 9 parasitic plant mitogenomes; Table S7: List of potential MTPTs of C. deserticola; Table S8: List of potential MTPTs of C. salsa; Table S9: List of potential MTPTs of C. tubulosa; Table S10: Types and numbers of SSRs in the Cistanche mitogenomes; Table S11: Interspersed repeat sequences identified in C. deserticola mitogenome; Table S12: Interspersed repeat sequences identified in C. salsa mitogenome; Table S13: Interspersed repeat sequences identified in C.
tubulosa mitogenome; Table S14: Tandem repeat sequences identified in the mitogenome of C. deserticola; Table S15: Tandem repeat sequences identified in the mitogenome of C. salsa; Table S16: Tandem repeat sequences identified in the mitogenome of C. tubulosa; Table S17: Segment duplication sequences in C. deserticola mitogenome; Table S18: Segment duplication sequences in C. salsa mitogenome; Table S19: Segment duplication sequences in C. tubulosa mitogenome; Table S20: Pairwise Ka/Ks ratios in different mitochondrial genes of 11 Lamiales plants.
5,308.8
2022-10-01T00:00:00.000
[ "Biology" ]
Computational Intelligence in Data-Driven Modelling and Its Engineering Applications 2020 Complex engineering systems are difficult to model accurately due to their high nonlinearity and the large disturbances and uncertainties introduced by them. In many cases, conventional mathematical models, such as differential equations, that can accurately describe the complex systems and can be exploited in real-life applications do not exist. However, with the fast development of advanced sensing, measurement, and data collection technologies, a large amount of data that represent the input-output relationships of the systems has become available. This makes data-driven modelling (DDM) possible and practical. Data-driven modelling aims at information extraction from data and is normally used to elicit numerical predictive models with good generalization ability, which can be viewed as regression problems in mathematics. It analyses the data that characterize a system to find relationships among the system state variables (input, internal, and output variables) without taking into account explicit knowledge about physical behaviors. Many paradigms utilized in DDM have been established based on statistics and/or computational intelligence. For instance, artificial neural networks (ANNs) and fuzzy rule-based systems (FRBSs) serve as fundamental model frameworks, which are alternatives to statistical inference methods, while evolutionary algorithms (EAs), swarm intelligence (SI), and machine learning (ML) methods provide learning and optimization abilities for calibrating and improving the intelligent or statistical models. In recent years, DDM has found widespread applications, ranging across machinery, manufacturing, materials, power and energy systems, transport, and so on. This special issue intends to bring together the state-of-the-art research, applications, and reviews of DDM techniques. It aims at not only stimulating deep insights into computational intelligence approaches in DDM but also promoting their potential applications in complex engineering problems. This special issue received 27 manuscripts, and 9 high-quality papers have been accepted and published (a 33% acceptance rate). The accepted papers involve a variety of data-driven modelling and data analytics techniques and contribute to a wide range of application areas. A brief introduction to each contribution is provided in the following paragraphs. O. Meza-Cruz et al. applied the techniques of ANNs and mathematical symmetry groups in modelling a thermochemical reactor of a solid-gas cooling system, where barium chloride (BaCl2) is the solid and ammonia (NH3) is the refrigerant. It was found that using an alternating group of mathematical symmetry in the input data of the ANN helped improve modelling precision, and using the permutations of the mathematical symmetry group in the input data helped improve the convergence speed of the training algorithm. L. Matindife et al. designed a deep learning-based approach for a smart home application, i.e., classification of appliances, specifically for some equal or very close power specification electronic appliances (EVPSAs). They evaluated three deep learning methods for nonintrusive load monitoring (NILM) disaggregation, including the multiple parallel structures convolutional neural networks (MPS-CNNs), the recurrent neural network (RNN) with parallel dense layers for a shared input, and a hybrid convolutional network. Then, CNN and long short-term memory (LSTM) based networks were proposed for classification. J. Liu et al.
applied ANN and neuro-fuzzy models to a sport engineering problem, which relates to modelling rugby players' performance under different moisture conditions. The developed intelligent models showed good accuracy despite using only a small amount of training data. It was anticipated that the models would help the design of training programmes and better preparation for rugby games in wet conditions. In W. He et al.'s work, predictive models for nitrogen oxide emission were constructed and validated. In their models, a CNN was employed to extract features from multidimensional data, while the LSTM network was used to approximate the relationships among different time steps. The combination of CNN and LSTM showed better efficiency and accuracy than the baseline models. The developed models would be beneficial for providing reliable information for NOx risk assessment and management. In H. Zhao and B. Chen's work, the complex phenomenon of rockburst was studied. A data-driven method using a CNN was proposed to predict the potential for rockburst. The method was compared with conventional ANNs and showed better performance. It was expected that such a model would help evaluate the potential for rockburst in underground rock excavation. Y. Chen et al. tackled the problem of predicting stock prices using data-driven models. They employed the light gradient boosting machine (LightGBM) algorithm and constructed the minimum-variance portfolio of the mean-variance model with a conditional value at risk (CVaR) constraint. The proposed method was validated using China's stock market data between 2008 and 2018 and showed good accuracy. C. Cheng et al. employed data-driven models to assess the working condition of the running gear of high-speed trains, which is complicated due to the existence of random noise in the monitoring data. Their method was developed based on a slow feature analysis-support tensor machine (SFA-STM). It was shown that the developed technique could accurately anticipate the actual health status of the running gear system and outperformed four other types of traditional data-driven models. In the last two papers, W. Wu et al. conducted two studies on text detection. They first proposed a pixelwise technique using instance segmentation for scene text detection. The proposed method showed good performance on common text benchmark problems and did well in cases including text instances with irregular shapes. In their second work, they proposed a new text detector based on weakly supervised learning. The validation results showed that the proposed method works well in scene text detection, especially for curved texts. Conflicts of Interest The editors declare that there are no conflicts of interest regarding the publication of this article. Acknowledgments The guest editors sincerely thank all the authors for their quality contributions to this special issue. The lead guest editor would also like to express deep gratitude to the other coeditors for their great support and cooperation throughout the development of the special issue. Qian Zhang Jun Chen Trung Thanh Nguyen
1,376.2
2018-09-30T00:00:00.000
[ "Computer Science" ]
Convergence in distribution norms in the CLT for non identical distributed random variables We study the convergence in distribution norms in the Central Limit Theorem for non identical distributed random variables that is $$ \varepsilon_{n}(f):={\mathbb{E}}\Big(f\Big(\frac 1{\sqrt n}\sum_{i=1}^{n}Z_{i}\Big)\Big)-{\mathbb{E}}\big(f(G)\big)\rightarrow 0 $$ where $Z_{i}$ are centred independent random variables and $G$ is a Gaussian random variable. We also consider local developments (Edgeworth expansion). This kind of results is well understood in the case of smooth test functions $f$. If one deals with measurable and bounded test functions (convergence in total variation distance), a well known theorem due to Prohorov shows that some regularity condition for the law of the random variables $Z_{i}$, $i\in {\mathbb{N}}$, on hand is needed. Essentially, one needs that the law of $ Z_{i}$ is locally lower bounded by the Lebesgue measure (Doeblin's condition). This topic is also widely discussed in the literature. Our main contribution is to discuss convergence in distribution norms, that is to replace the test function $f$ by some derivative $\partial_{\alpha }f$ and to obtain upper bounds for $\varepsilon_{n}(\partial_{\alpha }f)$ in terms of the infinite norm of $f$. Some applications are also discussed: an invariance principle for the occupation time for random walks, small balls estimates and expected value of the number of roots of trigonometric polynomials with random coefficients. Introduction We consider a sequence of centred independent random variables Z k ∈ R d , k ∈ N with covariance matrixes σ i,j k = E(Z i k Z j k ) and we look to Our aim is to obtain a Central Limit Theorem as well as Edgeworth developments in this framework. The basic hypotheses are the following. We assume the normalization condition n k=1 σ k = I d (1.2) where I d ∈ M d×d is the identity matrix. Moreover we assume that for each p ∈ N there exists a constant C p ≥ 1 such that Let f k,∞ denote the norm in W k,∞ , that is the uniform norm of f and of all its derivatives of order less or equal to k. First, we want to prove that where γ d (x) = (2π) −d/2 exp(− 1 2 |x| 2 ) is the density of the standard normal law. This corresponds to the Central Limit Theorem (hereafter CLT). Moreover we look for some functions (polynomials) ψ k : R d → R such that for N ∈ N and for every f ∈ C (1.5) This is the Edgeworth development of order N . In the case of smooth test functions f (as it is the case in (1.5)), this topic has been widely discussed and well understood: such development has been obtained by Sirazhdinov and Mamatov [21] in the case of identically distributed random variables and then by Götze and Hipp [16] in the non identically distributed case. A complete presentation of this topic may be found in the book of Battacharaya and Rao [12]. It it worth to mention that the classical approach used in the above papers is based on Fourier analysis. In particular, the coefficients ψ k in the above development are given as inverse Fourier transform of some suitable functions, so the expression of ψ k is not completely transparent and its explicit computation requires some effort. In our paper we use a different approach based on the Lindemberg method for Markov semigroups (this is inspired from works concerning the parametrix method for Markov semigroups in [9]). This alternative approach is convenient for the proof of our main result concerning "distribution norms" (see below). 
But, even in the case of smooth test functions, this allows to obtain slightly more clear and precise results: we prove that ψ k are linear combination of Hermite polynomials of order less or equal to k, whose coefficients are explicit and computed starting with the moments of Z i and G i , G i denoting a Gaussian random variable with the same covariance matrix as Z i . So the computation of these coefficients is easier. Moreover, our estimates hold for each fixed n (in contrast with the ones in the above papers, which are just asymptotic). A second problem is to obtain the estimate (1.5) for test functions f which are not regular, in particular to replace f (N +1)(N +3),∞ by f ∞ . This amounts to estimate the error in total variation distance. In the case of identically distributed random variables, and for N = 0 (so at the level of the standard CLT), this problem has been widely studied. First of all, one may prove the convergence in Kolmogorov distance, that is for f = 1 D where D is a rectangle. Many refinements of this type of result has been obtained by Battacharaya and Rao and they are presented in [12]. But it turns out that one may not prove such a result for a general measurable set D without assuming more regularity on the law of Z k , k ∈ N. Indeed, in his seminal paper [20] Prohorov proved that the convergence in total variation distance is equivalent to the fact that there exists m such that the law of Z 1 + · · · + Z m has an absolutely continuous component. In [3] Bally and Caramellino obtained (1.5) in total variation distance, for identically distributed random variables, under the hypothesis that the law of Z k is locally lower bounded by the Lebesgue measure. We assume this type of hypothesis in this paper also. More precisely we assume that there exists r, ε > 0 and there exists z k ∈ R d such that for every measurable set A ⊂ B r (z k ) P(Z k ∈ A) ≥ ελ(A) (1.6) where λ is the Lebesgue measure. This condition is known in the literature as Doeblin's condition. Under this hypothesis we are able to obtain (1.5) in total variation distance. It is clear that (1.6) is more restrictive than Prohorov's condition. However we prove that in the framework of the CLT for identically distributed random variables, if we have Prohorov's condition we may produce doubling condition as well, just working with the packages Y k = 2(k+1)m i=2km+1 Z i . This allows us to prove Corollary 3.12 which is a stronger version of Prohorov's theorem. Let us finally mention another line of research which has been strongly developed in the last years: it consists in estimating the convergence in the CLT in entropy distance. This starts with the papers of Barron [11] and Johnson and Barron [14]. In these papers the case of identically distributed random variables is considered, but recently, in [13] Bobkov, Chistyakov and Götze obtained the estimate in entropy distance for the case of random variables which are no more identically distributed as well. We recall that the convergence in entropy distance implies the convergence in total variation distance, so such results are stronger. However, in order to work in entropy distance one has to assume that the law of Z k is absolutely continuous with respect to the Lebesgue measure and have finite entropy and this is more limiting than (1.6). So the hypothesis and the results are slightly different. A third problem is to obtain the CLT and the Edgeworth development with the test function f replaced by a derivative ∂ γ f. 
If the law of S n (Z) is absolutely continuous with respect to the Lebesgue measure, this means that we prove the convergence of the density and of its derivatives as well (which corresponds to the convergence in distribution norms). Unfortunately we fail to obtain such a result in the general framework: this is moral because we do not assume that the laws of Z k , k = 1, ..., n are absolutely continuous, and then the law of S n (Z) may have atoms. However we obtain a similar result, but we have to keep a "small error". Let us give a precise statement of our result. For a function f ∈ C m p (R d ) (m times differentiable with polynomial growth) we define L m (f ) and l m (f ) to 3 be two constants such that Our main result is the following: for a fixed m ∈ N, there exist some constants C N ≥ 1 ≥ c N > 0 (depending on r, ε from (1.6) and on C p from (1.3)) such that for every multi-index γ with |γ| = m and for every f ∈ C m p (R d ) (1.8) If the random variables Z k , k ∈ N are identically distributed we succeed to obtain exactly the same result under the Prohorov's condition (see Corollary 3.12). So this is a strictly stronger version of Prohorov's theorem (for m = 0 we get the convergence in total variation). Moreover, such result is used in [6] in order to give invariance principles concerning the variance of the number of zeros of trigonometric polynomials. However we fail to get convergence in distribution norms because L m (f )e −c N ×n appears in the upper bound of the error and L m (f ) depends on the derivatives of f . But we are close to such a result: (1.9) Another way to eliminate L m (f )e −c N ×n is to assume that the law of Z i , i = 1, ..., m are absolutely continuous with the derivative of the density belonging to L 1 . This is done in Proposition 4.2: we prove that for every k ∈ N and every multi-index α so, under these stronger conditions, we succeed to obtain convergence in distribution norms. But the most interesting consequence of our result is given in Theorem 4.1: there we give an invariance principle for the occupation time of a random walk. More precisely we take ε n = n − 1 2 (1−ρ) with ρ ∈ (0, 1) and we prove that, for every ρ ′ < ρ with W s a Brownian motion (so 1 0 1 εn 1 (−εn,εn) (W s )ds converges to the local time of W ). Here the test function is f n = 1 εn 1 (−εn,εn) and this converges to the Dirac function. This example shows that (1.8) is an appropriate estimate in order to deal with some singular problems. The paper is organized as follows. In Section 2 we prove the result for smooth test functions (that is (1.5)) and in Section 3 we treat the case of measurable test functions. In order to do it we use some integration by parts technology which has already been used in [3] and which is presented in Section 3.1. We mention that a similar approach has been used by Nourdin and Poly [18], by using the Γ-calculus settled in [10]. The main result in Section 3 is Theorem 3.8. In Section 4 we treat the two applications mentioned above. Finally we leave for Appendix A the explicit calculus of the coefficients ψ q from (1.5) for q = 1, 2, 3 and in Appendix B we prove a technical result which is used in our development. Although many ideas in our paper come from previous works (mainly from Malliavin calculus), at the end we finish with an approach which is fairly simple and elementary -so we try to give here a presentation which is essentially self contained (even if some cumbersome and straightforward computations are just sketched). 
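Before turning to the formal development, a small numerical illustration may be useful. The following Monte Carlo sketch estimates ε_n(f) = E(f(n^{-1/2} Σ_{i=1}^n Z_i)) − E(f(G)) for one particular choice of law and smooth test function; the shifted-exponential law, the test function sin and the sample sizes are purely illustrative assumptions and are not taken from the paper.

# Minimal Monte Carlo sketch (illustrative, not from the paper): estimate
# eps_n(f) = E[f(n^{-1/2} * sum_i Z_i)] - E[f(G)] for a skewed, centred,
# variance-one law, to illustrate the O(n^{-1/2}) decay discussed above.
import numpy as np

rng = np.random.default_rng(0)

def eps_n(f, n, n_mc=100_000):
    # Z_i: shifted exponentials, so they are centred with unit variance but skewed,
    # and the first Edgeworth correction does not vanish.
    z = rng.exponential(1.0, size=(n_mc, n)) - 1.0
    s = z.sum(axis=1) / np.sqrt(n)
    g = rng.standard_normal(n_mc)
    return f(s).mean() - f(g).mean()

f = lambda x: np.sin(x)          # a smooth bounded test function
for n in (10, 40, 160):
    print(n, eps_n(f, n))        # the magnitude should shrink roughly like n^{-1/2}

For smooth f this is the regime of the Edgeworth development (1.5); the total-variation and distribution-norm results discussed above concern measurable f or derivatives ∂_α f and cannot be read off such a simulation. Note also that for larger n the bias becomes comparable to the Monte Carlo noise unless n_mc is increased.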
Smooth test functions 2.1 Notation and main result We fix n ∈ N and we consider n centred and independent random variables We denote by σ k the covariance matrix of Z k that is We look to Our aim is to compare the law of S n (Z) with the law of S n (G) where G = (G k ) 1≤k≤n denotes n centred and independent Gaussian random variables with the same covariance matrices: This is a CLT result (but we stress that it is not asymptotic). And we will obtain an Edgeworth development as well. We assume that Z k has finite moments of any order and more precisely, In particular, for i = 2 the inequality (2.2) gives Since the covariance matrix of G k is equal to that of Z k , the inequality (2.2) holds for the G k 's as well, so we can resume by writing Without loss of generality, (from Hölder) we can assume that 1 ≤ C i (Z) ≤ C i+1 (Z) and more in general Remark 2.1. Although it is not explicitly written, we are assuming that we fix n and that the laws of Z k and G k , as well as σ k , are all depending on n. In our applications, we take a sequence Y = {Y k } k of i.i.d. centred r.v's taking values in R m and we consider Z k = 1 √ n C k Y k , where C k denotes a d × m matrix. Therefore, we actually study in which c i (Y ) denotes a constant depending only on (the law of ) the Y k 's, so that (2.4) actually holds. We will specialize the results to this case. But in order to relax the notation and the proofs, it is much more useful to consider a general Z k instead of 1 √ n C k Y k . In order to give the expression of the terms which appear in the Edgeworth development we need to introduce some notation. We say that α is a multiindex if α ∈ {1, . . . , d} k for some k ≥ 1, and we set |α| = k its length. We allow the case k = 0, giving the void multiindex α = ∅. Let α be a multiindex and set k = |α|. For for x ∈ R d and f : , the case k = 0 giving x ∅ = 1 and ∂ ∅ f = f . In the following, we denote with C k (R d ) the set of the functions f such that ∂ α f exists and is continuous for any α with |α| ≤ k. and l k (f ) to be some constants such that Moreover, for a non negative definite matrix σ ∈ M d×d we denote by L σ the Laplace operator associated to σ, i.e. For r ≥ 1 and l ≥ 0 we set Notice that D (l) r ≡ 0 for l = 0, 1, 2 and, by (2.4), for l ≥ 3 and |α| = l then |∆ α (r)| ≤ 2C l (Z) n l/2 , r = 1, . . . , n. (2.8) We construct now the coefficients of our development. Let N be fixed: this is the order of the development that we will obtain. Given 1 ≤ m ≤ k ≤ N we define Then, for 1 ≤ k ≤ N, we define the differential operator (2.10) By using (2.2) and (2.8), one easily gets the following estimates: where L 3k (f ) and l 3k (f ) are given in (2.5) and C, C 3k are positive constants. We introduce now the Hermite polynomials. We refer to Nualart [19] for definitions and properties, here we just give the shortest way to introduce them by means of the integration by parts formula. Given a multi-index α, the Hermite polynomial H α on R d is defined by where W is a standard normal random variable in R d . Moreover for a differential operator Γ = |α|≤k a(α)∂ α , with a(α) ∈ R, we denote H Γ = |α|≤k a(α)H α so that (2.14) Finally we define The main result in this section is the following (recall the constants L k (f ) and l k (f ), f ∈ C k p (R d ), defined in (2.5)): 16) in which N = [N/2], N = N (2N + N + 5), H N is a positive constant depending on N and W denotes a standard normal random variable in R d . 
As a consequence, taking f (x) = x β with |β| = k, one gets Basic decomposition and proof of the main result Let N ∈ {0, 1, . . .}. We define r ≡ 0 for l = 0, 1, 2, the above sum actually begins with l = 3 and of course this is the basic fact. Then, with the convention 2 l=3 = 0, we have We also define For a matrix σ ∈ M d×d we recall the Laplace operator L σ associated to σ in (see (2.6)) and we define In (2.21), W stands for a standard Gaussian random variable. Then we define We now put our problem in a semigroup framework. For a sequence X k , k ≥ 1, of independent r.v.'s, for 1 ≤ k ≤ p we define We use P Z k,p and P G k,p . By using independence, we have the semigroup and the commutative property: P X k,p = P X r,p P X k,r = P X k,r P X r,p k ≤ r ≤ p. (2.24) Moreover, for m = 1, . . . , N we denote N,k,n = k≤r 1 <···<rm≤n P G rm+1,n P G r m−1 +1,rm · · · P G r 1 +1,r 2 P G k,r 1 Q Notice that in the first sum above the conditions q i , q ′ i ∈ {0, 1} and q 1 + · · · + q m + q ′ 1 + · · · + q ′ m > 0 say that at least one of q i , q ′ i , i = 1, ..., m is equal to one. We notice that the operators T 1 N,r i and U 1 N,σr i represent "remainders" and they are supposed to give small quantities of order n − 1 2 (N +1) . So the fact that at least one q i or q ′ i is non null means that the product has at least one term which is a remainder (so is small), and consequently R (m) N,k,n is a remainder also. 8 Finally we define We are now able to give our first result: N,k,n , m = 1, . . . , N + 1, be given through (2.18), (2.20), (2.25), (2.26). Then for every 1 ≤ k ≤ n + 1 and f ∈ C Proof. Step 1 (Lindeberg method) We use the Lindeberg method in terms of semigroups: for 1 ≤ k ≤ n + 1 Then we define (2.28) and the above relation reads We will write (2.29) as a discrete time Volterra type equation (this is inspired from the approach to the parametrix method given in [9]: see equation (3.1) there). For a family of operators F k,p , k ≤ p we define AF by and we write (2.29) in functional form: By iteration, By the commutative property in (2.24), straightforward computations give (2.32) Step 2 (Taylor formula) The drawback of (2.31) is that A depends on P Z also, see (2.28). So, we use now the Taylor's formula in order to eliminate this dependance. We use (2.4) and we consider a Taylor approximation at the level of an error of order n − N+2 2 . We use the following expression for the Taylor's formula: Then we have, with D (l) r defined in (2.7), By using the independence property, one can apply commutativity and by using (2.32) we have Notice that the operator in (2.33) acts on f ∈ C m(N +3) . In particular, the chain P G rm+1,n · · · P G r 1 +1,r 2 P G k,r 1 contains all the steps, except for the steps corresponding to r i , i = 1, ..., m (remark that for each i, P G r i ,r i +1 is replaced with T 0 N,r i + T 1 N,r i ). In order to "insert" such steps we use the backward Taylor formula (B.3) up to order N = [N/2] (see next Appendix B). So, we take h 0 N,σr and h 1 N,σr as in (2.20) and (2.21) respectively and we have U 0 N,r 1 and U 1 N,r 1 being given in (2.22). We use this formula in (2.34) for every i = 1, 2, ..., m and we get Notice that the above operator acts on C Our aim now is to isolate the principal term, that is the sum of the terms where only U 0 N,r i and T 0 N,r i appear. So we write N,k,n in (2.25). In order to compute the first one we notice that for every r ′ < r < r ′′ we have Then, for m = 1, ..., N We treat now A N +1 P Z . 
Using (2.33) we get We give now some useful representations of the remainders. where a n (α) ∈ R are suitable coefficients with the property Proof. In a first step we construct the measures µ α r 1 ,...,rm and the operators θ α r 1 ,...,rm and in a second step we prove that the corresponding coefficients a r 1 ,...,rm n (α) verify (2.37). We start by representing T 0 N,r defined in (2.18). Set (2.40) Hereafter γ denotes a non negative power. Concerning for every Borel set A. Then we have We represent now the operator U 0 So, by denoting ρ 0 σr the law of G r , we have (2.42) We now obtain a similar representation for h 1 N,σ f (x) defined in (2.21). Set in which φ σ 1/2 √ s W denotes the density of a centred Gaussian r.v. with covariance matrix sσ. Then we write (2.43) Using (2.40), (2.41), (2.42) and (2.43) we obtain (2.36) with the measure µ α r 1 ,...,rm from (2.39) constructed in the following way: where η i is one of the measures ν q,β r i , q = 0, 1, andη i is one of the measures ρ q σr i , q = 0, 1. Let us check that the coefficients a r 1 ,...,rm n (α) which will appear in (2.36) verify the bounds in (2.37). 1} and at least one of them is equal to one. And a r 1 ,...,rm n (α) is the product of coefficients which appear in the representation of U We finally prove (2.38). We have N,r 1 ,...,r N+1 is clearly the same. We give now the representation of the "principal term": with Γ k defined in (2.10) and Proof. Let Λ m and Λ m,k be the sets in (2.9). Notice that, for fixed m, the Λ m,k 's are disjoint which is a differential operator of the form (2.45). Moreover, the coefficients c n (α) can be bounded as follows: and the estimate in (2.45) holds as well. We are now ready for the Proof of Theorem 2.2 We denote P X n = P X 1,n+1 , with X = Z or X = G, so that We have proved that with so it is sufficient to study the remaining terms I 1 , I 2 and I 3 above. Consider first m ∈ {1, ..., N }. We use Lemma 2.4 (recall N m given therein) and in particular (2.36): Since the G k 1 k / ∈{r 1 ,...,rm} 's are centred and independent, we can use the Burkholder inequality (see next (3.26), which gives and by inserting, we get We use now this inequality with g = θ α r 1 ,...,rm ∂ α f : by applying (2.38) we get Moreover, using (2.37) , H N denoting a constant depending on N only. Since the set {1 ≤ r 1 < ... < r m ≤ n} has less than n m elements, we get The estimate for I 2 (f ) is analogous. Concerning I 1 f , we use (2.45) in order to obtain in which we have again used the Burkholder inequality (3.26). By using C p ( with N = N (2N + N + 5), and statement (2.16) follows. Concerning (2.17), it suffices to notice that for f (x) = x β with |β| = k then L N (f ) = 1 and l N (f ) = k. Differential calculus based on a splitting method In this section we use the variational calculus settled in [2,1,7,8] in order to treat general test functions. Let us give the definitions and the notation. We say that the law of the random variable Y ∈ R d is locally lower bounded by the Lebesgue measure if there exists y Y ∈ R d and ε, r > 0 such that for every non negative and measurable function f : We denote by L(r, ε) the class of the random variables which verify (3.1). 
Given r > 0 we consider the functions a r , ψ r : R → R + defined by The advantage of ψ r (|y − y Y | 2 ) is that it is a smooth function (which replaces the indicator function of the ball) and (it is easy to check) that for each l ∈ N, p ≥ 1 there exists a universal constant C l,p ≥ 1 such that where a (l) r denotes the derivative of order l of a r . Moreover one can check (see [3]) that if Y ∈ L(2r, ε) then it admits the following decomposition (the equality is understood as identity of laws): where χ, U, V are independent random variables with the following laws: P(χ = 1) = εm(r) and P(χ = 0) = 1 − εm(r), We are now able to present our calculus. We fix r, ε > 0 and we consider a sequence of independent random variables Y k ∈ L(2r, ε), k ∈ N. Then, using the procedure described above we write the law of χ k , U k and V k being given in (3.5). We assume that χ k , U k , V k , k ∈ N, are independent. We define G = σ(χ k , V k , k ∈ N). A random variable F = f (ω, U 1 , ..., U n ) is called a simple functional if f is G × B(R d×n ) measurable and for each ω, f (ω, ·) ∈ C ∞ b (R d×n ). We denote S the space of the simple functionals. Moreover we define the differential operator D : S → l 2 := l 2 (R d ) by D (k,i) F = χ k ∂ u i k f (ω, U 1 , ..., U n ). Then the Malliavin covariance matrix of F ∈ (F 1 , ..., F m ) ∈ S m is defined as We introduce now the Ornstein-Uhlenbeck operator L. We denote , p U k being the density of U k , and we define Using elementary integration by parts on R d one easily proves the following duality formula: for Finally, for q ≥ 2, we define |F | q,p = F q,p + LF q−2,p . (3.12) We recall now the basic computational rules and the integration by parts formulae. For φ ∈ C 1 (R d ) and F = (F 1 , ..., F d ) ∈ S d we have and for F, G ∈ S L(F G) = F LG + GLF − 2 DF, DG . (3.14) The formula (3.13) is just the chain rule in the standard differential calculus and (3.14) is obtained using duality. Let H ∈ S. We use the duality relation and We give now the integration by parts formula (this is a localized version of the standard integration by parts formula from Malliavin calculus). Proof. We give here only a sketch of the proof, a detailed one can be found e.g. in [4] and [7]. Using the chain rule Dφ(F ) = ∇φ(F )DF so that It follows that, on the set det σ F > 0,we have ∇φ(F ) = γ F Dφ(F ), DF l 2 . Then, by using (3.15) we get and (3.15)-(3.16) hold. By iteration one obtains the higher order integration by parts formulae. We give now useful estimates for the weights which appear in (3.17): Lemma 3.2. Let m, q ∈ N, F ∈ S d and G ∈ S. There exists a universal constant C ≥ 1 (depending on d, m, q only) such that for every multi index α with |α| = q one has In particular we have Proof. A rather long but straightforward computation (see [7] or [4] Theorem 3.4, more precise details are given in [5]) gives Notice that Moreover, on the set Ψ η (det σ F ) = 0 we have det σ F ≥ η/2. So Taking now m = 0 and using Schwartz inequality we obtain (3.19). We go now on and we give the regularization lemma. We recall that a super kernel φ : R d → R is a function which belongs to the Schwartz space S (infinitely differentiable functions which decrease in a polynomial way to infinity), φ(x)dx = 1, and such that for every multi indexes α and β, one has y α φ(y)dy = 0, |α| ≥ 1, (3.20) |y| m |∂ β φ(y)| dy < ∞. As usual, for |α| = m then y α = m i=1 y α i . Since super kernels play a crucial role in our approach we give here the construction of such an object (we follow [17] Section 3, Remark 1). 
We do it in dimension d = 1 and then we take tensor products. So, if d = 1 we take ψ ∈ S which is symmetric and equal to one in a neighborhood of zero and we define φ = F −1 ψ, the inverse of the Fourier transform of ψ. Since F −1 sends S into S the property (3.21) is verified. And we also have 0 = ψ (m) (0) = i −m x m φ(x)dx so (3.20) holds as well. We finally normalize in order to obtain φ = 1. We fix a super kernel φ. For δ ∈ (0, 1) and for a function f we define the symbol * denoting convolution. For f ∈ C k p (R d ), we recall the constants L k (f ) and l k (f ) in (2.5). Lemma 3.3. Let F ∈ S d and q, m ∈ N. There exists a constant C ≥ 1, depending on d, m and q only, such that for every f ∈ C q+m p (R d ), every multi index γ with |γ| = m and every η, δ > 0 As a consequence, we have (3.24) Proof A. Using Taylor expansion of order q Using (3.20) we obtain I(x, y)φ δ (x − y)dy = 0 and by a change of variable we get 20 Using integration by parts formula (3.17) (with G = 1) and the upper bound from (3.19) (with p = 2) we get CLT and Edgeworth's development In this section we take F = S n (Z) = n k=1 Z k defined in (2.1). It is convenient for us to write We assume that Y k ∈ L(2r, ε) so we have the decomposition (3.7). Consequently We will use Lemma 3.3, so we estimate the quantities which appear in the right hand side of (3.22). Proof. We will use the following easy consequence of Burkholder's inequality for discrete martingales: if M n = n k=1 ∆ k with ∆ k , k = 1, ..., n independent centred random variables, then Using this inequality and (2.4) we obtain S n (Z) p ≤ C × C p (Z). We look now to the Sobolev norms. It is easy to see that, S n (Z) i denoting the ith component of S n (Z), and D (l) S n (Z) = 0 for l ≥ 2. Since n k=1 |σ k | ≤ C 2 (Z) it follows that We prove that 27) C depending on k, p but being independent of n. Let k = 0. The duality relation gives E(LZ k ) = E( D1, DZ k l 2 ) = 0. Since the LZ k 's are independent, we can apply (3.26) first and (2.4), so that We take now k = 1. We have so that, using again (2.4), We notice that D (q,j) A r (U q ) is not null for r < |U q − y Y | 2 < 2r and contains the derivatives of a r up to order 2, possibly multiplied by polynomials in the components of U q − y Y of order up to 2. Since |U q − y Y | 2 ≤ 2r, by using (3.3) one obtains E(|DLF | p l 2 ) ≤ Cr −2p × C 1/2 2 (Z), so (3.27) holds for k = 1 also. And for higher order derivatives the proof is similar. We give now estimates of the Malliavin covariance matrix. We have σ Sn(Z) = n k=1 χ k σ k . Lemma 3.6. Let q, m ∈ N. There exists some constant C ≥ 1, depending just on q, m, such that for every δ > 0, every multi index γ with |γ| = m and every f ∈ C m p (R d ) one has c p being given in (3.23). Proof. We will use Lemma 3.3. Notice first that, by (3.25), the constant C q+m (S n (Z)) defined in (3.23) is upper bounded by CC 2d(q+m)(q+m+3) 4d(q+m)(q+m+3) (Z)r −(q+m+1) , C depending on d and q + m. And by using the Burkholder inequality (3.26), one has S n (Z) c p (f ) being given in (3.23). We take now η = ( λ n m(r) 2(1+λn) ) d and we use (3.29) in order to obtain We are now able to characterize the regularity of the semigroup P Z n : Proof. We take η = ( λ n m(r) 2(1+λn) ) d and the truncation function Ψ η and we write We estimate first In order to estimate J we use integration by parts and we obtain J = E(f (x + S n (Z))H γ (S n (Z), Ψ η (det σ Sn ))) Then using (3.19) and (3.25) We are now able to give the main result. Theorem 3.8. 
We look to S n (Z) = n k=1 Z k = n k=1 C k Y k and we assume that Y k ∈ L(2r, ε) for some ε > 0, r > 0. We also assume that (1.2) and (1.3) hold (for every p ∈ N). Let N, q ∈ N be fixed. We assume that n is sufficiently large in order to have n 1 2 (N +1) e − m 2 (r) 128 ×n ≤ 1 and n ≥ 4(N + 1)C 2 (Z). There exists C ≥ 1, depending on N and q only, such that for every multi index γ with |γ| = q and every f ∈ C q p (R d ) c p being given in (3.23). Proof. So, taking all estimates, we obtain So, and (3.40) is proved in Case 1. Notice that P G r N +1,r N+1 · · · P G r 1 +1, where G is a centred Gaussian random variable of variance Now the proof follows as in the previous case. So (3.40) is proved. And, summing over r 1 < r 2 < · · · < r N +1 ≤ n we get Exactly as in Case 2 presented above (using standard integration by parts with respect to the law of Gaussian random variables) we obtain So, (3.36) is proved. Step 2. We now come back and we replace L q+(N +1)(N +3) (f ) by L q (f ) in (3.36). We will use the regularization lemma. So we fix δ > 0 (to be chosen in a moment) and we write and l m (f ) = l 0 (f ). So, We use now (3.30) with x = 0 and with some h to be chosen in a moment. We then obtain with Q h,q (Z) defined in (3.31) (In order to identify the notation from (3.31) we recall that q = |γ| was denoted by m in (3.31) and h, which we may choose as we want, was denoted by q in (3.31)). And we also have A ′′ δ (f ) ≤ CL 0 (f )δ h (the proof is identical to the one of (3.24) but one employs usual integration by parts with respect to the Gaussian law). We put all this together and we obtain We take now δ such that δ h = 1 δ q+(N +1)(N +3) e − m 2 (r)n 64 so that We take now n sufficiently large in order to have The statement now follows by observing that, with C * (Z) given in (3.34), The result in Theorem 3.8 holds under the following slightly weaker condition (which will be used in the proof of Corollary 3.12 below). Proposition 3.9. Assume that for some m < n one has Y k ∈ L(2r, ε) for k ≤ n − m and n−m k=1 σ k ≥ 1 2 I. Then (3.33) holds true. Proof. The idea is that, since n−m k=1 σ k ≥ 1 2 I, the random variables Y k , k ≤ n − m contain sufficient noise in order to give the regularization effect. We show the main changes in the estimate of I 2 (f ) (for I 1 (f ), I 3 (f ) the proof is analogues). We split P Z r N+1 +1,n = P Z r N+1 +1,n−m P Z n−m,n and we need to have sufficient noise in order that P Z r N+1 +1,n−m gives the regularization effect. Then, the two cases described in ( where W is a standard Gaussian random variable and Φ σn N is defined in (2.10) using Z k = σ ). And C σn * (Z) = (λ dq n /λ −q n )C * (Z) with C * (Z) given in (3.34) and λ n respectively λ n the lower respectively the larger eigenvalue of σ n . Finally r = r(λ n /λ n ) d . (3.45) Here γ d is the density of the standard normal law in R d . the last inequality being true by our choice of δ n . Moreover with |R ′ (n)| ≤ C n 1 2 (N+1) the last inequality being again a consequence of the choice of δ n . We now prove a stronger version of Prohorov's theorem. We consider a sequence of identical distributed, centred random variables X k ∈ R d which have finite moments of any order and we look to Following Porhorov we assume that there exist m ∈ N such that for some measurable non negative function ψ. Corollary 3.12. We assume that (3.46) holds. We fix q, N ∈ N. 
There exists two constants 0 < c * ≤ 1 ≤ C * (depending on N and q) such that the following holds: if then, for every multi-index γ with |γ| ≤ q and foe every f ∈ C q p (R d ) one has Proof. We denote Notice that we may take ψ in (3.46) to be bounded with compact support. Then ψ * ψ is continuous and so we may find some r > 0, ε > 0 and y ∈ R d such that ψ * ψ ≥ ε1 Br (y) . It follows that Y k ∈ L(2r, ε) and we may use the previous theorem in order to obtain (3.47) for n = 2m × n ′ with n ′ ∈ N. But this is not satisfactory because we claim that (3.47) holds for every n ∈ N. This does not follow directly but needs to come back to the proof of Theorem 3.8 and to adapt it in the following way. Suppose that 2mn ′ ≤ n < 2m(n ′ + 1). Then Since X k , 2mn ′ + 1 ≤ k ≤ n have no regularity property, we may not use them in the regularization arguments employed in the proof of Theorem 3.8. But Y k , 1 ≤ k ≤ n ′ contain sufficient noise in order to achieve the proof (see Remark 3.9). An invariance principle related to the local time In this section we consider a sequence of independent identically distributed, centred random variables Y k , k ∈ N, with finite moments of any order and we denote Our aim is to study the asymptotic behaviour of the expectation of L n (Y ) = 1 n n k=1 ψ εn (S n (k, Y )) with ψ εn (x) = 1 2ε n 1 {|x|≤εn} . So L n (Y ) appears as the occupation time of the random walk S n (k, Y ), k = 1, ..., n, and consequently, as ε n → 0, one expects that it has to be close to the local time in zero at time 1, denoted by l 1 , of the Brownian motion. In fact, we prove now that E(L n (Y )) → E(l 1 ) as n → ∞. Theorem 4.1. Let ε n = n − 1 2 (1−ρ) with ρ ∈ (0, 1). We consider a centred random variable Y ∈ L(r, ε) which has finite moments of any order and we take a sequence Y i , i ∈ N of independent copies of Y. We define N (Y ) = max{2k : E(Y 2k ) = E(G 2k )} − 1 ≥ 1 and we denote p N (Y ) = 8(1 + (N (Y ) + 1)(N (Y ) + 3))(4 + (N (Y ) + 1)(N (Y ) + 3)). For every η < 1 there exists a constant C depending on r, ε, ρ, η and on Y p N(Y ) such that (4.1) The above inequality holds for n which is sufficiently large in order to have Proof. All over this proof we denote by C a constant which depends on r, ε, ρ, η and on Y p N(Y ) (as in the statement of the lemma) and which may change from a line to another. Using Chebyshev's inequality and Burkholder's inequality we obtain for every p ≥ 2 And the same estimate holds with Y i replaced by G i . We conclude that We now write We will use (3.33) with f = h k n ,n and ∂ γ will be the first order derivative. Then, by (3.33) with Here C is the constant from (3.33) defined in (3.34). Notice that by (4.2), for k ≥ k n = n ηρ one has We recall now that (see (2.15)) with H Γ l (x) linear combinations of Hermite polynomials (see (2.10) and (2.14)). Notice that if l is odd then Γ l is a linear combination of differential operators of odd order (see the definition of Λ m,l in (2.9)). So H Γ l is an odd function (as a linear combination of Hermite polynomials of odd order) so that ψ εn × H Γ l is also an odd function. Since W 1 and −W 1 have the same law, it follows that and consequently Moreover, by the definition of N (Y ), for 2l ≤ N (Y ) we have E(Y 2l ) = E(G 2l ) so that H Γ 2l = 0. We conclude that We put now together the results from the first and the second step and we obtain (4.1). Step 3. We prove (4.3). Recall first the representation formula where l a 1 denotes the local time in a ∈ R at time 1, so that l 1 = l 0 1 . 
Since a → l a 1 is Hölder continuous of order ρ ′ 2 for every ρ ′ < 1, we obtain (4.5) We prove now that, for every ρ ′ < 1 and n large enough, (4.6) 35 To begin we notice that S n (k, G) has the same law as W k/n , so that we write As above, we take k n = n ρη and for k ≤ k n , we have Since P(|W s | ≥ ε n ) ≤ C exp(− ε 2 2s ), this immediately gives for n large enough. We consider now the case k ≥ k n . Using a formal computation, by applying the standard Gaussian integration by parts formula, we write in which we have used (4.4) and where H 3 denotes the third Hermite polynomial. The above computation is formal because ψ εn is not differentiable. But, since the first and the last term in the chain of equalities depends on ψ εn only (and not on the derivatives) we may use regularization by convolution in order to do it rigorously. Notice also that the first equality is obtained using Ito's formula and the last one is obtained using integration by parts. It follows that Convergence in distribution norms In this section we prove that, under some supplementary regularity assumptions on the laws of Z k , k ∈ N, Theorem 3.8 implies that the density of the law of S n (Z) converges in distribution norms to the Gaussian density. We write and we denote σ k = C k C * k . We assume that 0 < σ ≤ σ k ≤ σ < ∞, and sup k Y k p p < ∞. (4.7) In particular each σ k is invertible. We denote γ k = σ −1 k . Notice that the normalization condition is For a function f ∈ C 1 (R d ) and for k ∈ N we denote Proposition 4.2. A. We fix q ∈ N and we also fix a polynomial P. Suppose that Y i ∈ L(r, ε), i ∈ N and (4.7) holds. Moreover we suppose that P(Y i ∈ dy) = p Y i (y)dy with p Y i ∈ C 1 (R d ) for every for i = 1, ..., q. There exist some constants c ∈ (0, 1) (depending on r and on ε) and C q (P ) ≥ 1 (depending on q, σ, σ and on P ) such that, if n (q+1)/2 e −cn ≤ 1, then, for every f ∈ C q p (R d ), and every multi-index α with |α| ≤ q |E(P (S n (Z))∂ α f (S n (Z)) − E(P (S n (G))∂ α f (S n (G))| ≤ C q (P ) √ n q i=1 m 1,l 0 (f )+l 0 (P ) (p Y i ) × L 0 (f ). (4.9) B. Moreover, if p Sn is the density of the law of S n (Z) then, if n (d+q+1)/2 e −cn ≤ 1, we have where γ is the density of the standard normal law in R d . Proof A. We proceed by recurence on the degree k of the polynomial P . First we assume that k = 0 (so that P is a constant) and we prove (4.9) for every q ∈ N. We write Then we define and we have E(∂ α f (S n (Z)) = E(∂ α g(S (q) n (Z))). Now using (3.44) with N = 0 for S (q) n (Z) we get E(∂ α g(S (q) n (Z))) = E(∂ α g(S (q) n (G))) + R n = E ∂ α f with |R n | ≤ C 1 √ n L 0 (g) + e −cn L q (g) . Let us estimate L q (g). We recall that γ i = σ −1 i . For α = (α 1 , ..., α q ) we have (4.13) in which we have assumed that the Y i 's take values in R m . So p Y i (y i )dy 1 ...dy q = (−1) q n q/2 m β 1 ,...,βq=1 It follows that |∂ α g(x)| ≤ Cn q/2 L 0 (f ) We conclude that l q (g) = l 0 (f ) and L q (g) ≤ Cn q/2 L 0 (f ) q i=1 m 1,l 0 (f ) (p Y i ). The same is true for q = 0 and so (4.12) gives the last inequality being true if n q/2 e −cn ≤ n −1/2 . So (4.11) says that we succeed to replace Y i , q + 1 ≤ i ≤ n by G i , q + 1 ≤ i ≤ n and the price to be paid is CL 0 (f ) q i=1 m 1,l 0 (f ) (p Y i ) × 1 √ n . Now we can do the same thing and replace Y i , 1 ≤ i ≤ q by G i , 1 ≤ i ≤ q and the price will be the same (here we use C i G i , i = q + 1, ..., 2q instead of C i Y i , i = 1, ..., q). So (4.9) is proved for polynomials P of degree k = 0. 
We assume now that (4.9) holds for every polynomials of degree less or equal to k − 1 and we prove it for a polynomial P of order k. We have Since |β| ≥ 1 the polynomial ∂ β P has degree at most k − 1. Then the recurrence hypothesis ensures that (4.9) holds for ∂ β P × ∂ γ f. Moreover, using again (4.9) for g = P × f we obtain (4.9) in which L 0 (g) ≤ L 0 (P )L 0 (f ) and l 0 (g) ≤ l 0 (P ) + l 0 (f ) appear. So A. is proved. Remark 4.3. We would like to obtain Edgeworth expansions as well -but there is a difficulty: when we use the expansion for S (q) n (Z) we are in the situation when the covariance matrix of S (q) n (Z) is not the identity matrix. So the coefficients of the expansion are computed using a correction (see the definition of ∆ k in the Remark 3.10). And this correction produces an error of order n −1/2 . This means that we are not able to go beyond this level (at least without supplementary technical effort). A Computation of the first three coefficients We explicitly write the expression of Γ k for k = 1, 2, 3 (for larger values of k the term Γ k is difficult to explicitly compute). Recall formulas (2.10) for Γ k and formula (2.9) for the set Λ m,k appearing in (2.10). We consider now a sequence of independent centred Gaussian random variables G k with covariance matrix σ k and we denote S p = p k=1 G k . Moreover, for a matrix σ ∈ M d×d we define the operators where W is a d−dimensional Brownian motion independent of S p .
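As a purely illustrative aside (not part of the original paper), the following Monte Carlo sketch in Python checks numerically the convergence E(L_n(Y)) → E(l_1) asserted in Theorem 4.1. It assumes that S_n(k, Y) denotes the normalized partial sum n^(−1/2)(Y_1 + ... + Y_k) with unit-variance Y, which is consistent with the remark in the proof that S_n(k, G) has the same law as W_{k/n}; the choice of Y, of ρ, and of the sample sizes are assumptions made only for the demonstration.

```python
import numpy as np

# Hedged numerical illustration of E(L_n(Y)) -> E(l_1) from Theorem 4.1, with
# L_n(Y) = (1/n) * sum_{k<=n} psi_{eps_n}(S_n(k, Y)),
# psi_eps(x) = (1/(2*eps)) * 1{|x| <= eps}, and eps_n = n^{-(1-rho)/2}.
# Assumption: S_n(k, Y) = n^{-1/2} (Y_1 + ... + Y_k) with Var(Y) = 1.
# For Brownian motion, E(l_1) = E|W_1| = sqrt(2/pi) ~ 0.798 by Levy's identity.

rng = np.random.default_rng(0)

def mean_occupation_time(n, rho=0.5, trials=400):
    eps_n = n ** (-0.5 * (1.0 - rho))
    # Y uniform on [-sqrt(3), sqrt(3)]: centred, unit variance, absolutely continuous,
    # so it carries the kind of "noise" required by the Doeblin-type condition.
    y = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(trials, n))
    s = np.cumsum(y, axis=1) / np.sqrt(n)           # S_n(k, Y), k = 1, ..., n
    psi = (np.abs(s) <= eps_n) / (2.0 * eps_n)      # psi_{eps_n}(S_n(k, Y))
    return psi.mean(axis=1).mean()                  # Monte Carlo estimate of E(L_n(Y))

# Convergence is slow; the estimates drift toward sqrt(2/pi) ~ 0.798 as n grows.
for n in (1_000, 4_000, 16_000):
    print(n, round(mean_occupation_time(n), 3))
```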
12,089
2016-06-06T00:00:00.000
[ "Mathematics" ]
Enhancing semantic segmentation in chest X-ray images through image preprocessing: ps-KDE for pixel-wise substitution by kernel density estimation Background In medical imaging, the integration of deep-learning-based semantic segmentation algorithms with preprocessing techniques can reduce the need for human annotation and advance disease classification. Among established preprocessing techniques, Contrast Limited Adaptive Histogram Equalization (CLAHE) has demonstrated efficacy in improving segmentation algorithms across various modalities, such as X-rays and CT. However, there remains a demand for improved contrast enhancement methods considering the heterogeneity of datasets and the various contrasts across different anatomic structures. Method This study proposes a novel preprocessing technique, ps-KDE, to investigate its impact on deep learning algorithms to segment major organs in posterior-anterior chest X-rays. Ps-KDE augments image contrast by substituting pixel values based on their normalized frequency across all images. We evaluate our approach on a U-Net architecture with a ResNet34 backbone pre-trained on ImageNet. Five separate models are trained to segment the heart, left lung, right lung, left clavicle, and right clavicle. Results The model trained to segment the left lung using ps-KDE achieved a Dice score of 0.780 (SD = 0.13), while the model trained on CLAHE achieved a Dice score of 0.717 (SD = 0.19), p < 0.01. ps-KDE also appears to be more robust, as CLAHE-based models misclassified right lungs in select test images for the left lung model. The algorithm for performing ps-KDE is available at https://github.com/wyc79/ps-KDE. Discussion Our results suggest that ps-KDE offers advantages over current preprocessing techniques when segmenting certain lung regions. This could be beneficial in subsequent analyses such as disease classification and risk stratification. Background With recent advances in artificial intelligence, deep learning (DL) has emerged as a leading machine-learning technique in medical imaging analysis, playing a transformative role in tasks such as image segmentation [1][2][3]. This capability extends to various applications, including the segmentation of breast lesions [4,5], classification of pulmonary cancer stages [6], tissue characterization [7], detection of cardiomegaly [8], and many more. The improved performance for these intricate tasks suggests the potential of computer-aided techniques to improve diagnosis via segmentation. Within radiology, the segmentation of organs and tumors in medical images holds promise for disease diagnosis and treatment [7]. One early approach was the fully convolutional network (FCN), which pioneered pixel-to-pixel semantic segmentation [9]. FCN's innovation lies in replacing the last fully connected layer with a deconvolutional layer. Building upon this foundation, the U-Net model, a modification of FCN, increases the number of deconvolutional layers and therefore effectively captures more context while requiring fewer training samples [10]. Notably, U-Net has found widespread application in segmenting medical images across various modalities, including X-rays, Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and histopathology [3,11,12].
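The study does not state which software implementation was used for the network; as an illustration only, the following sketch shows one common way to instantiate a U-Net with a ResNet34 encoder pre-trained on ImageNet using the Keras-based segmentation_models package. The package choice, input size, optimizer, and loss are assumptions for this sketch, not the authors' configuration.

```python
# Minimal sketch (not the authors' code): a U-Net with a ResNet34 encoder
# pre-trained on ImageNet, built with the Keras-based `segmentation_models`
# package. All settings below are illustrative assumptions.
import segmentation_models as sm

sm.set_framework("tf.keras")

model = sm.Unet(
    backbone_name="resnet34",        # ResNet34 encoder
    encoder_weights="imagenet",      # ImageNet pre-training
    classes=1,                       # one binary mask per model (e.g., left lung)
    activation="sigmoid",
    input_shape=(256, 256, 3),       # grayscale radiographs replicated to 3 channels
)

model.compile(
    optimizer="adam",
    loss=sm.losses.bce_jaccard_loss,   # BCE + Jaccard, cf. the loss comparison below
    metrics=[sm.metrics.iou_score],
)
```

A single-channel sigmoid output treats each anatomic structure as an independent binary segmentation task, which matches the five separate per-structure models described later in the Methods.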
Current research has focused much on the development of pipelines for the automatic segmentation of medical images, leveraging both preprocessing techniques and the U-Net architecture. A popular generalizable segmentation tool, nnU-Net, which ranked 1st in the Medical Segmentation Decathlon, demonstrated the importance of preprocessing to model performance [13]. Remarkably, even simpler U-Net architectures with self-configured preprocessing procedures have outperformed more intricate model architectures [14]. Contrast enhancement (i.e., enhancing the brightness difference between objects and backgrounds), as a pivotal step for X-ray and CT preprocessing, plays a crucial role in providing human viewers and computer-aided algorithms with crucial features to facilitate analysis [15]. Related works in contrast enhancement Histogram equalization. Histogram equalization (HE) is a widely used digital image processing method to enhance the contrast of images. It expands an image's distribution range, as some images might only occupy a small portion of the entire value range. The resulting distribution of pixel values becomes more similar to a uniform distribution. However, since most images usually use the whole range of intensity (for instance, 0-255 for a standard RGB image), the HE method would not have much impact on those images [16]. Adaptive Histogram Equalization (AHE). The AHE method, as a result, was developed to address this limitation [16]. In AHE, images are divided into subsections, and each subsection is equalized separately. Compared to regular HE, AHE enhances local contrast but with the risk of over-amplifying noise in some regions. Nonetheless, AHE emerged as a popular image preprocessing method in medical imaging applications [15,17,18]. Contrast Limited Adaptive Histogram Equalization (CLAHE). An enhancement upon traditional AHE methods, CLAHE, was introduced by clipping histograms to constrain the contrast [16]. It clips the outliers in histograms and redistributes the values across the value range [16]. Recently, several studies have shown the advantages of CLAHE in DL-related tasks, such as predicting five stages of diabetic retinopathy [19], segmenting temporomandibular joint articular disks from MRI [20], and classification of COVID-19 and other pneumonia cases [21]. Deep-learning-based contrast enhancement. Neural-network-based image enhancement has emerged in recent years. Anand et al. introduced a contrast diffusion model that learned different contrast levels from low- and high-contrast CXR images [22]. Wei et al. proposed an unsupervised, deep Retinex model for low-light image enhancement via a Decom-Net for decomposition and an Enhance-Net for illumination adjustment [23]. While current methods do exist, there remains a need to pioneer more efficient contrast-enhancement techniques with adequate interpretability. Furthermore, the scarcity of large datasets in real clinical environments poses a challenge to the development of deep-learning-based contrast-enhancement methods that can be generalized effectively.
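Since CLAHE serves as the main baseline later in the paper, a short, hedged example of how it is typically applied to a grayscale radiograph may be useful; the OpenCV parameters shown (clip limit, tile grid) are common defaults, not the settings used in this study, and the file names are hypothetical.

```python
# Hedged illustration of CLAHE on a grayscale chest X-ray using OpenCV.
# The clip limit and tile grid below are common defaults, not the authors' settings.
import cv2

gray = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name

# CLAHE: per-tile histogram equalization with the histogram clipped so that
# local contrast (and noise) is not over-amplified.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)

cv2.imwrite("chest_xray_clahe.png", enhanced)
```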
In this study, we propose a novel, histogram-based, contrast-enhancing method termed ps-KDE.We apply this contrast-enhancement method along with deep-learning segmentation algorithms (e.g., U-Net with ResNet backbones) to the various anatomic structures in a small dataset of X-ray images.We then assess its impact on the performance of deep-learning segmentation based on multiple commonly used evaluation metrics, including Dice/F1-score, Intersection of Union (IoU), recall, and precision.ps-KDE brings three notable contributions: 1) it presents an end-to-end data enhancement method characterized by its simplicity of implementation and adaptability for fine-tuning to accommodate diverse datasets; 2) it demonstrates the efficacy of a density-based augmentation method in segmenting vital organs in chest X-rays; and 3) it establishes the robustness of segmentation algorithms through the interpretation of heatmaps generated by the model. Materials and methods We employed an openly accessible dataset of chest radiographs for our study.The dataset was split equally into a training and a testing set.The images in the training set were augmented through randomized data augmentation and resized to the same resolution for preparation.Then, we performed hyperparameter optimization to identify the optimal parameter configuration.These parameters were integral to the training of our models.After the training phase, the models' performance was evaluated using the test set.S1 Fig illustrates the project's overview in a flowchart format. Data We used a publicly available dataset with 247 posterior-anterior (PA) chest radiographs collected from 13 institutions in Japan and one in the United States.The original radiographs are provided by the Japanese Society of Radiological Technology (JSRT) Database [24] and the manual mask annotations are provided by the Segmentation in Chest Radiology (SCR) Database [25].The chest radiographs are in PNG format, and the labels are in the form of binary masks.Each image in the database was scanned from film to a size of 2048*2048.Among the 247 images, 154 of them showed solitary pulmonary lung nodules, while the remaining 93 images exhibited no signs of lung nodules.The ethnic representation is unknown. Among the subset of patients with nodules, gender distribution was observed as 68 males and 86 females.In contrast, among patients without nodules, the gender distribution consisted of 51 males and 42 females.The mean age for patients with nodules is 60 years old.Each image has five matching masks generated manually by expert radiologists.Each binary mask delineates the boundary of one of the five anatomical structures: heart, left lung, right lung, left clavicle, and right clavicle.Since the original images are in grayscale, with only one color channel, we replicate this single channel to create three channels.This adjustment is necessary to meet the requirement of our deep learning model, which expects inputs to have three color channels. This study utilizes exclusively publicly available data and thus does not require the Institutional Review Board (IRB) review per regulations set by the Office for Human Research Protections (OHRP) within the U.S. Department of Health and Human Services.The data was accessed on the third day of April 2022.The authors had no access to information that could identify individual participants during or after data collection. 
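As a small, hedged sketch of the preparation step described above, the following replicates the single grayscale channel so that a three-channel backbone can consume the radiographs; the file name is hypothetical and the libraries (Pillow, NumPy) are assumptions, not necessarily those used by the authors.

```python
# Sketch of the channel-replication step described above. File name is hypothetical.
import numpy as np
from PIL import Image

img = Image.open("example_radiograph.png").convert("L")   # grayscale chest radiograph
arr = np.asarray(img, dtype=np.float32)
rgb = np.stack([arr, arr, arr], axis=-1)                   # shape (H, W, 3), identical channels
```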
Data augmentation Large quantities of data are often needed to train most deep-learning algorithms successfully.Data augmentation is crucial when large datasets are not feasible in order to prevent overfitting and increase model performance.Five types of augmentation were simultaneously applied to each of the original images and its corresponding mask so that the masks correctly represent the anatomical structures on the augmented images.Augmentations include rotation, horizontal flip, vertical flip, a range for image zooms, and rescale.The rotation can occur between 90 degrees clockwise and counterclockwise of the original orientation.Horizontal and vertical flips occur at a probability of 0.5.The range of zoom is between 0.5 and 1.5 for the original images.All images are then rescaled from the red-green-blue scale [0, 255] to [0,1] and resized to 256x256 pixels to help the predictive models achieve faster convergence and higher stability. Image preprocessing Contrast Limited Adaptive Histogram Equalization (CLAHE). We applied CLAHE to our data.An example of chest X-rays preprocessing with CLAHE is shown in Fig 2a and 2b.The equalization of histograms can be visualized in Fig 3a and 3b.The distribution of pixel values became more uniform after CLAHE. Pixel-wise substitution by Kernel Density Estimation (ps-KDE). During the initial exploration of the data, we observed that the distribution of pixel values appeared to be different from organ to organ.We generated histograms of pixel values in different organs to validate our initial observation.We then performed kernel density estimation (KDE) to get a probability density function (PDF) for each organ (Figs 3c and 4a).The PDFs were calculated based on the training set and were stored as prior knowledge.For each image, we substitute each pixel with the density of that pixel value (Fig 4b).The image would then be mapped to a 0-1 range to ensure consistency among images.In other words, our proposed ps-KDE substitutes pixel value for frequency, so that more frequently occurring pixel values in an organ would have a higher value in the resulting plot.Similar to CLAHE, the results were visually appealing (Fig 2c). Model development We employed a deep learning method for the semantic segmentation of chest radiographs, leveraging the U-Net neural network with ResNet backbone designed for segmentation tasks [10]. 
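To make the pixel-wise substitution described in the ps-KDE paragraph above concrete, here is a minimal sketch under stated assumptions (8-bit pixel values, SciPy's default KDE bandwidth, one organ at a time, optional subsampling for speed); it is not the released implementation, which is available at the GitHub link given in the abstract.

```python
# Minimal ps-KDE sketch (assumptions, not the released implementation):
# 1) estimate a density of pixel values for one organ from training images/masks,
# 2) store it as a 256-entry table ("prior knowledge"),
# 3) substitute each pixel of an image by the density of its value, rescaled to [0, 1].
import numpy as np
from scipy.stats import gaussian_kde

def fit_pskde_table(images, masks, max_samples=200_000):
    """images, masks: lists of uint8 arrays; mask > 0 marks the organ of interest."""
    values = np.concatenate([img[m > 0].ravel() for img, m in zip(images, masks)])
    if values.size > max_samples:                      # subsample for speed (assumption)
        values = np.random.default_rng(0).choice(values, max_samples, replace=False)
    kde = gaussian_kde(values.astype(np.float64))      # default bandwidth (assumption)
    return kde(np.arange(256, dtype=np.float64))       # density of each 8-bit pixel value

def apply_pskde(image, table):
    """Replace each pixel value by its estimated density, then map to [0, 1]."""
    out = table[image.astype(np.intp)]                 # simple lookup-table referencing
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo + 1e-12)
```

Because the 256-entry table is computed once from the training set, applying ps-KDE to a new image reduces to one table lookup per pixel, which is consistent with the computational-cost comparison given later in the Discussion.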
Network architecture and implementation. The network architecture consists of a contracting path and an expansive path. The original design for the contracting path consists of unpadded 3x3 convolutions, followed by rectified linear units and a 2x2 max-pooling layer, whereas the expansive path applies upsampling to each feature map from the contracting path to restore the original input size. The final layer maps the feature vector to the number of classes. Loss functions. A loss function is needed for machine learning models to learn through backpropagation. Multiple loss functions could be used for image segmentation tasks. For example, three loss functions have been reported to perform well: binary cross entropy (BCE), binary cross entropy with Jaccard loss (BCE+JCD), and Dice loss (DL). BCE is one of the most commonly used loss functions for machine learning in binary classification tasks. For the current work, the mask of each location is a zero-or-one matrix, which makes the task similar to a pixel-wise binary classification. Therefore, BCE would be an appropriate loss function to use. In its standard form, considering the ground truth mask gt and the model predicted mask pr over N pixels, BCE(gt, pr) = −(1/N) Σ_i [gt_i log(pr_i) + (1 − gt_i) log(1 − pr_i)]. Another widely used loss function in segmentation tasks is the numeric sum of binary cross entropy and the Jaccard loss, which is derived from the IoU score. The Dice coefficient (DC) is a commonly used metric to calculate similarities between images and is defined similarly to IoU: DC(gt, pr) = 2|gt ∩ pr| / (|gt| + |pr|), with the Dice loss given by 1 − DC. Model training and hyperparameter optimization. For optimizing the hyperparameters, we used five-fold cross-validation with all possible combinations of hyperparameters, including the optimizer, loss function, batch size, and learning rate. The list of tuning spaces for each hyper-parameter is shown in Table 1. To search through the proposed space of hyper-parameters, we used a Bayesian optimization process through the scikit-optimize package. We first defined an objective function that took instances of hyper-parameters, trained the model, and returned the cross-validation scores (CV scores). We then passed the scores to the optimization function of the package. The optimization process assumes the objective function results follow a multivariate Gaussian distribution. It takes all scores observed up to the current iteration, calculates a posterior distribution, and samples the next set of hyper-parameter instances from the posterior distribution. The best combination of hyper-parameters was then chosen for the final model training. After obtaining the optimized hyperparameters, we fitted models using the original images and two distinct pre-processing techniques (i.e., CLAHE and ps-KDE) for the five anatomic structures (i.e., heart, left lung, right lung, left clavicle, right clavicle) with the corresponding best-performing hyperparameters for that task, for a total of 15 models. Our predictive models were then trained for 50 epochs, with the number of steps per epoch set to (number of training samples × 2) / batch size.
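The three candidate losses discussed above can be written compactly; as a hedged reference, the following NumPy sketch gives their standard forms. The smoothing constants and the equal weighting of the BCE and Jaccard terms are illustrative assumptions and may differ from the authors' exact definitions.

```python
# Standard forms of the three candidate losses (illustrative; smoothing terms and
# weighting are assumptions). gt and pr are flattened arrays over pixels:
# gt in {0, 1}, pr in (0, 1).
import numpy as np

def bce_loss(gt, pr, eps=1e-7):
    pr = np.clip(pr, eps, 1.0 - eps)
    return -np.mean(gt * np.log(pr) + (1.0 - gt) * np.log(1.0 - pr))

def jaccard_loss(gt, pr, smooth=1.0):
    intersection = np.sum(gt * pr)
    union = np.sum(gt) + np.sum(pr) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)      # 1 - IoU

def bce_jaccard_loss(gt, pr):
    return bce_loss(gt, pr) + jaccard_loss(gt, pr)                # "BCE+JCD"

def dice_loss(gt, pr, smooth=1.0):
    intersection = np.sum(gt * pr)
    return 1.0 - (2.0 * intersection + smooth) / (np.sum(gt) + np.sum(pr) + smooth)  # "DL"
```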
Model evaluation and interpretability Evaluation metrics. We used intersection over union (IoU) and the Dice coefficient (i.e., F-score, Dice score) to evaluate our models. IoU, also known as the Jaccard index, is a commonly used metric in image segmentation tasks. Considering the ground truth mask gt and the model predicted mask pr, IoU(gt, pr) = |gt ∩ pr| / |gt ∪ pr|. We assume that both masks are image matrices of 0's and 1's; therefore, the area of a mask is a count of 1's in the corresponding pixel matrix. A high IoU score indicates that more pixels are predicted correctly (more true positives) while fewer pixels are missed (fewer false negatives and false positives). The F-score, on the other hand, is a harmonic mean of precision and recall. In this study, we report the F1/Dice score. We evaluated models on the validation set and reported the mean and standard deviation for each evaluation metric. We performed independent-samples t-tests assuming unequal variances (Welch's t-test) to compare the distributions of the Dice scores between preprocessing methods using R (version 4.2.3). The significance level (p = 0.01) was not corrected for multiple comparisons, as none of the comparisons was tested more than once. No significance test was performed on precision, recall, IoU, or accuracy. Generation of contrast-enhanced images and probability heatmaps. To understand our models' classifications, we randomly chose subjects and obtained the probability of each pixel being classified into the organs or clavicles. A heatmap was produced based on the probabilities using Matplotlib. In addition, we overlaid the model's prediction on the original chest X-ray image to evaluate whether the segmentation has clinical merit. Model evaluation Table 2 demonstrates the results of model optimization based on the cross-validation scheme. The best loss function for all five locations was BCE+JCD, which considers both pixel-wise information and intersection maximization. Table 3 illustrates the evaluation results for the various anatomic structures utilizing the three distinct image processing techniques. In terms of technique-specific model performance, our analysis revealed that when using the original images (i.e., without CLAHE or ps-KDE transformation), the heart demonstrated the highest segmentation performance, whereas the left clavicle exhibited the least favorable performance based on IoU and Dice scores. With CLAHE transformations, the heart model maintained its superior performance, albeit with the right clavicle registering the lowest scores. With ps-KDE transformation, the five models achieved a mean IoU ranging from 0.577 (SD = 0.06) in the right clavicle to 0.927 (SD = 0.05) in the heart, with Dice scores ranging from 0.275 (SD = 0.17) in the right clavicle to 0.926 (SD = 0.070) in the heart (Fig 6). Across all three techniques, it is noteworthy that the best-performing model differed significantly from the worst-performing one (p < 2.2 × 10⁻¹⁶) when assessed by the Dice score. Precision and recall closely mirrored the ranking pattern observed in the Dice score, as anticipated. Accuracy is the highest of the reported metrics for all three image processing techniques.
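For reference, here is a hedged sketch of the two overlap metrics defined above computed directly from binary masks, together with the unequal-variance t-test used to compare Dice distributions. The paper performs this test in R; SciPy is shown only for illustration, and the per-image score arrays named below are hypothetical.

```python
# Sketch of the evaluation metrics above, computed from binary masks, plus the
# Welch (unequal-variance) t-test. The per-image score arrays are hypothetical.
import numpy as np
from scipy.stats import ttest_ind

def iou_score(gt, pr):
    gt, pr = gt.astype(bool), pr.astype(bool)
    union = np.logical_or(gt, pr).sum()
    return np.logical_and(gt, pr).sum() / union if union else 1.0

def dice_score(gt, pr):
    gt, pr = gt.astype(bool), pr.astype(bool)
    denom = gt.sum() + pr.sum()
    return 2.0 * np.logical_and(gt, pr).sum() / denom if denom else 1.0

# Comparing per-image Dice scores of two preprocessing methods:
# dice_pskde and dice_clahe would be arrays of per-image Dice values.
# t_stat, p_value = ttest_ind(dice_pskde, dice_clahe, equal_var=False)
```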
Since there is no difference in the ranked order of model performances between the Dice score and mean IoU metrics, we will exclusively present the Dice score to assess organ-specific model performances. This decision is made because the five models achieved a lower performance on this metric compared to that measured by mean IoU, providing a conservative estimate of the effectiveness of ps-KDE. Notably, significant differences in model performance were observed between the regions classified using CLAHE and ps-KDE. Specifically, in the left lung region, CLAHE had a Dice score of 0.717 (SD = 0.19), and ps-KDE had a Dice score of 0.780 (SD = 0.13), p = 0.0026 (Table 3). We observed no differences between the two datasets in heart segmentation, while the CLAHE transformation achieved a significantly better result than ps-KDE in the left clavicle, right clavicle, and right lung. Model interpretation Examples of model predictions with both processing techniques (CLAHE and ps-KDE) are shown in Fig 7. The probability heatmaps showed a decrease in confidence around the edges of the segmentation object. This is more prevalent in the heart and the left clavicle models. Visually, the overlap of the predicted segmentation from ps-KDE and the original X-ray pinpoints the regions that radiologists typically focus on. The partial misclassification of the right lung by the CLAHE technique is discussed in later sections. Discussion In this study, we proposed a novel method, ps-KDE, to substitute pixel values based on a normalized histogram distribution. Our investigation focused on evaluating the performance of the ResNetUnet architecture in the context of segmentation tasks, specifically applied to 247 chest X-rays with PA projection. We assessed each model's segmentation capabilities across five distinct anatomic structures, considering the impact of preprocessing techniques such as ps-KDE and CLAHE. We present ps-KDE as an end-to-end data enhancement method, which transforms raw X-ray images into augmented versions. As an overview, implementing ps-KDE involves traversing a representative image pool to calculate the frequency and density of each pixel value, which is then stored as prior knowledge. Both frequency and density computations can be achieved through a single programming-language function call. Subsequently, users only need to assign each pixel value to its corresponding density, making the implementation accessible to most researchers with minimal programming experience. Crucially, this adaptability allows fine-tuning of ps-KDE to match the representation nuances of various datasets, whether at departmental, institutional, or national scales. We first compared results within each technique (i.e., original, CLAHE, and ps-KDE), revealing a substantial gap between the highest and lowest Dice scores. These fluctuations should be concerning, as the test images came from the same dataset. This suggests that although U-Net is supposedly designed for end-to-end biomedical image segmentation with very few samples, this algorithm, originally validated on electron microscopy image stacks, may not generalize to radiographs [10]. We found that, in general, the models predicting the lung regions and the heart have the highest Dice scores, whereas in the clavicle regions the Dice score may drop below 0.6. The higher performance on large regions suggests the model could recognize larger patterns but fell short on smaller ones within the X-ray.
We then evaluated the efficacy across the three processing techniques on models targeting the same organ.We observed that models preprocessed with CLAHE have higher IoU and Dice scores (Fig 6a and 6b) in the left clavicle regions compared to the original image models.The ps-KDE method, on the other hand, showed better performance in the left lung model than CLAHE.This means the combined use of both preprocessing techniques through a dynamic voting algorithm could be useful by harnessing the advantage of CLAHE in smaller regions and that of ps-KDE in larger regions.The novelty of the ps-KDE method lies within utilizing histogram values not only to generate density estimations but also to execute substitutions.Therefore, such combination allows the pixel substitution to benefit from CLAHE which has a more uniform overall distribution.By enabling accurate and consistent identification of anatomical structures, our proposed technique stands to enhance the precision of subsequent disease detection algorithms.Furthermore, given its demonstrated superior performance in segmenting specific anatomical structures in chest X-rays, we hypothesize that more advanced imaging techniques, such as CT scans, could potentially benefit from a similar approach. While ps-KDE showed superior performance in some regions, it is also essential to examine why it may have underperformed in others.Specifically, in the right clavicle region, there is a notable difference in dice score between ps-KDE and CLAHE.Given that the dice score combines precision and recall, examining both metrics reveals that precision shows roughly double the difference compared to recall between ps-KDE and CLAHE.Lower precision indicates a reduced ability to distinguish between true and false positives.Considering that the clavicles are much smaller anatomical structures compared to others, the algorithm might overly contour these regions, leading to reduced precision.This further suggests that ps-KDE may only be suitable for segmenting larger areas.As a potential improvement, it might be worth considering incorporating the density distribution of both the left and right clavicles to double the number of points used for estimating density distribution.Evaluation of performance in the right lung region shows comparable results in terms of precision, recall, and accuracy.However, the reason for the observed superior performance in the left lung but not the right remains unclear.Future investigations could explore how density distributions might adversely affect neural network models [29], potentially guiding improvements to the ps-KDE method by incorporating additional smoothing and density estimation techniques. 
We would also like to factor in computational efficiency given its potential impact on the integration of algorithms into clinical practice [30,31]. CLAHE operates by dividing the input image into tiles and applying histogram equalization to each, thereby obviating the need to modify all pixel values throughout the image [16]. Conversely, ps-KDE relies on a pre-defined frequency table of pixel values, thus necessitating only the referencing of the corresponding frequency value for each pixel, a computationally trivial step. While CLAHE exhibits slightly faster processing times, at 0.05 (0.01) s versus 0.20 (0.11) s for ps-KDE, such discrepancies in turnaround time for radiologists may be negligible in the context of clinical practice. This implies that ps-KDE, alongside other existing contrast enhancement methods offering rapid, on-demand processing speeds, holds promise for seamless integration into existing imaging systems [32], benefiting both clinicians and patients. Moreover, through the establishment of robust and representative pixel frequency tables, this preprocessing method could potentially mitigate systematic biases related to demographics, disease representation, and data management [33,34]. The incorporation of heatmaps offers invaluable insights into areas of interest and uncertainty during the segmentation process. Notably, we observed a consistent decrease in probability around object edges in the majority of images. This gradual phasing out of probability as the model progresses into negative pixels is ideal, as models exhibiting abrupt switches between high and low confidence levels may lack stability. The visualization of heatmaps also serves to pinpoint regions requiring further investigation. For instance, in CLAHE models, a few misclassifications of the right lung were observed when predicting left lung regions (Fig 7). This may be attributed to image augmentation techniques such as horizontal flips and rotation ranges applied before inputting the images. We hypothesize that, given the small size of our dataset, the spatial distribution of the ground truth significantly influences segmentation outcomes. This suggests that ps-KDE may exhibit greater robustness against substantial image augmentation and small datasets. Future studies could investigate the potential of applying transfer learning to ResNetUnet to mitigate the unintended impacts of augmentation [35,36]. It is worth noting that the predicted lungs still adhere to the clinical expectation that the left lung is narrow and long. Even in cases of misclassification, we can still observe that the model accurately outlines the shape and conforms to the expected characteristics of the right lung. Heatmaps present clinicians with a valuable tool, providing visual assessments of segmentation accuracy and quality. This facilitates interpretation and enables informed clinical decision-making. Limitation Our current dataset contains exclusively PNG images, whereas clinical practices heavily rely on the DICOM format for medical image analysis. While PNG is suitable for research, and imaging information in DICOM can easily be converted to PNG format, PNG lacks the crucial metadata and standardized structure that DICOM offers. This disconnection hinders the model's direct applicability in clinical settings where DICOM's comprehensive patient information and imaging details are essential.
To mitigate this limitation, the model needs further adaptation for the DICOM data format. This involves adjusting the data processing pipeline to handle DICOM images and accounting for metadata intricacies. The model's effectiveness must be re-validated using DICOM data to ensure its reliability in clinical workflows. Addressing this constraint is vital to bridge the gap between research-oriented PNG images and the practical demands of medical professionals who predominantly rely on DICOM for accurate diagnosis and treatment. We also recognize that the size of our dataset is small for a deep learning algorithm, and we trained ResNetUnet for only 50 epochs because of computing resource constraints; higher performance may be achieved with more epochs. In addition, the smoothed histogram takes into account only the pixel distribution of this dataset. An additional limitation of our study is the absence of external validation for our models. From a dataset perspective, it remains uncertain how effectively the smoothed histograms can extend to external radiographs, especially those with low quality and contrast. Moreover, there is potential for another enhanced U-Net architecture [37] to provide further validation regarding the applicability of the ps-KDE technique across various model architectures. Conclusion In conclusion, we significantly improved semantic segmentation of the left lung in chest radiographs using ps-KDE. ps-KDE is easy to implement, adaptable across diverse datasets, and enhances the robustness of segmentation algorithms. The introduction of the ps-KDE preprocessing technique contributes to the available image contrasting methods for segmentation but should be treated with caution and validated further. Fig 7. Prediction of a randomly selected subject. From left to right: the input of the model, the ground truth, the predicted segmentation overlaid on the original X-ray, and the heatmap of the predicted probability. A) CLAHE processed, B) ps-KDE processed. https://doi.org/10.1371/journal.pone.0299623.g007 Table 2. Optimization results. The best combination for each location is shown in the table. Note that for batch size, the actual batch size used in cross-validation and model training was the above batch size × 5. This multiplier was a result of the data augmentation, as the original images and augmented images are loaded at the same time. https://doi.org/10.1371/journal.pone.0299623.t002 Table 3. Model performance after applying preprocessing methods (CLAHE and ps-KDE) evaluated by IoU and Dice scores. IoU and Dice scores are shown as mean (SD). CLAHE: Contrast Limited Adaptive Histogram Equalization; ps-KDE: Pixel-wise substitution by Kernel Density Estimation; IoU: Intersection over Union. *: p < 0.01 in model performance when comparing CLAHE and ps-KDE for each segmentation region pair. https://doi.org/10.1371/journal.pone.0299623.t003
6,063.4
2024-02-17T00:00:00.000
[ "Medicine", "Computer Science" ]
Research on the Supervision Mode of Competitive State-owned Enterprises by State-owned Capital Investment and Operation Companies . In the process of transformation of supervision of competitive state-owned enterprises, the problems that there is a blind spot in the supervision object, the lack of joint forces of the supervisory bodies, the lack of links in the supervision process, and the difficulty in appraising the supervision results have appeared in the supervision of competitive state-owned enterprises. Based on the above issues, this paper designs on the supervision of competitive state-owned enterprises,including coordinating supervision objects through governance supervision and assessment, linking supervisory entities through function coordination and achievement sharing, linking the supervision process through information supervision and platform cooperation and implementing supervision results through risk control and evidence inspection. Introduction In the current reform of the state-owned enterprise supervision system that focuses on "capital management", some problems hinder the effective advancement of the transition of the supervision system. As the authorized operating entity of the SASAC and the investor of state-owned enterprises (SOEs) , how state-owned capital investment and operation companies can solve the regulatory problems and effectively supervise the competitive state-owned enterprises that they hold have become an important research topic. The current research on the supervision of competitive state-owned enterprises is mainly based on the functional role of various supervision methods, and seldom combines the new situations and new problems that have emerged in the current state-owned enterprise supervision reform process; it is mainly based on the supervision of state-owned enterprises by the SASAC, and lack of State-owned capital investment and operation company's research on the supervision model of competitive state-owned enterprises. Qi Zhen et al. (2017) [1] pointed out that in the current environment of gradual transformation, the state-owned sector is controlled by the government on the one hand, and on the other hand is facing competition from the increasingly powerful non-state-owned sector, so it is difficult for a single regulatory system to adapt to the heterogeneity of SOEs. Regulation of commercial enterprises is more difficult and more complicated. The supervision model, assessment mechanism and policy design need to be classified and gradually implemented according to the specific situation of the enterprise [2] . For commercial state-owned enterprises implementing mixed-ownership reforms, whether state-controlled or state-owned enterprises, they should accept the same conditions of shareholder supervision and maintain the highest supervisory power in corporate governance [3] and the proportion of supervisors appointed by non-state-owned shareholders is allowed to be 1/3 to 1/2. Competitive state-owned enterprises can encourage supervisors appointed by non-state-owned shareholders to serve as chairman of the board of supervisors [4] . This article is based on the new situation in the reform of SOEs' supervision and reform which focuses on "managing capital", combined with the functional positioning and development goal of competitive state-owned enterprises, conduct research on the supervision model of state-owned capital investment and operation companies on competitive state-owned enterprises. 
The research is expected to provide a reference for the transformation of the supervision system based on "capital management". Analysis on the Supervision and Management of Competitive State-Ow -ned Enterprises First, there is a blind spot in the supervision object. Mainly reflected in: First of all, insufficient supervision of state-owned share-holding companies, some state-owned share-holding companies were originally subject to the current supervision of the original board of supervisors. However, after the institutional reform of the overseas board of supervisors, the original board of supervisors was revoked and its functions were merged into the Audit Office. It was only supervised afterwards and lead to lacking of supervision; Secondly, the imbalance between decentralization and management of state-owned enterprises, many powers have been delegated to state-owned enterprises, but the supporting management system has not been established. Second, the regulatory bodies lack synergy. Mainly reflected in: First of all, the lack of resonance of supervisory entities, although different supervisory entities supervise state-owned enterprises based on different content and methods, the internal and external supervision of state-owned enterprises is in a decentralized pattern, so it is difficult to form a strong supervisory force; Secondly, the lack of coordination of the working mechanism, it is reflected in the lack of resource coordination among various supervisory bodies, which is prone to problems such as repeated supervision and supervision vacuum. Third, the regulatory process lacks links. Mainly reflected in: First of all, insufficient coverage of state-owned assets, under the hierarchical authorization and supervision system of state-owned assets, it is difficult to achieve full-process and full-coverage supervision of the state-owned enterprise asset management business, and there is a lack of "penetrating" supervision models and supervision methods [4] . Secondly, the lack of supervisory information sharing, different supervisory bodies have different sources of information and lack of channels for information communication with each other. Fourth, the results of supervision are difficult to appraise. It is difficult to identify the violating subject and determining the responsibility for the violation. Many losses in practice are not formed in a short time, there are many links, long time, and personnel turnover. It is difficult to identify the specific responsible person, so collectively take responsibility which due to the unclear rights and responsibilities of the parties. Design of Competitive State-owned Enterprise Supervision Model Under the background of " capital management " , competitive state-owned enterprises focus on maintaining and increasing the value of state-owned assets, with the mission requirements and functional positioning of "market profitability". Therefore, the supervision of competitive state-owned enterprises should be based on the principle of ensuring its operational autonomy and improving the internal efficiency of it. 
According to the problems in the supervision of competitive state-owned enterprises, design the following regulatory model: Coordinate Supervision Objects through Governance Supervision and Assessment This mode includes two aspects: On the one hand, state-owned capital investment and operation companies conduct governance supervision over SOEs based on their shareholding ratio; On the other hand, strengthen the assessment and restraint, as shown in Figure 1. Governance and supervision include:external contingent governance, financial and audit supervision of state-owned holding companies, and supervision of the appointment of directors to state-owned holding companies and shareholding companies. If the company has signals such as negative business matters, major decision errors, and important financial crisis, the external contingent governance mechanism will be activated--one is post-event audit and supervision; the other is the management change of linkage. The financial and audit supervision includes: assigning a chief financial officer, a unified approval system, and controlling major issues. Establish a financial audit and supervision system, and assign financial directors to SOEs invested in to reduce the information asymmetry. Conduct business-period audits of SOEs, the audit content includes SOEs' economic benefits, property rights changes,internal control levels, compliance with fiscal regulations and performance of leaders' economic responsibilities; formulate a unified audit system for the investment departments of SOEs; and at the same time, in order to make decisions and control the major issues ,key links and core issues in the operation of SOEs, the investment and operation company group company can limit the operation boundary, investment direction and bottom line of the state-owned enterprise. For state-owned shareholding companies after the mixed ownership reform,state-owned capital investment and operation companies select directors based on the number of shares they hold, and do not interfere with the production and operation of them. There are measures in the supervision of state-controlled and share-holding companies by sending directors:establish a full-time dispatched director system; establish a docking platform between state-owned capital investment and operation companies and dispatched directors; improve directors' performance reports and the board's annual work report system. In the assessment and restriction of SOEs, first, the value preservation and appreciation rate assessment is carried out for the purpose of expanding and strengthening state-owned capital;secondly, the asset-liability ratio of "one enterprise, one policy" is carried out for the purpose of risk pre-control. Strengthen the asset-liability management of SOEs, strictly manage the asset mortgages, pledges, etc., regularly rate the assets. Clearly define the functions of the supervisory body The internal supervision bodies of competitive state-owned enterprises include the board of directors, the board of supervisors, the employee representative assembly, and internal audit. The external supervision bodies include state-owned capital regulatory agencies, state-owned capital investment and operation companies, external audits, disciplinary inspections, and inspections. Coordinat the functions of supervisory bodies Combine supervisory forces with complementary functions to form coordination and complementarity of internal and external supervisory. 
Make the effectiveness of supervision in the decision-making and execution links, strengthen audit supervision and disciplinary inspection and supervision in the pre-and in-process supervision of SOEs, and avoid the loss of state-owned assets caused by property rights transactions, decision-making errors, and abuse of power in advance. Key aspects of operation (asset disposal, material procurement, etc.) are subject to key supervision. SOEs' internal control, internal audit, and disciplinary inspection and supervision are coordinated to share the results of supervision and strengthen the depth and effect of supervision: internal audit uses the results of internal control and monitoring to carry out special audits for outstanding problems in corporate management or as the focus of daily audits; the problems found in internal audit are the focus of internal control work, and the cause of the problems is further searched from the business process. Discipline inspection and supervision through sharing the results of internal control monitoring and internal audit, carry out performance monitoring or investigation and punishment of violations of regulations and disciplines, such as auditing the economic responsibility of leading cadres record the results, establish management personnel files, and provide important basis for managerial job changes and performance evaluation; internal control, internal audit and disciplinary inspection are carried out in a joint office, which is highly professional, widely involved, especially centralized funds and centralized rights. In high-risk areas, it can be the lead organization to carry out relevant supervision activities. Discipline inspection and supervision and inspection supervision implement joint supervision based on the logic of "discovering problems-tracing clues-stakeholders and other supervisory bodies-situation assessment". Coordination of supervisory entities' behavior Coordinating supervision behaviors through supervision information collection, supervision work consultations,supervision key consultations,and supervision results sharing, form a regulatory synergy. A supervisory committee and an information collection mechanism can be established.Each supervisory entity timely submits supervisory information to other supervisory entities, and supervisory results can be shared to form a closed-loop supervision. Establish a joint meeting system for the supervisory committee, and the supervisory committee regularly convenes joint meetings to notify the progress of supervisory work and supervise the rectification of problems Link the supervision process through information supervision and platform cooperation This mode is embodied in the establishment of a state-owned enterprise supervision vertical and horizontal linkage information platform, the implementation of intelligent full-process dynamic supervision, and penetrating supervision of SOEs, as shown in Figure 3. In the vertical direction, state-owned capital investment and operation companies obtain state-owned enterprise information through the information platform, and there is feedback on the information; in the horizontal direction, it includes the overall links of information integration of regulatory business management and corporate finance, risk monitoring, and information release and so on. Establish a modular and professional information collection 、 analysis and reporting mechanism, strengthen information sharing, and enhance the pertinence and timeliness of supervision. 
Establish an abnormal information early warning system for state-owned assets supervision, when an abnormal situation in public information is discovered, the state-owned capital investment and operation company sends abnormal information reminders to relevant departments, and cooperates with relevant departments to investigate the causes, links and effects of abnormal information. Establishing an efficient reporting method for supervisory information, information technology could be introduced into the supervisory work report of the board of directors and improve the efficiency of supervisory information reporting by the board of directors. Implement supervision results through risk control and evidence inspection This mode is based on the logic of risk prevention, risk monitoring, and risk accountability, as shown in Figure 4. Regarding risk prevention and risk monitoring, the state-owned capital investment and operation company,as an investment enterprise,does not directly participate in the commercial operation of the state-owned enterprise,and has the right to participate in the corporate governance of the invested enterprise. In terms of risk prevention,state-owned capital investment and operation companies conduct comprehensive risk management on SOEs.Construct a comprehensive risk management system based on the strategic objectives, scale, and business system of SOEs, and urge SOEs to adopt a combination of qualitative and quantitative methods to identify, measure, evaluate, control or mitigate the business risks;and make annual reports on risk management.Establish a risk isolation system between state-owned capital investment and operation companies and the state-owned companies, and reasonably isolate business transactions between them. In terms of risk monitoring, the whole process of state-owned enterprise operation is monitored, and the enterprise's risk capacity is determined according to the internal and external environment of it.During the monitoring process, possible risks are identified and evaluated,and risk response measures are established. Improve the methods of investigating responsibility for violations of corporate management Personnel.The four steps of distinguishing violations, investigating the evidence of violations through auditing, improving the reporting efficiency of violations, and setting the time limit for the accountability of violations should be taken to establish a responsibility investigation work system with different levels and up and down connected.In the accountability for violation of operation and investment, a distinction is made between normal operation and operation investment behavior that violates the rules to be held accountable. Use auditing methods as the evidence basis for investigating responsibility for illegal operations and investments. During the audit, focus on theloss of state-owned assets or the issue of idle state-owned assets. At the same time, optimize the reporting mechanism for illegal operation and investment behavior. 
In April 2020, the State-owned Assets Supervision and Administration Commission of the State Council issued the "Notice on Relevant Matters Concerning Strengthening the Reporting of Major Business Risk Events", stating that "For major business risk events discovered by enterprises or reflected by external regulatory agencies and media networks, internal control (risk) management departments should report the relevant situation in writing to the Comprehensive Supervision Bureau of the State-owned Assets Supervision and Administration Commission of the State-owned Assets Supervision and Administration Commission within 2 working days of the occurrence of the risk event; for particularly urgent major business risk events, it should be reported to the SASAC Comprehensive Supervision Bureau by telephone, etc. as soon as possible." In the accountability work, an information-based supervision platform can be used to combine pre-prevention, mid-event control, and post-event accountability to form a complete system of short-term, long-term and lifetime accountability. Conclusion This paper studies the regulatory model of state-owned capital investment and operation companies on competitive state-owned enterprises, and finds that the current state-owned capital investment and operation companies ' supervision of competitive state-owned enterprises has some unresolved problems.The specific manifestations are the absence of regulatory objects,regulatory bodies lack synergy, the supervision process lacks links and the supervision results are difficult to appraise.Combining with the functional positioning of competitive state-owned enterprises which are focusing on maintaining and increasing value,with the principle of ensuring the operational autonomy of competitive state-owned enterprises and improving the internal efficiency of the enterprise,four regulatory models have been designed , including coordinate supervision objects through governance supervision and assessment,link supervisory entities through function coordination and achievement sharing,link the supervision process through information supervision and platform cooperation and implement supervision results through risk control and evidence inspection. In the follow-up research, the relevant conclusions of this article can be verified through empirical methods; the factors in the supervision model can also be quantitatively measured, and then the influence of different supervision modes on the operating performance and supervision efficiency of competitive state-owned enterprises can be studied.
3,635.8
2021-01-01T00:00:00.000
[ "Business", "Economics" ]
Prenylation Defects and Oxidative Stress Trigger the Main Consequences of Neuroinflammation Linked to Mevalonate Pathway Deregulation The cholesterol biosynthesis represents a crucial metabolic pathway for cellular homeostasis. The end products of this pathway are sterols, such as cholesterol, which are essential components of cell membranes, precursors of steroid hormones, bile acids, and other molecules such as ubiquinone. Furthermore, some intermediates of this metabolic system perform biological activity in specific cellular compartments, such as isoprenoid molecules that can modulate different signal proteins through the prenylation process. The defects of prenylation represent one of the main causes that promote the activation of inflammation. In particular, this mechanism, in association with oxidative stress, induces a dysfunction of the mitochondrial activity. The purpose of this review is to describe the pleiotropic role of prenylation in neuroinflammation and to highlight the consequence of the defects of prenylation. Introduction The biosynthetic pathway of mevalonic acid or mevalonate is essential in both eukaryotic and prokaryotic organisms because it leads to the formation of organic compounds of enormous physiological importance, involved in many cellular processes. It is in fact an anabolic pathway that, starting from acetyl-CoA, leads to the synthesis of a family of molecules both of steroid nature, including cholesterol, and of non-steroidal nature, the isoprenoids or terpenes. Isoprenoids constitute a heterogeneous class of lipophilic molecules, being the widest family of natural molecules. They have both functional and structural properties in diverse biological processes, which range from cell membranes' organization, gene expression regulation, post-translational modification of proteins, control of signal transduction, involvement in photosynthesis and electron transport chain, synthesis of cholesterol and its derivatives, pheromones, reproductive hormones in mammals, vitamins, and even defense against infections in plants [1,2]. Long-chain isoprenoids include ubiquinone and heme A, important for mitochondrial electron transport; the dolichol, necessary for the glycosylation of proteins; the isopentenyl group of t-RNAs; and carotenoids, which are part of the photosynthetic system of phototrophic organisms. Short-chain isoprenoids are farnesyl pyrophosphate (FPP) and geranylgeranyl pyrophosphate (GGPP), which mediate one of the most important post-translational modifications of proteins, namely prenylation. This type of ubiquitous and irreversible modification, also known as lipidation, involves the post-translational addition of hydrophobic isoprenoids to proteins and represents a crucial step as it ensures correct localization and functionality of numerous proteins essential for cellular activity. Among the prenylated proteins, there are the small proteins belonging to the family of GTP-ases, such as Ras, Rac, and Rho, as well as the nuclear laminae [3]. This modification is essential for numerous biological functions, such as cell targeting, the processes of cellular life and death (growth, differentiation, movement, autophagy), the localization of proteins in the anchoring phase to the membrane, and the regulation of their activity (protein-protein/protein-membrane interactions). 
As cells need a constant supply of isoprenoid compounds, they must finely tune the mevalonate pathway while avoiding excessive build-up of potentially toxic molecules, such as cholesterol itself [4]. Through numerous experimental and clinical studies, it seems that isoprenoids, essential for cell growth and differentiation, may be potential therapeutic targets in many research fields, including tumors, autoimmune diseases, atherosclerosis, and Alzheimer's disease [5]. Moreover, an altered flux through the mevalonate pathway is involved in the pathophysiology of the Hyperimmunoglobulin D syndrome (HIDS) and Mevalonic Aciduria (MA), autoinflammatory disorders together known as mevalonate kinase deficiency (MKD) disorders, which are due to a hereditary deficiency of the Mevalonate Kinase (MVK), one of the first enzymes of the mevalonate pathway [6]. The Mevalonate Pathway The biosynthesis of the different products of the mevalonate pathway begins in the cytosol with the condensation, by thiolase, of two molecules of acetyl-CoA into acetoacetyl-CoA, which reacts with another acetyl-CoA molecule to form, by HMG-CoA synthase (HMGS), 3-hydroxy-3-methylglutaryl-CoA, or HMG-CoA. HMG-CoA is then reduced to mevalonic acid thanks to the action of HMG-CoA reductase (HMGR), an oxidoreductase localized in the smooth endoplasmic reticulum (ER) that uses NADPH as a cofactor (Figure 1). Subsequently, the synthesis of mevalonate-5-phosphate by MVK occurs in the cytosol, followed by decarboxylation and transformation into a compound with five carbon atoms (C5), ∆3-isopentenyl-5-pyrophosphate (IPP), which is the basic isoprene unit for the synthesis of all other isoprenoids, such as geranyl pyrophosphate (GPP, C10), FPP (C15), and GGPP (C20), through a series of head-to-tail condensations of isoprene units catalyzed by the prenyltransferases [4]. FPP represents the link between the synthetic pathways of non-sterols/isoprenoids and sterols. In the isoprenoid pathway, the addition of another unit of IPP to FPP leads to the formation of GGPP. The elongation process, by incorporating further portions of IPP, generates longer isoprenoids, which have a key biological relevance, such as dolichol (essential for protein N-glycosylation), ubiquinone (Coenzyme Q10), and heme A [7][8][9]. In particular, Coenzyme Q10 is located in the membranes of the endoplasmic reticulum, peroxisomes and lysosomes, in the vesicles and within the membrane of the mitochondria, where it plays an important role in the electron transport chain. The enzyme farnesyl pyrophosphate synthase (FPPS) instead catalyzes the sequential condensation of IPP with dimethylallyl pyrophosphate (DMAPP) and then with geranyl pyrophosphate to form the E isomer of FPP [10]. In the sterol branch of the metabolic pathway, in the ER, the condensation of two FPP moieties, catalyzed by the enzyme squalene synthase (SQS), produces a molecule of squalene (a molecule with 30 carbon atoms). The last stages of cholesterol biosynthesis involve the cyclization of squalene to lanosterol, which already contains the four characteristic rings of cholesterol. From lanosterol, through a series of other reactions (demethylations and isomerizations), first desmosterol and then cholesterol (with 27 carbon atoms) are formed [11] (Figure 1).
Critical Points to Regulate the Metabolic Pathway of Cholesterol In mammalian cells, most of the mevalonate is converted into cholesterol, while the remaining mevalonate is transformed into isoprenoids; therefore, the regulation of the whole pathway is fundamental. Under physiological conditions, the levels of cholesterol and its main metabolites depend on the amount of cholesterol introduced with the diet in a homeostatic balance between processes of synthesis, absorption, transport, catabolism, and excretion. Alterations in cholesterol homeostasis, due to genetic and/or environmental factors, are thus involved in various diseases such as obesity [12], atherogenesis and cardiovascular disorders [13,14], gallstones, and some inherited neuro-metabolic diseases. In humans, the brain is the organ with the highest percentage of cholesterol in the whole organism, most of which is located in the myelin sheath. The presence of the blood-brain barrier prevents the exchange of lipoproteins and free cholesterol between plasma and cerebrospinal fluid, so that brain tissue regulates cholesterol homeostasis autonomously [15]. Consequently, cerebral cholesterol constitutes a cholesterol pool independently regulated with respect to those present in all other parts of the body [16]. The regulation of cholesterol levels, and consequently of all of the other products that derive from this biosynthetic pathway, results from the control of the HMGR enzyme, the most finely regulated enzyme of the pathway and the rate-limiting one. When the HMGR sterol-sensing domain (SSD) perceives a high cholesterol content inside the cell, its conformation changes, causing enzyme proteolysis. This mechanism is irreversible, and it represents the main system for regulating the process [17]. It has been reported that several polymorphisms in the HMGR gene determine a failure of this mechanism. Such polymorphisms have been associated with statin efficacy, obesity, lipid metabolism, Parkinson's disease, cardiovascular adverse events, and other pathologies [18,19]. Several ER proteins are able to sense cholesterol levels, including HMGR, sterol regulatory-element-binding protein (SREBP), and squalene epoxidase (SQLE). A recent paper pointed to the effect of mitochondrial dysfunction on the mevalonate pathway, through the reduction of pathway intermediates and downregulation of pathway gene expression in an SREBP2-dependent mechanism [20]. Moreover, MVK plays an essential regulatory role in the pathway; indeed, in the two MKD pathologies, the loss of its activity causes both the accumulation of mevalonic acid and, consequently, a deficiency of the isoprenoid products downstream, demonstrating the peculiar role of this enzyme throughout the pathway [21]. The MVK enzyme is regulated at the transcriptional level in the same way as HMGR; in fact, the MVK promoter contains a sterol regulatory element (SRE) capable of inducing gene transcription following a deficit of the downstream products of the pathway (positive feedback) through SREBP2 [22]. In addition, the MVK enzyme is also subject to post-translational regulation with negative feedback by the isoprenoids GPP, FPP, and GGPP. This inhibition is competitive and occurs at the ATP-binding site of the enzyme [23]. It has been recently observed that NF-E2-related factor 3 (NRF3), a transcription factor that binds the ER and is involved in lipid metabolism, upregulates the expression of GGPP synthase in an SREBP2-dependent manner [24].
The Prenylation Process Prenylation, catalyzed by a prenyltransferase, involves the addition of FPP (15 carbon atoms) or GGPP (20 carbon atoms) through the formation of a covalent thioether bond with the thiol of a cysteine residue at the C-terminal end of target proteins [35]. The bound lipid is necessary for the correct functioning of the protein itself, as it is responsible for both membrane attachment and peculiar protein-protein interactions. In all tissues, there are three intracellular cytosolic prenyltransferases: farnesyltransferase (FTase), geranylgeranyltransferase-I, and geranylgeranyltransferase-II (GGTase-I and II). FTase and GGTase-I are metallo-enzymes that contain a zinc atom, with 30% identity, especially in the central portion. It is unknown what the real function of zinc is in the process; perhaps it participates in catalysis, making the cysteine of the target protein more nucleophilic, or perhaps it has only a structural role [36]. FTase and GGTase-I recognize a sequence of four amino acids, the CAAX motif, in which C is a cysteine residue, A is usually an aliphatic residue, and X is specific for each enzyme. Indeed, FTase has a preference for Cys, Ala, Gln, Met, or Ser as the X residue, while GGTase-I prefers Leu, Ile, or Phe [37,38]. Genetic screening in yeast has highlighted a longer sequence target for FTase, that is, C(x)3X, expanding the list of possible human proteins that contain this motif and could thus be farnesylated [39]. GGTase-II, on the other hand, recognizes C-terminal motifs such as CC, CXC, CCX, CCXX, and CCXXX, and generally transfers GGPP to both the Cys amino acids in such sequences [40]. Prenylation increases hydrophobicity in the C-terminal domain and facilitates binding to the membrane of the ER, where the -AAX motif is cut [41]. Farnesylated and geranylgeranylated proteins are in fact usually subject to a proteolytic step, catalyzed by proteases, for example, the CAAX endopeptidase 1 (RCE1), which removes the -AAX residues downstream of the prenylated Cys [42]. The modified cysteine is then methylated by a methyl transferase, such as isoprenylcysteine carboxymethyl transferase (ICMT), to produce a protein containing a C-terminal farnesyl cysteine methyl ester [43]. The farnesyl group confers a weak affinity for the membrane, so other modifications are necessary for the correct localization of the proteins [44]. For example, several proteins are subject to further lipid modifications following prenylation, such as palmitoylation (transfer of palmitic acid onto a Cys residue with thioester bond formation) [45], useful for trafficking control and anchoring to the membrane through electrostatic interactions with the anionic phospholipids positioned on the inner side of the membrane [46]. The typical targets of FTase and GGTase-I are members of the Ras superfamily, which includes a wide variety of proteins, such as Ras, Rho, and Rab, which show great functional diversification in the context of a preserved structural framework and a characteristic GTP-binding domain (Figure 2) [47]. GGTase-II, on the other hand, has a rigorous specificity for the protein substrate compared with the other two prenyltransferases; in fact, it binds with great affinity to the C-terminal residues of a complex, which also includes the Rab protein, its effective substrate [48]. All small GTPases that belong to these families must bind to membranes to activate the downstream signaling pathway, and this is possible through prenylation (or other lipid modifications) [49].
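To make the recognition rules above concrete, the following minimal Python sketch classifies a protein's C-terminal residues according to the simplified motif preferences listed in this section. The residue sets, the loose definition of "aliphatic", and the example tails are illustrative assumptions, not a validated prediction tool.

```python
# Illustrative classifier based only on the simplified motif rules described above:
# FTase and GGTase-I recognize a C-terminal CAAX box (C = Cys, A = aliphatic,
# X = enzyme-specific), while GGTase-II recognizes Rab-type C-termini such as
# CC, CXC, CCX, CCXX, and CCXXX.

ALIPHATIC = set("AVLIMG")   # loose definition of "aliphatic" for the A positions (assumption)
FTASE_X = set("CAQMS")      # X residues preferred by FTase (Cys, Ala, Gln, Met, Ser)
GGTASE1_X = set("LIF")      # X residues preferred by GGTase-I (Leu, Ile, Phe)
RAB_MOTIFS = ("CC", "CXC", "CCX", "CCXX", "CCXXX")  # GGTase-II (Rab) C-terminal motifs

def prenylation_motif(sequence: str) -> str:
    """Classify the C-terminus of a protein sequence by the simplified rules above."""
    tail = sequence.upper()

    # Rab-type (GGTase-II) motifs: cysteines must sit exactly where the motif places them.
    for motif in RAB_MOTIFS:
        seg = tail[-len(motif):]
        if len(seg) == len(motif) and all((s == "C") == (m == "C") for m, s in zip(motif, seg)):
            return f"candidate GGTase-II substrate ({motif}-type motif)"

    # CAAX box: Cys at -4, aliphatic residues at -3 and -2, enzyme-specific X at -1.
    if len(tail) >= 4 and tail[-4] == "C" and tail[-3] in ALIPHATIC and tail[-2] in ALIPHATIC:
        x = tail[-1]
        if x in FTASE_X:
            return "candidate FTase substrate (CAAX box, farnesylation)"
        if x in GGTASE1_X:
            return "candidate GGTase-I substrate (CAAX box, geranylgeranylation)"
        return "CAAX-like motif with an ambiguous X residue"
    return "no canonical prenylation motif detected"

# Toy example tails (not full real sequences): the first ends in the CAAX box CVLS
# (as in HRAS, a farnesylation substrate), the second in a made-up Rab-like CXC tail.
print(prenylation_motif("MTEYKLVVVGAGGVGKSCVLS"))  # -> candidate FTase substrate
print(prenylation_motif("AAKESGGGCSC"))            # -> candidate GGTase-II substrate
```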
It has been recently observed in Caenorhabditis elegans that intracellular lipid homeostasis depends on the sequestration of the nuclear hormone receptor NHR-49 into endosomes through a specific interaction with geranylgeranylated Rab11.1. Lipid depletion, and thus a reduced flux through the mevalonate pathway, reduces Rab11.1 geranylgeranylation and induces NHR-49 translocation to the nucleus and the activation of a transcription program that leads to increased nutrient absorption [50]. Consequences Caused by Prenylation Defects To date, it is known that defects of key enzymes of the mevalonate pathway, and the resulting loss of prenylation, are the basis of the pathogenesis of multiple diseases, and the two enzymes most involved in these mechanisms are FPPS and MVK [51]. FPPS is a key enzyme in regulating the flow of carbon atoms through the mevalonate pathway and is required, among other things, for the prenylation of proteins involved in cell cycle regulation [52,53]. Among the effects of reduced FPPS activity, there is the lack of prenylation of lamin B, a protein involved in maintaining the integrity of the nuclear membrane, and the mislocalization of this protein to the cytoplasm. Experimental evidence suggests that G0/G1 cell cycle phase arrest is likely induced by the reduction of isoprenoid derivatives linked to the nuclear membrane protein [54]. In vitro and in vivo studies indicate that inhibition of the mevalonate pathway shows effects on the growth and progression of prostate cancer (PC) [55]. The expression of FPPS is, in fact, increased in patients with PC, and nitrogen-containing bisphosphonates, which inhibit FPPS, are the treatment of choice for bone metastases of this carcinoma. In vitro studies on PC cells have found that bisphosphonates affect tumor invasion and angiogenesis, and that zoledronic acid, a drug belonging to this group, inhibits the survival and proliferation of cancer cells with effects that appear to be the result of a lack of prenylation of small GTPases [56]. GTPases such as Ras, among the most frequently mutated oncoproteins in human tumors, also need to be prenylated and localized on the inner surface of the cell membrane so that cell proliferation pathways, such as PI3K/Akt and Raf/Mek/ERK, can be activated [57]. In particular, in the central nervous system (CNS), Ras farnesylation plays a crucial role in regulating synaptic plasticity and determining synapse identity, while Rho GTPases carry out neuroprotective activity [58,59]. Impaired activity of MVK leads to a reduction in the production of isoprenoid molecules and to defective protein prenylation, with consequent cytosolic accumulation of nonprenylated proteins [60]. It has been proposed that the excessive production of IL1β observed in patients with MKD, suffering from the congenital deficiency of the enzymatic activity of MVK [61], could be caused by the loss of protein prenylation, in particular when GGPP is missing; this event favors an overactivation of the inflammasome, thus triggering the systemic inflammatory peaks characteristic of the pathology [62,63]. In in vitro experiments, MKD models were obtained by treating cells with statins (HMGR inhibitors), bisphosphonates (FPPS inhibitors), or specific GGTase inhibitors to mimic the block of protein prenylation [64].
Recently, studies conducted by Skinner et al. have clearly shown that the loss of prenylation of some GTPases, such as Rac1 or RhoA, leads to the activation of the inflammasome and thus of caspase-1, with increased production of IL1β; this has been observed both in human monocytic cells treated pharmacologically with statins and directly in cells of patients suffering from MKD, upon stimulation with LPS [65]. Other studies have shown that reduced RhoA prenylation may be the basis for the excessive production of IL1β observed in MKD [66]. In cases of isoprenoid deficiency, there is increased RhoA activity, which leads to a further increase in the gene expression of pro-IL1β, as well as the activation of Rac1, which induces pro-caspase-1 in the inflammasome [67]. Similarly, the lack of isoprenoids also impairs mitochondrial function and stability, as well as autophagic clearance of damaged mitochondria, further promoting hypersecretion of IL1β [67]. Similar mitochondrial disorders and pro-inflammatory cell death have also been observed in statin-treated neuronal cells, suggesting that these events may contribute to the neurological damage observed in patients severely affected by MA [67]. The results obtained from these studies are especially important from the clinical point of view, as they could help to overcome the deficit associated with the mevalonate pathway, restoring the normal prenylation of proteins that play a fundamental role in the activation of inflammation. Neuroinflammation, Oxidative Stress, and Fever as a Consequence of Altered Mevalonate Pathway Flux It is well known that the process of inflammation is mediated by the cells of the immune system and by specific chemical factors such as pro-inflammatory molecules; once the damaging agent is recognized, leukocytes and proteins are recruited through chemical mediators from the bloodstream to the damaged site, where, once activated, they intervene in different ways [68]. Initially, monocytes/macrophages are the first cell population to release pro-inflammatory cytokines and chemokines and to carry out phagocytosis [69]. Neuroinflammation plays a fundamental role in the CNS, exerting both potentially beneficial and harmful effects on nervous tissue: a mild and rapid inflammatory state has a neuroprotective action, while the presence of a chronic inflammatory process could lead to negative effects [70]. As in the case of classical inflammation, at the nervous system level, it is also possible to make a distinction between acute neuroinflammation, which is basically a defensive response of the body to a harmful insult, resulting in repair of the damaged site, and chronic neuroinflammation, characterized by persistent damaging stimuli, which can result in neurodegeneration [71]. The neuroimmune system plays a particularly important role because it is involved in normal functioning, development, and aging, and intervenes in the case of CNS lesions. The homeostasis of this anatomical area is based on the proper functioning of the blood-brain barrier and the presence of a large variety of cells: neurons, astrocytes, oligodendrocytes, pericytes, and microglia cells interact with each other, and their activity is essential for a multitude of brain functions [72,73]. Microglia cells represent 5-12% of all cells present in the CNS and serve as the first line of defense of the CNS [70,74].
The essential roles attributed to microglia cells are the sentinel function, that is, the ability to constantly detect changes in their environment; the cleaning function, which promotes neuronal well-being; and the aforementioned defense function, which provides neuroprotection [75]. Microglia cells are involved in maintaining CNS homeostasis; controlling synaptic density, connectivity, and plasticity; eliminating myelin debris and apoptotic cells; and affecting sprouting, migration, anastomosis, and the expansion of CNS vascularization [76]. In addition, these cells, like all the other macrophages present in the body, perform phagocytosis, activate cytotoxicity mechanisms, and contribute to the inflammatory response through the production of signaling molecules [77]. However, prolonged activation of microglia involves the acquisition of a harmful phenotype, with the release of inflammatory mediators that promote protein aggregation and neuronal damage [78]. An imbalance of these microglial functions may trigger the onset or exacerbation of neurodegeneration, a severe and debilitating neuroinflammatory process that may occur as a result of specific and persistent stimuli, with progressive degeneration and death of neurons. Depending on the type of inflammatory response, microglia cells can thus also damage and kill neurons, resulting in the psychomotor impairment that characterizes the phenotype of neurodegenerative diseases [79]. Of note, MKD syndromes are characterized, among other clinical features, by recurrent episodes of fever, a clinical sign common to autoinflammatory diseases, together with other inflammatory symptoms and, especially in the most severe forms, neurological involvement (mental and psychomotor retardation, progressive cerebellar ataxia, visual impairment, epilepsy) [6]. Fever represents an adaptive, temporary, and reversible reaction, implemented systemically by the body in response to an inflammatory stimulus that can be caused by substances called pyrogens, either of the exogenous kind, such as viruses, bacterial agents, and their products, or of the endogenous kind, such as various cytokines and pro-inflammatory molecules [80]. From a physiopathological point of view, fever is the result of the action of prostaglandin E2 (PGE2), a metabolite of arachidonic acid, which acts on the thermoregulatory center of the hypothalamus [81]. As a result of the phlogistic insult, pyrogenic cytokines, such as IL1β, IL6, and TNFα, are produced by macrophages; they act indirectly on the thermoregulatory neurons of the hypothalamus, because they stimulate the endothelial cells of the hypothalamic vessels to produce PGE2, which in turn acts on the neurons. Finally, the concentration of cyclic AMP (cAMP) at the hypothalamic level increases and the body temperature rises above the threshold [82]. In general, oxidative stress is a condition in which there is an imbalance between the production of reactive oxygen species (ROS) and the action of antioxidant defenses [83]. Overproduction of ROS leads to progressive damage to cellular molecules, such as DNA, and to a mitochondrial dysfunction that in turn generates a further increase in ROS production, compromising cell integrity and viability.
Brain cells, in particular, are very sensitive to the effects of oxidative stress and, in such conditions, microglia and astrocytes are stimulated to induce inflammatory mediators such as iNOS and cyclooxygenase 2 (COX-2), causing a neuroinflammatory response [84][85][86]. In particular, oxidative stress in neuroinflammation is a process characterized by the activation of the glia, which underlies a continuous cycle of inflammatory events with the release of cytokines and other neurotoxic mediators [84]. To understand the role of oxidative stress in the biogenesis of neuroinflammation, it should be considered that inflammation at the molecular level involves the inflammasomes, multi-protein complexes that include NLRP1, NLRP3, NLRP6, and NLRC4. These proteins are part of the superfamily of cytoplasmic receptors called NOD-like receptors (NLRs) and are activated in the presence of pathogen-associated molecular patterns (PAMPs) or damage-associated molecular patterns (DAMPs) arising from stress or cell damage [87,88]. One of the best known components is the NLRP3 inflammasome, which is able to recruit and activate the pro-inflammatory caspase-1, which belongs to a family of proteases involved in programmed cell death. Activated caspase-1, in turn, allows the activation of three pro-inflammatory cytokines: IL-1β, IL-18, and IL-33. The NLRP3 inflammasome also induces the activation of nuclear factor kB (NF-kB), the main orchestrator of gene transcription during the inflammatory process, the resolution phase of which occurs through a particular form of programmed cell death called pyroptosis [89]. The delicate balance between these two forms of programmed death (apoptosis and pyroptosis) is fundamental in sustaining inflammation, both systemic and nervous, through the caspase-9-dependent pathway, further confirming the mitochondrial involvement in these processes [90]. In this regard, several literature reports have shown that impaired mitochondrial function is associated with the release of ROS or nitric oxide (NO), which in turn determines the activation of the inflammasome [91][92][93]. Finally, in the pathogenesis of neurodegenerative diseases, the NLRP3 inflammasome plays a crucial role, supported by literature data indicating that it is expressed both in cells of the immune system and in the CNS [94,95]. Mitochondrial damage represents a pivotal event of apoptosis in response to various conditions of intracellular stress (DNA damage, cytotoxic damage, oxidative stress, and infections). These stimuli act by inhibiting or activating members of the Bcl-2 family, such as Bak, Bax, Bad, Bcl-xl, and Bim [96]. These pro-apoptotic proteins play a fundamental role because, in the presence of overproduction of ROS, they induce the formation of channels in the mitochondria, as reported in Coenzyme Q10 deficiency [97,98]. Recent landmark works have demonstrated that both a limited and a permanently increased flux through the mevalonate pathway trigger cellular alarms and lead to distinct inflammatory and immune responses [99,100]. Increased levels of the isoprenoids FPP and GGPP have been detected in hyperglycemia, in which RhoC is induced in the liver by proinflammatory cytokines, and in the brains of male Alzheimer's disease patients [101,102]. These sex-dependent alterations in the mevalonate flux have been reported in both the liver and brain, where marked differences in regions involved in memory and learning functions could be at the basis of clinically relevant differences among males and females in both neurodevelopmental and neurodegenerative diseases [103].
Interestingly, FPP has been recently reported to act as a danger signal in the brain, inducing neuronal death in a mouse stroke model through the activation of transient receptor potential melastatin 2 [104]. In diseases associated with prenylation defects such as MKD, the typical neurodegeneration has been reported to be linked both to caspase-9/3-dependent apoptosis, triggered by mitochondrial damage, and to pyroptosis mediated by caspase-1, which in turn activates cytokines and pro-inflammatory chemokines, playing a crucial role in neuroinflammatory mechanisms [90,105,106]. The overproduction of IL-1β in MKD syndromes, linked to neuroinflammation, fever, and oxidative stress, is thus considered a causal factor and is the reason anti-IL-1 therapeutic approaches (anakinra, canakinumab) have been approved for MKD treatment and have been reported to be at least partially effective in some patients [25,61,107,108]. Coenzyme Q10: The Fine Regulation of Its Antioxidant Properties Coenzyme Q10 (Coq10), also improperly called vitamin Q, is a lipid-soluble molecule with powerful antioxidant properties, identified for the first time only in 1957, and produced naturally by the human organism, in which it has a ubiquitous distribution, that is, it is present in all cells [109]. Coenzyme Q10 is located in particular in cell membranes and mitochondria, highly differentiated structures present in the cytoplasm of plant and animal cells with aerobic metabolism. It plays a decisive role at the level of the mitochondrial electron transport chain, allowing the production of ATP, and thus of energy; without Coq10, the chain would be interrupted and ATP production prevented. As mitochondria are present in greater numbers in tissues characterized by a particularly active oxidative metabolism, such as the heart, brain, liver, pancreas, skeletal muscles, and brown adipose tissue, the role of Coq10 is particularly crucial in these anatomical districts. Moreover, Coq10 protects LDL (low-density lipoprotein), sometimes called "bad" cholesterol, from oxidation, and oxidized LDL is particularly harmful because it triggers inflammatory processes in the blood vessels, contributing to the formation of atherosclerotic plaques [110][111][112]. Thanks to these properties, supplements of Coenzyme Q10, used as a nutraceutical, are proposed in cases of deficiency or from a prevention perspective in heart disease, hypertension, neurodegenerative pathologies, cellular aging, and photo-aging. In addition, Coq10 supplementation has also been proposed as a support to drug therapies and as protection from oxidative stress during intense exercise, for the reduction of fatigue and the improvement of sports performance. Coq10 supplementation is often combined with pharmacological treatment with statins. Statins are in fact generally very well tolerated, but they can induce muscle toxicity with various clinical manifestations; the main reason is that, as cholesterol-lowering agents acting on the mevalonate pathway, statins also reduce the physiological level of Coq10. Statin myopathy includes muscle weakness or pain (myalgia), hypersensitivity and muscle stiffness, cramps, and arthralgia, and it is detected by measuring plasma creatine kinase levels [113,114].
The prevalence of this complication in statin-treated patients varies between 7% and 29%; this muscle toxicity results from an accumulation of statin in myocytes and can be favored by defects in statin metabolism, as well as by muscle factors such as mitochondrial damage and ROS production. Coq10 has been shown to be very useful in counteracting these effects, as it has a myo-protective action and promotes muscle well-being. In addition, it helps to reduce the sense of fatigue and is essential for maintaining good physical efficiency and proper cellular metabolism [115,116]. The symptomatology resulting from this drug-induced reduction of Coq10 synthesis is a transient phenomenon that must be distinguished from the rare genetic neurometabolic disease referred to as primary Coq10 deficiency (OMIM #607426, #614652). It is a disease with autosomal recessive transmission, and the damage caused by this condition can affect all organs; however, depending on the form of the pathology, the organs most affected are the kidneys, the cerebellum, and the skeletal and heart muscles [117]. When the kidneys are involved, nephrotic syndrome is established, which can lead to renal failure [118,119]. If the cerebellum is mainly affected, the disease manifests itself with difficulty in walking and coordinating movements or convulsions [120,121], while the third form affects the muscles and manifests itself with muscle weakness [117,122]. Typically, symptoms appear during childhood, but may also occur later. To date, the best pharmacological treatment for this pathology is represented by the oral administration of Coq10. The study of the wide-ranging effects of this deficit has made it possible to understand the antioxidant and protective role played by Coq10. Conclusions The manipulation of the metabolic pathway of cholesterol by inhibitors is aimed at regulating the synthesis of the final product. The blocking of HMG-CoA reductase and FPPS activity by inhibitors acts specifically at different levels of the pathway to counteract the overproduction of cholesterol. The mechanisms implemented by these compounds, which are the basis of the pharmacological active principles commonly used in the treatment of hypercholesterolemia, such as statins and bisphosphonates, have highlighted signal transduction pathways triggered by the block of the metabolic pathway. One of the most important regulatory mechanisms is undoubtedly the reduced prenylation of GTP-binding regulatory proteins through the blocking of farnesyl pyrophosphate production. The main substrates of post-translational prenylation modifications are represented by G proteins such as Rho and Rac, which function as molecular switches and, through the transduction of extracellular signals, act on cell survival, growth, and programmed cell death. These inhibitors also act at the endothelial level through the upregulation of nitric oxide synthase, and this system is certainly a key mechanism for counteracting the formation of atherosclerotic plaque. Prenylation defects are considered responsible for the inflammatory process owing to the deficiency of metabolic pathway intermediates such as isoprenoids. This condition underlies the pathogenesis of rare pediatric diseases such as MKD, whose most severe form is accompanied by a significant state of neuroinflammation and manifests clinically as psychomotor delay.
The neuroinflammatory response is characterized by a series of changes that mainly involve the role of microglial cells in the maintenance of cerebral homeostasis. An imbalance at this level involves activation, by specific brain mediators, of microglia, which are in turn responsible for amplifying and maintaining the inflammatory state. Moreover, oxidative stress has been shown to be one of the main causes of mitochondrial dysfunction, resulting in alteration of ATP production, as in Coq10 deficiency. The effects of prenylation deregulation and oxidative stress converge to determine specific morphological changes, especially in the mitochondrial compartment, and further elucidation of the molecular mechanisms underlying these modifications is of considerable scientific interest, as such knowledge could help identify innovative therapeutic targets across multiple diseases. Conflicts of Interest: The authors declare no conflict of interest.
7,147
2022-07-25T00:00:00.000
[ "Biology", "Chemistry" ]
Parasite proteostasis and artemisinin resistance The continued emergence and spread of resistance to artemisinins, the cornerstone of first-line antimalarials, threatens significant gains made toward malaria elimination. Mutations in Kelch13 have been proposed to mediate artemisinin resistance by either reducing artemisinin activation via reduced parasite hemoglobin digestion or by enhancing the parasite stress response. Here, we explored the involvement of the parasite unfolded protein response (UPR) and ubiquitin proteasome system (UPS), vital to maintaining parasite proteostasis, in the context of artemisinin resistance. Our data show that perturbing parasite proteostasis kills parasites, early parasite UPR signaling dictates DHA survival outcomes, and DHA susceptibility correlates with impairment of proteasome-mediated protein degradation. These data provide compelling evidence toward targeting the UPR and UPS to overcome existing artemisinin resistance. The exact role of Kelch13 in artemisinin resistance is an ongoing area of study. Two non-mutually exclusive hypotheses have been put forward to explain Kelch13-mediated artemisinin resistance: (1) decreased artemisinin activation via reduced hemoglobin digestion, and (2) enhanced stress response to counter artemisinin-mediated protein and lipid damage. Knock-sideways studies point to a role for Kelch13 in endocytosis of hemoglobin, as parasites in which > 60% of Kelch13 is mislocalized 32 display both reduced uptake of fluorescent dextran 33 and lower abundance of hemoglobin-derived peptides 34. Omics studies point to a role for Kelch13 in the parasite stress response. Transcriptomic analyses of artemisinin-resistant clinical isolates found upregulation of genes involved in protein folding, protein repair, and proteasome subunits 35. In addition, the lab-adapted clinical isolate Cam3.II Kelch13 R539T and its isogenic counterparts Cam3.II Kelch13 WT and Cam3.II Kelch13 C580Y were examined by transcriptomics and proteomics, revealing that artemisinin-resistant (Kelch13 mutant) parasites express higher levels of genes involved in the ubiquitin proteasome system (UPS), redox, and intracellular vesicles. We previously reported that mutations in proteasome subunits increase parasite sensitivity to dihydroartemisinin (DHA), the active metabolite of all clinical artemisinins, and to the related peroxide OZ439 (also known as artefenomel) 55. This increase in sensitivity was not only observed at the early ring stage where artemisinin resistance is classically observed, but also throughout the asexual life cycle 55. These data suggest that the proteasome is critical for parasites to survive artemisinins and acts in a manner distinct from Kelch13. We and others have shown that proteasome inhibitors synergize with DHA to potently kill artemisinin-resistant P. falciparum in vitro and in vivo 56,57. Aside from DHA, proteasome inhibitors also synergized with distinct antimalarial compounds such as the peroxide OZ439, the deubiquitinase inhibitor b-AP15, and the redox inhibitor methylene blue, which are structurally diverse and possess distinct antimalarial modes of action 57. Given the crucial role of proteasomes in restoring proteostasis, we wondered whether the observed synergy was due to additional perturbation of proteostasis mechanisms by these synergistic compounds. To interrogate the role of proteostasis mechanisms in parasite artemisinin response and resistance, we examined UPR kinetics and proteasome activity in Kelch13 mutants and proteasome mutants. Our data show that Kelch13 WT and Kelch13 mutant parasites display distinct stage-dependent UPR kinetics.
Importantly, early responses of hyperactivation and a concomitant unresolved UPR dictate eventual death in artemisinin-sensitive Kelch13 WT parasites. Finally, we show that a well-functioning proteasome promotes parasite survival to artemisinin, independent of the canonical K13-mediated resistance pathway. Antimalarial compounds synergistic with proteasome inhibitors disrupt proteostasis We previously showed that the P. falciparum-specific proteasome inhibitors WLL and WLW synergize with four of sixteen candidate and clinically used antimalarials 58. The four synergistic compounds (DHA, OZ439, b-AP15, and methylene blue) are structurally diverse and have distinct modes of action. DHA and OZ439 non-specifically alkylate nearby proteins 5,6, b-AP15 inhibits a proteasome-associated deubiquitinase 59, and methylene blue interferes with redox homeostasis 60,61. We were curious why these different classes of antimalarials were synergistic with proteasome inhibitors, and hypothesized that they may perturb proteostasis. To this end, Cam3.II Kelch13 WT parasites were synchronized to 26-30 hpi trophozoite stages and treated for 6 h with a 5x IC50 concentration of the proteasome inhibitor WLL 56, the synergistic compounds DHA, OZ439, b-AP15, and methylene blue, the antagonistic compound chloroquine, or the vehicle control, DMSO. UPR activation was determined by levels of p-eIF2α, a marker of UPR activation, normalized to total eIF2α levels 62. Proteasome dysfunction was determined by levels of K48-linked ubiquitination 63 normalized to BiP, because in Cam3.II strain parasites BiP does not increase in response to DHA 36. Treatment with the synergistic compounds DHA, OZ439, and b-AP15 all resulted in UPR activation, with OZ439 producing the greatest activation, followed by DHA and b-AP15, which yielded similar levels of UPR activation (Fig. 1a, b, and Supplementary Fig. 1). These three compounds led to an accumulation of K48-linked ubiquitination, and the effect on ubiquitination from each of these compounds was similar (Fig. 1a, c, and Supplementary Fig. 1). Methylene blue, which was synergistic with proteasome inhibitors in ring stages but additive in trophozoite stages 58, did not activate the UPR but led to a 2-fold increase in K48-linked ubiquitination, although this was not statistically significant (p = 0.2014; Fig. 1a-c and Supplementary Fig. 1). In contrast, the antagonistic compound chloroquine 58, which inhibits heme detoxification 64, did not alter levels of p-eIF2α or K48-linked ubiquitination relative to the DMSO-treated control (Fig. 1a-c and Supplementary Fig. 1). As a positive control for proteasome inhibition, parasites were treated with WLL. Indeed, WLL-treated parasites accumulated high levels of K48-linked ubiquitination (Fig. 1a, c, and Supplementary Fig. 1). A more moderate UPR activation was observed with WLL treatment, corroborating that the primary effect of proteasome inhibition leads to the secondary effect of UPR activation 65,66. Together, these data indicate that compounds that synergize with proteasome inhibitors to potently kill malaria parasites disrupt proteostasis, and suggest that the proteasome is important for parasite proteostasis restoration. Kelch13 WT and Kelch13 mutant parasites differentially regulate the UPR The UPR is an exquisitely well-regulated process, and we were interested in understanding the kinetics of UPR activation and resolution in artemisinin-sensitive and artemisinin-resistant parasites.
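Since both readouts above come down to ratios of band intensities normalized within each lane and then expressed relative to the DMSO control, a minimal sketch of that arithmetic is given below. The densitometry values are entirely made up for illustration and are not data from this study.

```python
# Hypothetical Western blot band intensities (arbitrary densitometry units).
# Normalization follows the scheme described above: p-eIF2a over total eIF2a for
# UPR activation, K48-linked ubiquitin over BiP for proteasome dysfunction,
# each expressed as fold change over the DMSO-treated control lane.
blots = {
    "DMSO": {"p_eIF2a": 1.0, "total_eIF2a": 4.0, "K48_Ub": 2.0, "BiP": 2.0},
    "DHA":  {"p_eIF2a": 3.0, "total_eIF2a": 4.2, "K48_Ub": 5.5, "BiP": 2.1},
}

def fold_change(treated, control, numerator, denominator):
    """Normalized signal (numerator/denominator) expressed relative to the control lane."""
    treated_ratio = treated[numerator] / treated[denominator]
    control_ratio = control[numerator] / control[denominator]
    return treated_ratio / control_ratio

upr_activation = fold_change(blots["DHA"], blots["DMSO"], "p_eIF2a", "total_eIF2a")
proteasome_dysfunction = fold_change(blots["DHA"], blots["DMSO"], "K48_Ub", "BiP")

print(f"UPR activation (p-eIF2a/eIF2a fold change vs DMSO): {upr_activation:.2f}")
print(f"K48-Ub accumulation (K48/BiP fold change vs DMSO):  {proteasome_dysfunction:.2f}")
```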
To examine these kinetics, Cam3.II Kelch13 WT (hereon referred to as WT; Table 1) and Cam3.II Kelch13 R539T parasites (hereon referred to as R539T; Table 1) were tightly synchronized to 0-3 hpi rings and treated with the physiologically relevant concentration of 700 nM DHA for 3 h, mimicking conditions of the RSA used to delineate artemisinin resistance in vitro 28 (Fig. 2a, top and middle panels). In response to DHA, levels of p-eIF2α increased 1.5-fold in both parasites. However, only DHA-treated WT parasites had significantly higher levels of p-eIF2α compared to mock-treated controls and relative to the DHA-treated R539T mutant (Fig. 2b, c, Supplementary Fig. 2a). Next, UPR resolution was monitored in these parasites following drug removal. Levels of p-eIF2α declined over time in both parasites following DHA washout (Fig. 2d, e, f, Supplementary Fig. 2b). However, by 6 h post-washout, levels of p-eIF2α in WT parasites remained elevated relative to the mock-treated control (Fig. 2d and e), suggesting that these parasites were unable to resolve the UPR and remained in a state of stress. In contrast, at 6 h post-washout, levels of p-eIF2α in R539T parasites returned to basal levels (Fig. 2d and Fig. 2c-e). Interestingly, C580Y parasites with an additional β2 C31Y mutation, which sensitized parasites to DHA 68, also had elevated levels of p-eIF2α at 6 h post-washout compared to mock-treated counterparts (Supplementary Fig. 2c-e). Together, the data suggest that parasites sensitized to DHA are unable to resolve DHA-mediated UPR activation despite removal of the drug. To determine if these phenotypes would be maintained at the trophozoite stage, UPR activation was monitored in WT and R539T parasites synchronized to 26-30 hpi trophozoites (Fig. 2A, bottom panel). Upon treatment with 50 nM DHA, a 5x IC50 concentration, the UPR was activated in both parasites in a time-dependent manner. Of note, levels of p-eIF2α were significantly higher at 3 h in R539T vs. WT parasites (Fig. 2g, h, and Supplementary Fig. 2f), suggesting a more robust UPR activation in the R539T mutant. By 6 h post-treatment, levels of p-eIF2α were similar between the parasites examined (Fig. 2g, h, and Supplementary Fig. 2f). These data show that the kinetics of UPR activation and resolution are dependent on both Kelch13 genotype and the parasite stage during its intraerythrocytic development cycle. Peroxides DHA and OZ439 inhibit parasite proteasome activity Previously it was shown that DHA inhibits β5 proteasome catalytic activity in artemisinin-sensitive Kelch13 WT parasites and leads to an accumulation of ubiquitinated proteins 8,69. Since artemisinin-resistant and Kelch13 mutant parasites have been shown to express higher levels of proteasome subunits 35,36, we sought to determine whether Kelch13 mutations impacted DHA-mediated proteasome inhibition. Although the β5 catalytic activity is responsible for the majority of protein degradation, we were also interested in the effect of DHA on the other two catalytic subunits of the proteasome as they play a role in protein degradation, in addition to the effect of the related peroxide OZ439 on proteasome catalytic activity. Proteasome activity in DHA-treated WT, R539T, and C580Y parasites was examined using two orthogonal approaches. For both approaches, trophozoite stages were assayed since the UPS is upregulated at the trophozoite stage 46,70 and artemisinin treatment does not produce a detectable increase in ubiquitination at the early ring stage 69.
In the first approach, proteasome subunit catalytic activity was examined in DHA-treated and OZ439-treated trophozoites using the fluorogenic substrates Ac-nLPnLD-AMC, Ac-RLR-AMC, or Suc-LLVY-AMC to assess caspase-like, trypsin-like, and chymotrypsin-like activity, respectively 71. Though these fluorogenic substrates can react with other proteases in the parasites, for simplicity we will refer to caspase-like activity as β1 activity, trypsin-like activity as β2 activity, and chymotrypsin-like activity as β5 activity. WLL, a P. falciparum-selective proteasome inhibitor with activity against β2 and β5 active sites 56, was used as a positive control for inhibiting these two catalytic sites. No known inhibitor of plasmodial β1 exists, though high concentrations of WLL have been shown to moderately inhibit plasmodial β1 activity 56. DHA inhibited β1 (Fig. 3a), β2 (Fig. 3b), and β5 (Fig. 3c) activity in WT, R539T, and C580Y trophozoites in a statistically significant and concentration-dependent manner. β1 and β2 activity were inhibited by approximately 30% and 40% following treatment with 50 nM DHA and 700 nM DHA, respectively (Fig. 3a and b). β5 activity was inhibited to the greatest extent, with approximately 40% and 60% inhibition upon treatment with 50 nM DHA and 700 nM DHA, respectively (Fig. 3c). In addition to comparing treated to untreated counterparts as detailed above, we also tested for differences in DHA-mediated inhibition depending on Kelch13 genotype, but no significant difference in catalytic inhibition was detected between Kelch13 WT and Kelch13 mutant parasites. The DHA-related peroxide OZ439 did not inhibit the β1 activity of proteasomes isolated from any of the tested parasite strains (Fig. 3d). Intriguingly, OZ439 modestly inhibited β2 activity (10-15% inhibition) of proteasomes derived from R539T and C580Y but not WT parasites (Fig. 3e). In addition, OZ439 selectively inhibited β5 activity (approximately 25% inhibition) of proteasomes derived from Kelch13 mutants, which was determined to be statistically significant at the peak plasma concentration of 3 µM OZ439 72,73 (Fig. 3f). Although fluorogenic substrate assays accurately determine proteasome catalytic activity, these assays are unable to measure proteasome-mediated protein degradation. Thus, in a second approach to measure proteasome activity, we examined the accumulation of K48-linked ubiquitination, which is a hallmark of proteasome dysfunction. Synchronized WT, R539T, and C580Y strain parasites at the 26-30 hpi trophozoite stages were treated with 50 nM DHA for up to 6 h, then lysates were examined for protein ubiquitination. In response to DHA, all parasites showed a statistically significant accumulation of K48-linked ubiquitination in a time-dependent manner (Fig. 3g, h, Supplementary Fig. 3a). At each timepoint, levels of ubiquitination were similar across all parasites tested regardless of Kelch13 genotype (Supplementary Fig. 3b), reflecting results obtained from proteasome catalytic activity assays. Collectively, these data show that DHA equally inhibits proteasomes from WT, R539T, and C580Y. In contrast, OZ439 selectively inhibits the β5 catalytic activity of proteasomes derived from R539T and C580Y parasites. Mutations in 19S proteasome subunits increase parasite susceptibility to DHA Previously, we reported that an additional mutation in the 20S β2 proteasome subunit at either C31Y or C31F in the context of a C580Y background increased parasite susceptibility to DHA 55.
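For orientation, the percent-inhibition values quoted in this first approach are the kind of number obtained by comparing initial rates of AMC release in treated versus vehicle-treated reactions. The sketch below is a generic illustration of that calculation; the fluorescence time courses are invented and do not reproduce the study's data or its exact analysis pipeline.

```python
import numpy as np

# Hypothetical fluorescence time courses (relative fluorescence units, RFU) for AMC
# release from a fluorogenic proteasome substrate; values are illustrative only.
time_min = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)
rfu_vehicle = np.array([100, 220, 345, 460, 580, 700, 815], dtype=float)  # DMSO control
rfu_dha = np.array([100, 150, 205, 255, 310, 360, 410], dtype=float)      # e.g. DHA-treated

def initial_rate(t, rfu):
    """Slope of a straight-line fit to the fluorescence time course (RFU per minute)."""
    slope, _intercept = np.polyfit(t, rfu, 1)
    return slope

rate_vehicle = initial_rate(time_min, rfu_vehicle)
rate_dha = initial_rate(time_min, rfu_dha)

percent_inhibition = 100.0 * (1.0 - rate_dha / rate_vehicle)
print(f"Vehicle rate: {rate_vehicle:.1f} RFU/min, treated rate: {rate_dha:.1f} RFU/min")
print(f"Apparent inhibition of catalytic activity: {percent_inhibition:.0f}%")
```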
These β2 mutant parasites were generated via in vitro selection studies with the P. falciparum-specific proteasome inhibitor WLW 58. In the same WLW selection study, three 19S proteasome mutants were also selected for in the Cam3.II background: Cam3.II Kelch13 WT Rpt4 E380*, Cam3.II Kelch13 WT Rpn6 E266K, and Cam3.II Kelch13 C580Y Rpt5 G319S 58 (hereon referred to as Rpt4, Rpn6, and Rpt5, respectively; Table 1). Rpt4 and Rpt5 are ATPase subunits in the 19S RP base, which mediate gate opening to allow substrates into the 20S 49. Rpt4 is also in contact with the 19S lid 74. Rpn6 acts as a scaffolding protein that stabilizes the interaction between the 19S and 20S 75 (Fig. 4a). The 19S RP is important for regulating protein processing prior to proteolytic degradation within the 20S CP chamber 49. Thus, we were interested in determining if mutations in 19S subunits in the context of a C580Y background would compromise parasite resistance to DHA, and whether such mutations, evolved on a WT background, would further hypersensitize parasites to DHA. To this end, dose response assays were performed using early ring (0-3 hpi), trophozoite (26-30 hpi), and asynchronous cultures (Supplementary Fig. 4). (Fig. 4f). These data demonstrate that indeed, mutations in 19S subunits increase parasite susceptibility to DHA and can even hypersensitize parasites to DHA when 19S mutations occur on a Kelch13 WT background. Parasites with increased peroxide susceptibility have impaired proteasome-mediated protein degradation Given the increased peroxide susceptibility of parasites harboring 19S or 20S mutations, we hypothesized that the proteasome is essential for parasite survival in the face of artemisinin and similar compounds, and that the observed sensitivity of proteasome mutants to peroxides is due to a dysfunction in proteasome-mediated protein degradation. To test this hypothesis, we examined the proteasome catalytic activity of DHA- and OZ439-treated 26-30 hpi trophozoites derived from the C580Y parental strain and cognate 20S CP mutants: Cam3.II Kelch13 C580Y β2 C31Y, Cam3.II Kelch13 C580Y β2 C31F, and Cam3.II Kelch13 C580Y β5 A20S (hereon referred to as β2 C31Y, β2 C31F, and β5 A20S, respectively; Table 1). Treatment with DHA significantly inhibited β1 (Fig. 5a), β2 (Fig. 5b), and β5 (Fig. 5c) activity in all parasites tested in a concentration-dependent manner. In addition, relative to its parent, the β2 C31Y mutant displayed greater inhibition of β1 activity upon treatment with 50 nM DHA (C580Y = 20% inhibition; β2 C31Y = 49% inhibition), but no other significant difference was observed between DHA-treated parent and proteasome mutant parasites. OZ439 did not inhibit β1 activity (Fig. 5d), but did inhibit β2 and β5 activities in all parasites tested (Fig. 5e, f). Note that at 3 µM OZ439, the β5 catalytic site of the β2 C31F mutant was significantly more inhibited compared to that of the parental C580Y parasite (C580Y = 28% inhibition; β2 C31F = 52% inhibition; Fig. 5f). No other significant difference in catalytic inhibition was observed between parental and 20S proteasome mutants treated with OZ439. Next, inhibition of proteasome-mediated protein degradation was evaluated in 20S mutant (Fig. 5g, h) and 19S mutant parasites (Fig. 5i-l) by assessing accumulation of K48-linked ubiquitination. DHA-treated C580Y and β2 mutants showed significantly increased K48-linked ubiquitination compared to mock-treated parasites (Fig. 5g, h, and Supplementary Fig. 5a).
In addition, relative to their parent C580Y, both β2 C31Y and β2 C31F mutants had 1.5- to 2-fold higher levels of K48-linked ubiquitination in response to 3 h of DHA treatment (Fig. 5g, h, and Supplementary Fig. 5a). The β5 A20S mutant, which did not display altered sensitivity to DHA or OZ439 55, had minor and statistically insignificant increases in ubiquitination (Fig. 5g, h, and Supplementary Fig. 5a). Derived on a genetic background expressing Kelch13 C580Y, Rpt5 mutants displayed a statistically significant 2-fold increase in K48 ubiquitination compared to the parental strain, both at basal levels without any drug treatment and after 3 h of DHA treatment (Fig. 5i, j, and Supplementary Fig. 5b). For the 19S mutants that were derived on a Kelch13 WT background, Rpt4 and Rpn6 mutants also showed a statistically significant 2-fold increase in ubiquitination compared to WT at basal levels and upon drug treatment (Fig. 5k, l, and Supplementary Fig. 5c). In addition, DHA-treated Rpt4 and Rpn6 mutants accumulated significantly more ubiquitination compared to mock-treated counterparts (Fig. 5k, l, and Supplementary Fig. 5c). Since we observed that the UPR was differentially activated in Kelch13 WT vs. Kelch13 mutant parasites, we were also interested in determining UPR activation kinetics in proteasome mutants. However, no significant difference in UPR activation was observed between parental and proteasome mutant parasites at the early ring stage (Supplementary Fig. 2c-e) or trophozoite stage (Supplementary Fig. S6). We note that in trophozoite stages, the C580Y parental strain as well as the β2 C31Y, β2 C31F, β5 A20S, and Rpt5 proteasome mutants significantly induced UPR activation compared to untreated counterparts, although the level of UPR activation was similar for parent and proteasome mutants (Supplementary Fig. S6). Collectively, these data indicate that a defect in proteasome-mediated protein degradation underlies the heightened sensitivity of proteasome mutants to peroxides, and that this defect is not mediated by increased inhibition of proteasome catalytic subunits. The β2 C31Y proteasome mutant is sensitized to proteasome-related inhibitors Given our observation that β2 mutants exhibited proteasome dysfunction compared to the parental C580Y as well as the β5 A20S mutant, we reasoned that in addition to DHA and OZ439, β2 mutants should also selectively display increased susceptibility to compounds that inhibit proteasome-mediated protein degradation. To test this hypothesis, C580Y, β2 C31Y, and β5 A20S strain parasites were subjected to dose response assays with epoxomicin, a non-parasite-selective inhibitor of the proteasome 76, and b-AP15, an inhibitor of a proteasome-associated deubiquitinase 59. As negative controls, we included chloroquine and methylene blue, both of whose mechanisms of action are unrelated to the proteasome 60,61,64,77. The C580Y parent and the β5 A20S mutant, which have similar sensitivity profiles to DHA and OZ439 55, displayed almost identical dose-response curves in response to epoxomicin (Fig. 6a), b-AP15 (Fig. 6b), chloroquine (Fig. 6c), and methylene blue (Fig. 6d). Accordingly, C580Y and β5 A20S also had similar IC50 values. Dose response curves of slow-growing parasites may be shifted left due to a defect in parasite fitness that is unrelated to parasite susceptibility to a particular compound.
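As background for the IC50 and dose-response comparisons in this section, the sketch below shows one standard way to estimate an IC50 by fitting a four-parameter logistic curve to growth-inhibition data. The concentrations and responses are invented for illustration, scipy is assumed to be available, and this is not necessarily the exact fitting procedure used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data: drug concentration (nM) vs. parasite growth
# relative to untreated controls (1.0 = full growth). Illustrative values only.
conc = np.array([0.5, 1, 2, 4, 8, 16, 32, 64, 128, 256], dtype=float)
growth = np.array([0.99, 0.97, 0.94, 0.85, 0.66, 0.43, 0.22, 0.10, 0.05, 0.03])

def four_param_logistic(c, top, bottom, ic50, hill):
    """Standard sigmoidal dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

# Initial guesses: full growth at low dose, none at high dose, IC50 near mid-range.
p0 = [1.0, 0.0, 10.0, 1.0]
params, _cov = curve_fit(four_param_logistic, conc, growth, p0=p0, maxfev=10000)
top, bottom, ic50, hill = params

print(f"Fitted IC50 of roughly {ic50:.1f} nM (Hill slope {hill:.2f})")
```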
Given that the β2 C31Y mutant and its parent had similar dose response curves for chloroquine and methylene blue, it is unlikely that the increased susceptibility of this parasite to DHA, OZ439, epoxomicin, and b-AP15 is due to a fitness cost. However, to rule out this possibility, parasite fitness competition assays were conducted with C580Y, β2 C31Y, β2 C31F, and β5 A20S parasites. No significant difference was found between C580Y and the 20S proteasome mutants examined (Supplementary Fig. 7). Thus, these data demonstrate that the β2 C31Y mutant is selectively sensitive to compounds that target the proteasome. Discussion As the prevalence of artemisinin resistance continues to rise, it becomes increasingly urgent to delineate a mechanism of resistance to inform future drug discovery and implementation of antimalarial combination therapies. In addition to the widespread artemisinin resistance in the Southeast Asian region, recent reports of Kelch13-mediated artemisinin resistance in Rwanda and Uganda detected in the past three years are of particular concern 14,19,25,26,27,78. We have previously shown that proteasome inhibitors effectively kill artemisinin-resistant parasites and strongly synergize with DHA 57,79. In addition, parasites moderately resistant to proteasome inhibitors are sensitized to DHA 68. Importantly, proteasome inhibitors are effective against Ugandan parasite isolates 80. The proteasome is intimately involved in the UPR and protein degradation, two pillars of proteostasis. In a well-functioning cell, UPR activation will lead to upregulation of proteasome-mediated protein degradation, and inhibition of proteasome-mediated protein degradation will lead to UPR activation 65,66. Exploration of the kinetics of UPR activation and resolution as well as proteasome activity in isogenic parasites differing only at the Kelch13 or proteasome subunit loci yielded some surprising results. Firstly, our data confirmed our hypothesis that antimalarials that synergize with proteasome inhibitors, such as DHA, OZ439, and b-AP15, perturb proteostasis by upregulating the UPR and inhibiting proteasome-mediated protein degradation. In contrast, antimalarials that are antagonistic with proteasome inhibitors, such as chloroquine, had no effect on these measurements of proteostasis perturbations. Interestingly, methylene blue was additive with proteasome inhibitors and produced an intermediate increase in UPR activation and ubiquitination. These data suggest that directly interfering with proteostasis mechanisms is a promising antimalarial therapeutic strategy. Secondly, we found that early parasite responses to DHA dictate eventual survival outcomes. Transcriptomics and proteomics data point to a role for Kelch13 mutants in broadly enhancing the parasite's stress response 35,36. However, the molecular stress response pathways involved and a well-defined mechanism of resistance have not been elucidated. Here we show that artemisinin-sensitive Kelch13 WT parasites hyperactivate the UPR at early ring stages, indicating either that (1) these parasites are experiencing increased levels of stress and/or that (2) their UPR is dysfunctionally regulated. Mislocalization studies suggest Kelch13 mutant parasites have reduced hemoglobin uptake and digestion 33,34, and it is hypothesized that as a consequence these parasites have reduced artemisinin activation. However, the role of Kelch13 in hemoglobin uptake appears to be restricted to the ring stage 33.
Accordingly, it would be expected that the misfolded protein load in Kelch13 mutants would be lower and less prone to trigger the UPR at the ring stage. This hypothesis is consistent with our observations that early ring stage R539T parasites display little UPR activation in response to DHA and are also able to completely resolve the UPR following drug removal. In contrast, DHA-treated WT early rings display robust UPR activation and are unable to completely resolve the UPR, as seen by residual eIF2α phosphorylation 6 h after DHA removal. This is consistent with findings that following a 3 h pulse with 700 nM DHA on ring-stage parasites, R539T parasites begin to resume protein turnover as early as 9 h after drug removal while WT parasites do not, and the differences become more pronounced at 15 h post-drug withdrawal 34. However, at the trophozoite stage, where hemoglobin digestion is increased 81 and Kelch13 is not involved in hemoglobin uptake 33, we found that the UPR was activated earlier in DHA-treated R539T parasites. Activating the UPR more quickly while in the trophozoite stages could be advantageous to Kelch13 mutants, giving them a jumpstart on mitigating protein damage, given that metabolic processes and protein abundance are greatly increased during these stages compared to ring stages 34. However, a direct molecular link between Kelch13 and the UPR remains to be identified and is being addressed in ongoing studies. Previous studies report conflicting data regarding UPR activation in the early ring stages of Kelch13 WT vs. Kelch13 mutants. Consistent with what we observed, Dd2 Kelch13 WT 0-3 hpi rings treated with 700 nM DHA for 15 min displayed more robust UPR activation than Dd2 Kelch13 C580Y 0-3 hpi rings 41. However, the authors observed that Kelch13 mutants displayed elevated basal UPR activation, which is in contrast to our observation 41. This disparity could be attributed to differences in the genetic backgrounds of the parasites examined. Of note, Dd2 was adapted to the laboratory in the 1970s prior to widespread artemisinin usage, while Cam3.II was adapted in the 2010s and originated from an artemisinin-resistant isolate. In a separate study, it was observed that relative to Cam3.II Kelch13 WT 0-8 hpi rings, Cam3.II Kelch13 R539T parasites had elevated UPR activation under basal conditions and in response to a 3 h treatment with 700 nM DHA 34. It is possible that differences between early- and mid-ring stages could explain discrepancies between these data, and the less tightly synchronized rings in that study 34 could be behaving more similarly to the trophozoite stage parasites in our study. Isogenic Kelch13 mutant vs. Kelch13 WT parasites 36 and artemisinin-resistant clinical isolates 35 have been shown to have increased levels of proteasome subunits by transcriptomics and proteomics. However, we observed no noticeable difference in proteasome activity between isogenic Kelch13 WT vs. Kelch13 mutants at basal levels or upon DHA treatment, whether we assessed model substrate cleavage or cellular protein degradation. Since the proteasome is a multi-subunit complex with particular stoichiometry and assembly of subunits, upregulation of some proteasome subunits may be insufficient to modulate proteasome activity. It is also possible that the assays used here are unable to detect slight differences in proteasome activity which may be biologically relevant.
Collectively, these data suggest that Kelch13 does not mediate artemisinin resistance by modulating proteasome activity but rather by modulating UPR activation and resolution. It was recently reported that Kelch13 mutant parasites undergo higher levels of autophagy than Kelch13 WT parasites under basal conditions 82 , which would aid in disposing of damaged proteins, thus complementing any deficiencies in proteasome-mediated protein degradation. Yet, the proteasome may play a critical role in the non-Kelch13-mediated artemisinin response. The third major finding of our study is that parasite susceptibility to DHA, mediated by mutations in the proteasome, correlated with a dysfunction in proteasome-mediated protein degradation. Previous studies showed that upon artemisinin treatment, the artemisinin-sensitive parasites 3D7 and PL2 (artemisinin-sensitive; Kelch13 WT) had a 2-fold increase in ubiquitination while the artemisinin-resistant PL7 strain (artemisinin-resistant; Kelch13 mutant) only accumulated a ~1.2-fold increase in ubiquitination 69 . Note that none of these three strains is isogenic, and there are multiple genetic differences between 3D7, PL2, and PL7, including at known drug resistance modulators such as P. falciparum multidrug resistance protein 1 (PfMDR1), P. falciparum multidrug resistance protein 2 (PfMDR2), and P. falciparum chloroquine resistance transporter (PfCRT) 69 . In our study, we corroborate these earlier data and show that parasites susceptible to DHA and isogenic except for mutations in proteasome subunits display increased ubiquitination. Not all proteasome mutations and the resultant proteasome dysfunction affect DHA susceptibility equally across asexual blood stages. For example, while all 19S and β2 mutants tested displayed a defect in proteasome-mediated protein degradation, 19S mutants only displayed increased susceptibility to DHA in synchronized cultures, whereas the β2 proteasome mutants displayed increased sensitivity at ring, trophozoite, and asynchronous stages 55 . These data could indicate that the 20S plays an outsized role in the parasite artemisinin response. Perhaps in addition to the 20S-19S complex, the 20S-PA28 complex contributes to resolving artemisinin-mediated protein damage. This is supported by previous findings that 3D7 parasites in which PA28 is knocked out display 2-fold lower DHA IC 50 values at the early ring stage 54 . Although Rpt5, Rpt4, Rpn6, and β2 proteasome mutants showed increased ubiquitinated polypeptides in response to DHA compared to parental strains, these differences were not detected when we assayed proteasome catalytic activity as measured by cleavage of fluorogenic peptidyl model substrates. One reason for this discrepancy is that the fluorogenic substrates can freely diffuse into the 20S CP without processing by the 19S RP, whereas detection of K48-linked ubiquitinated proteins assesses the ability of the 26S proteasome as a whole to process and degrade proteins. Interestingly, peptidyl substrate cleavage showed that at peak plasma concentrations, OZ439 significantly inhibits the β5 activity of R539T and C580Y proteasomes but does not inhibit WT proteasomes. This could explain why these artemisinin-resistant parasite strains do not exhibit cross-resistance to OZ439 83,84 . OZ439 also inhibited the β5 catalytic activity of β2 C31F significantly more than that of the parental C580Y strain. These results are in concordance with our previous data showing that β2 C31F showed the greatest decrease in RSA values in response to OZ439 68 . 
It remains unknown to what degree the proteasome mutations tested here affect proteasome activity physiologically. Based on the cryo-EM structure of the P. falciparum 20S proteasome, β2 C31Y and β2 C31F were mapped near the S1 binding pocket of the β2 active site and were predicted to impair WLW binding via steric hindrance 58 . The Rpt4 E380* and Rpn6 E266K mutations fall outside of conserved domains, while G319S is located within the AAA domain of Rpt5 (Supplementary Fig. 8). This could indicate that the Rpt5 mutation is more detrimental to proteasome activity than the Rpt4 and Rpn6 mutations. Consistent with this hypothesis, the Rpt5 mutant displays increased sensitivity to DHA at ring and trophozoite stages in comparison to the Rpt4 and Rpn6 mutants, which were only sensitized at the ring stage. However, without the generation of transgenic parasites, the degree of DHA sensitization conferred by particular proteasome mutations and the influence of Kelch13 cannot be determined. In summary, the data presented here indicate that (1) antimalarial compounds that synergize with proteasome inhibitors perturb parasite proteostasis, (2) early parasite UPR signaling in response to DHA dictates eventual survival outcomes, and (3) parasite susceptibility to DHA correlates with a dysfunction in proteasome-mediated protein degradation. We show here and previously that chemical inhibition of the proteasome and mutations in the proteasome increase parasite susceptibility to DHA regardless of Kelch13 genotype 57,68 , highlighting the crucial role of the proteasome in parasite survival following artemisinin exposure. These data point to the UPR and UPS, two pillars of proteostasis, as pathways that can be targeted to overcome existing artemisinin resistance. Parasites were grown at 37°C in a Heracell™ VIOS 160i CO 2 Incubator (Thermo Fisher Scientific) at 5% O 2 , 5% CO 2 , and 90% N 2 (Matheson Gas, Irving, Texas). Stage synchronization For dose response assays, early ring stages (0-3 hpi) were obtained as previously described 55 . Briefly, cultures were exposed to 5% sorbitol (Acros Organics) at 37°C for 10 min and then cultured for 33 h. Then, cultures were incubated with RPMI 1640 supplemented with 14.3 U/mL sodium heparin (Merck, Kenilworth, NJ) at 37°C for 30 min with intermittent vortexing. Cultures were then layered on a 75% Percoll (GE Healthcare, Chicago, IL) density gradient and centrifuged at 4000 rpm (3100 x g) for 15 min. The schizont layer (the layer immediately above the Percoll) was harvested and washed once with RPMI hematocrit for 3 h. Then, 0-3 hpi rings were obtained following an additional treatment with 5% sorbitol. To obtain a higher protein yield for Western blot experiments, early rings were obtained as described in ref. 86 . Briefly, cultures were treated with 5% sorbitol a total of three times. Cultures were incubated 12 h between the first and second treatments, and then 36 h between the second and third treatments. Trophozoite stage parasites (26-30 hpi) were obtained using two treatments with 5% sorbitol 12 h apart. Following the second treatment, parasites were cultured for an additional 12 h. Drug treatments and lysate preparation Parasites were synchronized as described above and treated with the indicated compound for the indicated time under hypoxic conditions. DMSO concentration did not exceed 0.2%. Parasites were released from RBCs using 0.15% saponin (Acros Organics) then washed three times with 1 x PBS at 4°C. 
For Western blots, parasites were lysed with 1% Triton X-100 (Thermo Fisher Scientific), 5% glycerol. Other primary antibodies were obtained from Cell Signaling Technologies (Danvers, MA). All secondary antibodies were obtained from Invitrogen (Waltham, MA). After washing 4 times with 1x TBS-T, blots were visualized using Immobilon Western Chemiluminescent HRP substrate (Millipore Sigma). Blots were stripped with Restore PLUS Western Blot Stripping Buffer (Thermo Fisher Scientific) between antibodies of the same species. Densitometry was performed with ImageJ version 1.53K. Statistical significance was analyzed with GraphPad version 9 using a two-tailed paired t-test. Readings were taken every 3 min for 2 h or until fluorescence exceeded the detection maximum. To determine activity, relative fluorescence was plotted over time and the slope of the line was determined in Microsoft Excel (an illustrative sketch of this calculation appears after the figure legends below). At least 3 biological replicates were performed for each substrate. Student's t-tests were used to determine differences in relative activity. Competition assays Prior to starting competition assays, 159-2 parasites were grown in media containing 2 µg/mL blasticidin for a minimum of 2 weeks to ensure that > 90% of parasites were EGFP positive. Blasticidin selection pressure was removed prior to and throughout the duration of the competition assays. Parasites of interest were adjusted to 1% parasitemia and mixed 3:1 with the 159-2 parasite strain. A 1:1 ratio was not used since an initial experiment revealed that 159-2 parasites outcompete Cam3.II parasites within one week. Parasites were cultured in drug-free media at 5% hematocrit and maintained between 0.2 and 7% parasitemia. As controls, wells containing Cam3.II Kelch13 C580Y alone or 159-2 alone were grown concurrently to control for background EGFP fluorescence or loss of EGFP expression in the absence of blasticidin, respectively. No loss in EGFP expression was noted in 159-2 parasites grown in the absence of blasticidin through the duration of the competition assays. (Figure legends, continued) Parasites were treated with DMSO or 700 nM DHA for 3 h, then lysates were subjected to Western blot and immunoblotted with antibodies against p-eIF2α and eIF2α. Shown is a representative blot from three independent experiments (see Supplementary Fig. 2a for replicates). (c) Densitometry analysis was performed using ImageJ and UPR activation was determined as described in Fig. 1. (d) WT and R539T parasites were synchronized to 0-3 hpi rings and treated with DMSO or 700 nM DHA for 3 h. Then drug was washed off and parasites were harvested at the indicated times to monitor UPR resolution. Western blot was performed as described above. Lysates were incubated with fluorogenic peptidyl substrates, including Suc-LLVY-AMC, to assess β1, β2, and β5 activity. Fluorescence was plotted over time and % activity was quantified by calculating the slope of the line and normalizing to the slope of DMSO-treated parasites. Bar graphs indicate mean % activity ± S.E.M. A two-tailed Student's t-test was performed between DMSO and drug-treatment counterparts, and statistical significance is indicated above the bars as vertical asterisks. Comparisons were also performed between Kelch13 WT and Kelch13 mutants for each treatment condition, but no significant difference was found (only significant comparisons between WT and mutant parasites are denoted here). (d-f) Parasites were synchronized as described above but treated with DMSO, 300 nM OZ439, 3 µM OZ439, or 2.5 µM WLL for 3 h. Then, protein was harvested and proteasome activity was assessed as described above. 
(g) WT, R539T, and C580Y parasites were treated with DMSO or 50 nM DHA for the indicated times. Lysates were subjected to Western blot and immunoblotted with antibodies against K48-linked ubiquitin and BiP. Shown is a representative blot of four independent experiments (see Supplementary Fig. 3 for replicates). (h) Densitometry analysis was performed with ImageJ and levels of K48-linked ubiquitination were normalized to the loading control BiP. Supplementary Files: This is a list of supplementary files associated with this preprint.
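The slope-based activity quantification described in the Methods above can be illustrated with a small Python sketch. This is an assumption-laden illustration, not the authors' pipeline (which used Microsoft Excel and GraphPad): the fluorescence readings, well counts, and function names below are hypothetical.

```python
import numpy as np
from scipy import stats

def activity_from_timecourse(minutes, rfu):
    """Return the slope (relative fluorescence units per min) of a time course via least squares."""
    slope, _intercept = np.polyfit(minutes, rfu, 1)
    return slope

# Hypothetical readings taken every 3 min for 2 h (as in the Methods), three biological replicates.
t = np.arange(0, 121, 3)
rng = np.random.default_rng(0)
dmso_wells = [100 + 8.0 * t + rng.normal(0, 20, t.size) for _ in range(3)]
drug_wells = [100 + 4.5 * t + rng.normal(0, 20, t.size) for _ in range(3)]

dmso_slopes = np.array([activity_from_timecourse(t, w) for w in dmso_wells])
drug_slopes = np.array([activity_from_timecourse(t, w) for w in drug_wells])

# Percent activity: each drug-treated slope normalized to the mean DMSO slope.
pct_activity = 100.0 * drug_slopes / dmso_slopes.mean()
t_stat, p_value = stats.ttest_ind(dmso_slopes, drug_slopes)  # two-tailed Student's t-test

print(f"mean % activity = {pct_activity.mean():.1f} ± {stats.sem(pct_activity):.1f} (S.E.M.), p = {p_value:.3g}")
```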
8,191.4
2023-05-15T00:00:00.000
[ "Medicine", "Biology" ]
On Uniqueness of New Orthogonality via 2-HH Norm in Normed Linear Space This paper generalizes the special case of the Carlsson orthogonality in terms of the 2-HH norm in real normed linear space. Dragomir and Kikianty (2010) proved in their paper that the Pythagorean orthogonality is unique in any normed linear space, and that isosceles orthogonality is unique if and only if the space is strictly convex. This paper deals with the complete proof of the uniqueness of the new orthogonality through the medium of the 2-HH norm. We also prove that the Birkhoff and Robert orthogonality via the 2-HH norm are equivalent whenever the underlying space is a real inner-product space. Introduction Different notions of orthogonality in normed linear spaces have been developed by various mathematicians. As a generalization of orthogonality from inner product space to normed linear space, "x is orthogonal to y if and only if ∥x + λy∥ = ∥x − λy∥ identically in λ" was suggested by Robert [1,2]. However, it has the weakness that in some normed linear spaces, at least one of every pair of orthogonal elements would have to be zero, i.e., ∥x + λy∥ = ∥x − λy∥ for all λ only if x = 0 or y = 0. This difficulty is not experienced in the isosceles, Pythagorean, and Birkhoff orthogonalities. To study the difference of orthogonality in the complex case in comparison with the real case, Paul et al. in 2018 came up with a new concept of Birkhoff-James orthogonality by introducing new definitions on complex reflexive Banach spaces, and introduced more than one equivalent characterization of Birkhoff-James orthogonality of compact linear operators in the complex case [3]. In 1945, James came up with the concept of the Pythagorean and isosceles orthogonalities, which characterize inner product space via their homogeneity and additivity [4]. James also discussed the existence property of the isosceles orthogonality type. The property of the uniqueness of isosceles orthogonality was not discussed until Kapoor and Prasad's paper was published. They proved that the Pythagorean orthogonality is unique in any normed linear space; however, the isosceles orthogonality is unique if and only if the space is strictly convex [5]. Carlsson introduced a more general type of orthogonality treating the isosceles and Pythagorean orthogonalities as special cases [6]. Martini and Wu showed many interesting connections between the Birkhoff and isosceles orthogonality. They proved that if a linear map preserves the Birkhoff orthogonality, then it also preserves the isosceles orthogonality [7]. In 2007, Alsina and Tomas gave a different characterization of the inner product space with the help of weaker linearity axioms of the scalar product and Pythagoras/isosceles orthogonality [8]. Using the concept of the p-HH norm as described in the paper [9], Kikianty and Dragomir came up with a new notion of orthogonality with the help of the 2-HH norm, which is closely related to the Pythagorean and isosceles orthogonalities [10]. They proved that the Pythagorean orthogonality via the 2-HH norm satisfies the nondegeneracy, continuity, and symmetry properties; however, it is neither additive nor homogeneous in normed linear space, but it satisfies the property of existence and uniqueness in any normed linear space. Isosceles orthogonality via the 2-HH norm also satisfies the nondegeneracy, continuity, and symmetry properties but is neither additive nor homogeneous in general. 
If the normed linear space X is strictly convex, then the isosceles orthogonality via the 2-HH norm satisfies the property of uniqueness, but the existence property holds in any normed linear space [10]. According to Carlsson's result described in [6], the isosceles and Pythagorean orthogonalities are special cases of the generalized Carlsson orthogonality. We introduced a new special case of the Carlsson orthogonality which satisfies all requirements stated in Carlsson's orthogonality as well as the nondegeneracy, simplification, and continuity properties of the inner product space. Furthermore, we proved that such orthogonality is homogeneous if and only if the underlying space is an inner product space [11]. Motivated by the results of Kikianty and Dragomir, and our previous result, we have attempted to introduce a new notion of orthogonality through the medium of the 2-HH norm, which we denote by the 2-HH-N orthogonality. We have proved that the 2-HH-N orthogonality is unique in any normed linear space. If the norm on X is induced by an inner product, then the Robert and Birkhoff orthogonality via the 2-HH norm are equivalent. Definition, Notation and Preliminary Results Let us first establish the notations and terminologies used in this paper. Let X be the normed linear space, which we consider to be real. For any x, y ∈ X, 2-HH-N denotes that x is orthogonal to y via the 2-HH norm, which we defined with the help of the new special case of the Carlsson orthogonality discussed in [11]. The Pythagorean orthogonality plays an important role in describing the new orthogonality through the medium of the 2-HH norm. Given any two elements x, y ∈ X, we say that x is Pythagorean orthogonal to y, written as x⊥ P y, if and only if ∥x + y∥ 2 = ∥x∥ 2 + ∥y∥ 2 [4]. Kikianty and Dragomir introduced the Pythagorean orthogonality via the 2-HH norm and, using a similar idea to that of Kapoor and Prasad, proved that "the Pythagorean orthogonality via the 2-HH norm is unique in any normed space" [9]. Besides that, they also define the Carlsson orthogonality via the 2-HH norm in the paper [12]. For any (x, y) ∈ X 2 , Kikianty and Dragomir defined the p-norm on X 2 as follows [10] (the standard definitions are recalled in the sketch below): From (1), it is obvious that ∥(x, y)∥ p = ∥(y, x)∥ p , and therefore the p-norm is symmetric. Using the concepts of Hermite-Hadamard's inequality, we have With the help of (2), they defined the p-HH norm on X 2 in the following way [10]: For all x, y ∈ X, it is obvious that ∥(x, y)∥ p−HH = ∥(y, x)∥ p−HH . Therefore, the p-HH norm is symmetric. They proved that (X 2 , ∥(·, ·)∥) is a normed linear space because the nondegeneracy and homogeneity of the norm can be derived from (3) and the triangle inequality follows from Minkowski's inequality. If the norm on X is induced by an inner product (·, ·), then, as a special case of the p-HH norm, it is denoted by the 2-HH norm. It is defined in the paper [9] as follows: For any p ≥ 1, the p-norm and p-HH norm are equivalent on X 2 . If the inequality (5) is strict whenever x ≠ y and 0 < λ < 1, the space is called strictly convex. To study the properties of orthogonality in normed linear space, it is interesting to investigate the following properties of orthogonality in ordinary Euclidean space as applied to normed linear space. For any Euclidean space X, let x, y, z ∈ X. Then, the following are considered the main properties of orthogonality [9]. (iii) Continuity: if {x n }, {y n } ⊂ X such that x n ⊥y n for every n ∈ ℕ, x n → x and y n → y, then x⊥y. In this paper, we mainly focus on the last property. 
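The displayed equations (1)-(4) referred to above did not survive extraction. The following LaTeX block recalls what we understand to be the standard definitions from Kikianty and Dragomir's work on the p- and p-HH norms; it is a reconstruction under that assumption, not a verbatim reproduction of this paper's equations.

```latex
% Assumed form of Eq. (1): the p-norm on X^2.
\[
  \|(x,y)\|_p := \bigl(\|x\|^p + \|y\|^p\bigr)^{1/p}, \qquad p \ge 1 .
\]
% Assumed form of Eq. (2): Hermite--Hadamard-type inequality motivating the p-HH norm.
\[
  \Bigl\|\frac{x+y}{2}\Bigr\|^{p} \;\le\; \int_0^1 \|(1-t)x + t y\|^{p}\, dt \;\le\; \frac{\|x\|^{p} + \|y\|^{p}}{2}.
\]
% Assumed form of Eq. (3): the p-HH norm on X^2.
\[
  \|(x,y)\|_{p\text{-}HH} := \Bigl( \int_0^1 \|(1-t)x + t y\|^{p}\, dt \Bigr)^{1/p}.
\]
% Assumed form of Eq. (4): when the norm is induced by an inner product, the 2-HH norm reduces to
\[
  \|(x,y)\|_{2\text{-}HH}^{2} = \tfrac{1}{3}\bigl(\|x\|^{2} + \langle x, y\rangle + \|y\|^{2}\bigr).
\]
```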
Kapoor and Prasad proved that the Pythagorean orthogonality is unique in any normed linear space, but the isosceles orthogonality is unique if and only if the normed linear space is strictly convex [5]. Regarding the Robert orthogonality, the property of existence is satisfied only in an inner-product space [1]. Definition 2 [4]. A vector x is said to be isosceles orthogonal to y if and only if Definition 3 [13]. A vector x is said to be Birkhoff-James orthogonal to y if and only if Main Result Definition 4 [11]. A vector x is orthogonal to y if If the underlying space X is a real inner product space and the relation (8) holds a.e. on [0, 1]. Then, using the concept of the 2-HH norm, we have Now, the left-hand side of relation (9) Again, the right-hand side of relation (9) Now, we consider a notion of orthogonality as follows: let (X, ∥·∥) be a normed space. A vector x ∈ X is said to be 2-HH-N orthogonal to y ∈ X (denoted by x⊥ 2−HH−N y) if and only if Kikianty and Dragomir in [9] proved that "the Pythagorean orthogonality via the 2-HH norm is unique in any normed space X". To prove this, they use the following lemma, omitting the proof. We give a detailed proof of the lemma as they stated it in the paper [9]. which shows that f is a convex function. Similarly, for the function we can show that and we conclude that g is also a convex function. Also, we know that the sum of two convex functions is convex. Proof. The proof follows a similar idea to that of Kapoor and Prasad (pp. 406) and Kikianty and Dragomir (pp. 41). Suppose the 2-HH-N orthogonality is not unique. Then, we must have elements x ≠ 0 and y ∈ X, and a λ > 0 such that x⊥ 2−HH−N y and x⊥ 2−HH−N λx + y. Define a convex function Now, and h(β + 1) =
2,139.2
2020-11-20T00:00:00.000
[ "Mathematics" ]
Exploration of the structural, optoelectronic and vibrational behavior of Sb2S3 through the first principles approach for phenomenal applications in solar cells The first-principles approach has long been considered an appropriate forum for studying any material and is deemed the best remedy to ensure the validity of results obtained either theoretically or experimentally. We are, therefore, motivated to use this approach to explore the structural, optoelectronic and vibrational properties of Sb2S3, utilizing the plane-wave pseudopotential technique and the conjugate gradient method employed through the CASTEP simulation code. The crystal structure of Sb2S3 is optimized in the orthorhombic phase with space group Pnma and lattice parameters a = 11.31 Å, b = 3.84 Å and c = 11.23 Å. The magnitudes of these lattice parameters approximately replicate the formerly reported theoretical as well as experimental results. The energy band gap is found to be 1.012 eV, clear evidence that the studied compound belongs to the semiconductor category of materials. The optical parameters reveal that Sb2S3 is capable of absorbing a wide range of radiation from the ultraviolet (UV) portion of the spectrum. The dynamical analysis through density functional perturbation theory (DFPT) shows that there is no soft mode, which further ensures the dynamical stability of Sb2S3. The optical analysis of the studied compound is sufficient to declare it a potential material for applications in solar cells. Introduction As the world population has increased tremendously, there is an acute shortage of energy resources. It is further suspected that resources of oil, gasoline and coal are being gradually exhausted. To overcome such limitations, researchers are motivated to develop devices that utilize sunlight to produce useful electrical energy, eventually establishing solar cell industries (Khalil et al. 2021;Wang et al. 2019). Sb 2 S 3 is an important V-VI main group semiconductor compound that has recently gained significant attention in the fabrication of numerous photonic and optoelectronic devices and in the nanotechnology industry (Nwofe 2015). It has been extensively reported in the literature that the antimony trisulphide (Sb 2 S 3 ) compound can be widely used in thermoelectric cooling devices and also effectively utilized in solar cells as window layers/absorbers due to its outstanding optoelectronic properties (Nezu et al. 2010;Messina et al. 2009;Maghraoui-Meherzi et al. 2010) in the IR region. Among various metal sulfides, Sb 2 S 3 has numerous applications in electronic devices as a target material in television camera tubes, in optoelectronic applications (Godel et al. 2015) and in microwave devices as well as in various switching devices (Nwofe 2015;Koc et al. 2012;Killedar et al. 1997). For clinical applications, Sb 2 S 3 has been used for radiolabeling (Billinghurst 2001). Solar cells constructed from Sb 2 S 3 thin films have also been reported in the literature (Meng et al. 2013). Savadogo and Mandal (1992) have reported the photoconductivity of Sb 2 S 3 thin films. Sb 2 S 3 thin films are more abundant and environmentally friendly compared to the Cd-based chalcogenides commonly used in the manufacturing of photonic and optoelectronic devices such as solar cells. 
Antimony trisulfide thin films have good electrical properties, an optimum band gap and good optical transmittance suitable for solar cell applications (Aousgi et al. 2015). Although metal chalcogenide materials are highly efficient, they are very expensive and highly toxic, which limits their applications (Candelise et al. 2012;Razykov et al. 2011). Therefore, more abundant, low-cost, and less toxic materials are required (Todorov et al. 2011). A recent study revealed that the Sb 2 S 3 compound has efficient charge extraction, which makes it a suitable candidate for photovoltaic applications (Nasr et al. 2011), due to its reasonable band gap and high absorption coefficient in the visible region (Chang et al. 2010). Therefore, detailed knowledge of the optical and electronic properties is important for understanding its optoelectronic behavior, such as light absorption and the band gap. Owing to these problems, two compounds, Sb 2 Se 3 and Sb 2 S 3 , have nowadays attracted the concentrated attention of researchers (Wang et al. 2017;Choi et al. 2014). Among them, antimony triselenide, having a low energy gap (1.2 eV), is a very efficient material for light absorption, while the Sb 2 S 3 compound offers an energy gap ranging from 1.5 to 1.7 eV and absorbs electromagnetic radiation with a coefficient up to 1.8 × 10 5 cm −1 , making it a more desirable compound for solar cell applications (Lan et al. 2018;Chang et al. 2012). This potential material is capable of absorbing photons even in cloudy weather as well as under weak illumination conditions (Lojpur et al. 2018). In addition, by using Sb 2 S 3 in solar applications one can overcome many hurdles (high cost, toxicity, rare materials, etc.) that face manufacturers of solar cells when implementing theoretical as well as experimental outcomes. It is a less toxic, plentiful and cheap absorber of sunlight, and its band gap is very suitable for solar cell applications. Besides, Cd- and Te-based solar cells cannot serve widely owing to the toxicity and rarity of these materials (Asim et al. 2012). Recently, researchers have recommended compounds like As 2 S 3 , Sb 2 S 3 and Bi 2 S 3 rather than silicon for solar applications because these materials have shown higher absorption with a high value of the refractive index (Green 2007;Yesugade et al. 1995). Before this work, few studies had been done on the structural and electronic properties of this compound in view of its applications in solar cells (Kondrotas et al. 2018;Validzic et al. 2014;Tang et al. 2018). The present study shows excellent correspondence with previously available theoretical and experimental work. However, no comprehensive theoretical work had been done on the vibrational properties of the Sb 2 S 3 compound before this work. Moreover, no theoretical work had been done on the optical properties of this compound using the PBE-GGA functional within the CASTEP simulation code. Research methodology The simulation of the Sb 2 S 3 compound is conducted here using the DFT formalism and the GGA approach as parameterized by Perdew, Burke and Ernzerhof (Perdew et al. 1996) for the xc-energy through the CASTEP code (Clark et al. 2005;Hohenberg and Kohn 1964), using the conjugate gradient method. The atomic pseudopotentials are produced individually for the S and Sb atoms using the 3s 2 3p 4 and 5s 2 5p 3 configurations, respectively. 
The norm-conserving pseudopotential along with the conjugate gradient method has been applied for the Sb and S atoms (Troullier and Martins 1991). The optimization is carried out with an energy cut-off value of 400 eV, and k-point sampling (Monkhorst and Pack 1976) of 2 × 6 × 2 is used for the Brillouin zone. Here the optical analysis is performed by solving the Kramers-Kronig (Kronig 1926) relations. Using the DFPT approach (Baroni et al. 2001), the vibrational behavior has been discussed here to determine the various modes of dynamics. Structural properties The crystal structure of the Sb 2 S 3 compound is shown in Fig. 1, where green represents the sulfur atoms and red designates the antimony atoms. This Sb 2 S 3 structure consists of 8 Sb atoms and 12 S atoms. It is stable in the orthorhombic phase with space group Pnma (No. 62), point group D 2h and lattice parameters a = 11.31 Å, b = 3.84 Å, c = 11.23 Å. The experimentally reported lattice parameters by Micke and co-workers (Micke and Mcmullan 1975) are a = 11.30 Å, b = 3.83 Å and c = 11.22 Å. It has been noticed that our lattice parameters are approximately equal to the formerly reported experimental results, with only 0.4% deviation. Our results regarding lattice parameters are compared with formerly reported theoretical as well as experimental values as summarized in Table 1. Electronic properties To determine the electronic behavior of the Sb 2 S 3 compound, the electronic band structure along with the density of states within the Brillouin zone (BZ) across the high-symmetry directions is shown in Fig. 2a, b. From the band structure plot, the energy gap is found to be 1.012 eV, leading to the conclusion that the studied compound falls in the semiconducting category. The density of states, shown in Fig. 2b, is approximately analogous to that reported previously (Koc et al. 2012). Moreover, in the valence band many peaks are seen, while in the conduction band only one peak is dominant. The maximum peak occurred in the valence band at an energy of −12.38 eV, whereas in the conduction band the maximum states are noticed at 2.44 eV. It is seen from Table 2 that the energy band gap presented in our work is approximately 14% less than the earlier theoretical value reported by Koc et al. (2012). In this study, the WIEN2K code (Blaha et al. 2001) is also utilized to calculate the total and partial electronic density of states for the considered compound, which are displayed in Fig. 3 (Fig. 3 caption: the calculated (a) total density of states and (b, c) partial density of states for the Sb 2 S 3 compound). The total density of states shown in Fig. 3a clearly endorses the value of the energy gap extracted from the electronic band structure (Fig. 2a), which can also be envisioned from the partial density of states (Fig. 3b, c). These figures also illustrate that S-3p states in the valence band exist near the Fermi level, while in the higher energy range (conduction band) the contribution of Sb-5p states lies away from the Fermi level. These states have contributed dominantly in the conduction region; however, a significant share is seen in the valence band as well. The origin of these anisotropic effects is associated with the formation of a stereochemically active lone pair on the Sb 5s cation orbitals, which distorts the Sb coordination environment. The asymmetric electronic density of states that appears at the top of the valence band corresponds to a bonding interaction (Chen et al. 2008) between Sb 5s and S 3p states, with the stereochemically active lone pair deriving from antibonding states (Wang et al. 
2022;Ganose et al. 2016;Walsh et al. 2011). Optical properties As depicted in Fig. 4a, we first discuss the reflection behavior of Sb 2 S 3 in the energy range 0-20 eV, where a single peak is noticed in the plot. The reflectivity ascends gradually with increasing energy, rises briskly on further escalation of the energy up to 9.64 eV, and its maximum value is found to be 0.91 at 9.72 eV. Afterwards it decreases with increasing frequency. Beyond 19.62 eV, the reflection remains constant. High reflectance in the ultraviolet region (~9.16 eV) indicates that it is a promising reflecting compound. As regards the absorption coefficient, it can be expressed as α = ln(1/T) = ln(I 0 /I) (1). (Fig. 4 caption: (a) reflectivity, (b) absorption coefficient, (c) real and imaginary parts of the dielectric function, (d) refractive index and extinction coefficient, (e) conductivity and (f) optical loss function for the Sb 2 S 3 compound.) The absorption spectrum of antimony trisulfide is shown in Fig. 4b, which exhibits the semiconducting nature of this compound. The absorption coefficient specifies how far light can pass through the material before absorption (Rahman et al. 2016). The absorption rises from zero energy and, after various transitions, reaches a maximum value in an energy region where the reflection is minimum (Fig. 4a). Sb 2 S 3 has high absorption coefficients, so it could absorb photons more effectively and generate electron-hole pairs for photovoltaic applications (Zeng et al. 2016). The dielectric constant as a function of energy is presented in Fig. 4c. The static value of the dielectric function ε 1 (0) is noted to be 12.98, which is very close to the experimentally reported value of 12.0 (Ghosh and Varma 1979). This value of the dielectric function demonstrates that the studied compound is a good dielectric material. The dielectric constant of Sb 2 S 3 is anisotropic and its static value is relatively large, which is common in lone-pair containing crystals. Large dielectric constants indicate the potential for strong screening of charged defects and low recombination losses (Kavanagh et al. 2021). ε 1 (ω) rises from its critical value to a maximum polarization at a photon energy of 1.73 eV. It becomes negative at 4.49 eV due to reflection of the light striking the surface of the material, where the material behaves like a metal. The real dielectric function becomes zero at an energy of 13 eV, indicating that the antimony trisulfide compound is transparent above 13 eV. The extinction coefficient (k) and absorption coefficient (α) can be interlinked with each other (Soliman 1998;Wooten 1972), and the refractive index (n) is interconnected with the extinction coefficient (k) (Ziang et al. 2015); these relations are illustrated in the sketch below. The refractive index (real part) n(ω) and the extinction coefficient (imaginary part) k(ω) of the Sb 2 S 3 compound are depicted in Fig. 4d. The value of the static refractive index reported for the Sb 2 S 3 compound is 3.60, which is higher compared to former studies such as 3.29 (Radzwan et al. 2017) and 2.089 (Wypych 2016). We note two peaks in this spectrum, one for the real refractive index and the other for the extinction coefficient. The value of the real refractive index starts to increase from the critical value and its first sharp peak appears at an energy of 2.023 eV, showing its excitonic nature. This means that the Sb 2 S 3 compound shows maximum transparency at 2.023 eV. 
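To make the relations between the optical constants quoted above concrete, the following Python sketch (an illustration under the standard textbook relations, not the CASTEP post-processing actually used) derives n(ω), k(ω) and the absorption coefficient from the real and imaginary parts of the dielectric function, and checks the static limit n(0) ≈ √ε1(0) against the reported ε1(0) = 12.98 and n(0) = 3.60.

```python
import numpy as np

HBAR_C_EV_NM = 197.327  # ħc in eV·nm, used to convert photon energy to vacuum wavelength

def optical_constants(energy_ev, eps1, eps2):
    """Derive n, k and the absorption coefficient α from ε1(ω) and ε2(ω).

    Standard relations: n = sqrt((|ε| + ε1)/2), k = sqrt((|ε| - ε1)/2), α = 4πk/λ,
    with λ the vacuum wavelength of the photon.
    """
    eps_mod = np.sqrt(eps1**2 + eps2**2)
    n = np.sqrt((eps_mod + eps1) / 2.0)
    k = np.sqrt((eps_mod - eps1) / 2.0)
    wavelength_nm = 2.0 * np.pi * HBAR_C_EV_NM / energy_ev   # λ = hc/E
    alpha_per_cm = 4.0 * np.pi * k / (wavelength_nm * 1e-7)  # nm -> cm
    return n, k, alpha_per_cm

# Static-limit check using the values reported in the text (single hypothetical point).
n0, k0, _ = optical_constants(np.array([1e-3]), np.array([12.98]), np.array([0.0]))
print(f"n(0) ≈ {n0[0]:.2f}")   # ≈ 3.60, consistent with the reported static refractive index
```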
Thereafter the transparency decreases up to 9.82 eV, whereas the highest peak of the imaginary part is noticed at an energy of 3.62 eV. From Fig. 4d, it can also be seen that the refractive index is high in the visible region. The reported spectra of n(ω) and k(ω) have trends similar to ɛ 1 (ω) and ɛ 2 (ω), since they can be connected through the relations n c = n + ik (3) and n 2 − k 2 = ε 1 (ω) (4). Moreover, the static values n(0) and ε 1 (0) are linked with each other through the relation n(0) = [ε 1 (0)] 1/2 . Figure 4e depicts a typical conductivity versus frequency plot for the Sb 2 S 3 compound. The conductivity rises because of the absorbed photons. The real conductivity increases and attains a maximum value at 3.26 eV. The conductivity spectrum indicates that the imaginary conductivity is maximum at 6.39 eV. In accordance with Fig. 4e, the conductivity is maximum in the ultraviolet region. As the absorption is small in the energy range from 13 to 18 eV, the conductivity is also small in this region. The relation between the absorption coefficient (α) and the optical conductivity (σ) is α = 4πσ/(nc) (7) (Soliman 1998;Wooten 1972). The energy loss spectrum shown in Fig. 4f is the key parameter expressing the loss of energy of fast-moving electrons as they pass through the material, and the maximum energy is lost at the plasma frequency (Hossain et al. 2012). The sharp maximum at ~11.71 eV indicated in the plot can be related to the presence of plasma oscillations. The optical loss function indicates the loss of energy either by heating, dispersion, or scattering. It is revealed that absorption and transmission are minimal at 11.71 eV, where the loss of energy is maximum. It can be seen that the plasma response condition is fulfilled at the energy level where the real dielectric function crosses zero. The loss of energy is minimal at low frequency but is at a maximum when the frequency is increased to 12 eV. It is thus noted that the studied material shows minimum energy losses in the visible region, where absorption is maximum, so it might be a promising material for optoelectronic applications. Vibrational properties This theoretical study gives accurate predictions for some quantities which are not easy to obtain experimentally, like vibrational eigenvectors and silent mode frequencies. Moreover, phonon dispersion analysis is assumed to be useful for examining thermal properties and structural stability. Usually it is explored using various experimental techniques, like Raman spectroscopy, which probes the interaction of phonons with waves or particles (Dove 1993), and neutron scattering. However, in the present study, IR and Raman spectroscopies are used for exploring the phonon dispersion. In this regard, the DFPT (Baroni et al. 1987;Giannozzi et al. 1991) approach is considered an accurate and very effective theory for calculating the vibrational properties of materials and compounds. It is reported that DFPT is mainly used for semiconductors in which the phonon spectrum requires only a few wave vectors (k) (Giannozzi et al. 1991). We have investigated the vibrational properties of Sb 2 S 3 via the DFPT approach using the PBE-GGA functional in the CASTEP program (Segall et al. 2002). The phonon dispersion curve and phonon DOS are shown in Fig. 5a, b. There is no imaginary frequency in the phonon dispersion curve, which reveals that the structure is dynamically stable. The first panel in Fig. 5a indicates the different dispersion branches along the symmetry directions in the first BZ. 
For all directions, there are optical and acoustic branches, which can be split into transverse and longitudinal modes. Both of these are further divided into transverse optical (TO) and transverse acoustic (TA) as well as longitudinal optical (LO) and longitudinal acoustic (LA) modes, respectively. Since the unit cell of Sb 2 S 3 has twenty atoms, it has sixty modes of vibration. The dispersion curve (Fig. 5a) depicts three acoustic modes, of which two are transverse acoustic (TA and ZA) and the third is the longitudinal acoustic (LA) vibrational mode. In the acoustic modes, atoms vibrate in phase within the unit cell. Therefore, at the G point the acoustic phonon frequency reaches zero regardless of polarization. For each wave vector (k) there are (3S-3) optical branches; thus our crystal structure has fifty-seven optical branches. In optical phonons, the vibration of atoms is out of phase in each unit cell. Here the frequency is not zero at the G point but has a finite value. In contrast to a linear dispersion near the G point, the transverse acoustic and longitudinal acoustic vibration modes (in-plane) and the ZA mode (out of plane) indicate a q 2 dispersion in the crystal. A few modes of vibration determined by infrared and Raman spectroscopy for Sb 2 S 3 are displayed in Table 3. It is interesting to mention that, heretofore, none of the researchers has studied the infrared and Raman spectroscopy of this compound. As reported in former studies, a change of polarizability during the phonon vibration means that the vibrational mode is Raman active (Atkins and Paula 2009). It is possible that asymmetric stretching modes and bending modes are not Raman active, while the symmetric stretching vibration mode may be Raman active. Hence, it is reported that the vibrational modes of atoms are either Raman active or IR active, but not both. The literature reveals that a strong phonon anharmonicity effect is induced by the coupling of the lone pair electrons with the lattice vibrations. A theoretical study of the vibrational spectrum of the crystal within the harmonic approach revealed the B 1u mode (354.20 cm −1 ) to be active in the IR region. These results show the high anharmonicity of some atoms in the semiconducting Sb 2 S 3 material, and the polar distortion of the B 1u mode with temperature leads to the appearance of weak ferroelectricity (Zigas et al. 2017). In Fig. 6, the arrow lengths depict the eigenvector amplitudes. At low frequencies the contribution of the heavy antimony atoms is greater than that of the sulfur atoms. Further Jmol analysis unveils that the sulfur atoms have in-plane stretching vibrations at frequencies of 324.59 cm −1 and 354.20 cm −1 , whereas out-of-plane stretching is noticed at (Fig. 6a-d). The highest optical phonon mode is found at a frequency of 354.20 cm −1 . It has also been noticed that the bond angle between antimony and sulfur atoms changes when the atoms vibrate. Antimony and sulfur atoms show bending motion at frequencies of 128.58 cm −1 and 249.47 cm −1 (Fig. 6e, f), whereas Sb 2 S 3 exhibits twisting modes of vibration at frequencies of 196.45 cm −1 and 213.87 cm −1 (Fig. 6g, h). Conclusions The present DFT-based study of the structural, optoelectronic and vibrational properties of Sb 2 S 3 has been performed using the PBE-GGA functional via the CASTEP code. 
The calculated lattice parameters, a = 11.31 Å, b = 3.84 Å and c = 11.23 Å, are found to be approximately equal to the formerly reported theoretical as well as experimental results, with merely 0.4% deviation. The compound is declared a semiconducting material having a direct energy gap of 1.012 eV. The studied material has shown minimum energy losses in the visible region, where absorption is maximum, so it might be a promising material for optoelectronic applications. Owing to the high magnitude (12.98) of the static dielectric function, it can be utilized as a dielectric material. It is noted to be dynamically stable since there are no negative frequencies. In this compound, symmetrical and asymmetrical stretching modes as well as twisting and bending vibrational modes are observed at different frequencies. Based on the results summarized in the manuscript, Sb 2 S 3 can be declared a potential material for optoelectronic applications.
4,992.4
2022-09-29T00:00:00.000
[ "Materials Science" ]
Intra-ocular lens optical changes resulting from the loading of dexamethasone To study the optical changes in hydrogel-silicone intraocular lenses (IOLs) resulting from loading them with dexamethasone. We used prototype hydrogel (pHEMA)-silicone IOLs and loaded the matrices with an anti-inflammatory drug (dexamethasone). The optical properties we analyzed experimentally were a) the modulation transfer function (MTF); b) spectral transmission; c) dioptric power. These determinations were performed on drug-loaded IOLs, IOLs that had released the drug, and IOLs that had not been drug-loaded. Loading a hydrogel-silicone IOL with dexamethasone results in impairment of its optical qualities, in particular its MTF and spectral transmission, but not its dioptric power. However, once the drug has been released, it almost recovers its initial optical properties. Introduction Cataract surgery is one of the most common operations carried out on the elderly population in developed countries. At present, due to less invasive techniques that reduce surgical risks [1,2], the criteria for surgery indications have been extended, which has made this type of surgery even more common. In general, cataract surgery is considered safe and effective, but as in other operations, there are certain effects associated with surgery that include, among others: pain, inflammation, infection, and possible intraocular hemorrhage [3]. Daily administration of anti-inflammatory and anti-infectious agents is often used to prevent or decrease these effects for at least three to four weeks after surgery [4]. Nonetheless, the bioavailability of these drugs, which are usually administered topically, is limited due to the rapid and extensive loss of their properties from the pre-corneal area caused by tear drainage and tear renewal. Moreover, the cornea is a highly effective barrier and it considerably hinders the penetration of the drug administered in this way. Consequently, after instillation of an ophthalmic drop, less than 5% of the drug penetrates the cornea and reaches the intraocular tissue, since most of it is absorbed systemically through the conjunctiva and nasolacrimal duct, which in turn can give rise to serious secondary effects. In addition, most people who suffer from cataracts, especially elderly people or those who also suffer from arthritis, find it very difficult to administer eye drops correctly, which makes the effectiveness of the treatment even lower. As a result of all of these factors, many patients do not follow the established therapeutic treatment properly or they discontinue the treatment, which considerably increases the risk of ocular complications. One way of solving this problem is based on developing a new generation of intraocular lenses (IOL) loaded with ophthalmic drugs that release the postoperative treatment into the eye [5][6][7][8][9][10] in a sustained and controlled way. This could eliminate the need for topical treatment and the risks associated with inadequate treatment compliance; it also increases the effectiveness of the medication as there is a constant and controlled intraocular release of the drugs. However, a point that has not been taken into account is that the drug-loaded IOL, whether the drug elutes from the bulk or from the surface, could modify the optical properties of the lens. The introduction of any type of aberration brought about by modifying its surface could change its capacity to form sharp images. 
Its spectral transmission could also be modified, causing a change in the quantity of total light that reaches the retina and in its spectral composition. Drug-loading a lens can be beneficial, but it should not be detrimental to the main function of a lens, which is to form clear images on the retina. With a view to analyzing whether drug-loaded IOLs undergo changes in their optical properties, we determined experimentally the optical quality of anti-inflammatory (dexamethasone) loaded lenses by measuring the modulation transfer function (MTF) in vitro. This measurement has become the internationally accepted standard method for evaluating IOL image quality [11][12][13]. Subsequently, after the drug had been released, we again determined the MTF and compared the results. Finally, in order to evaluate the quality level in both cases, i.e., the drug-loaded IOLs and those that had released the drug, we compared their MTFs with those of the corresponding original IOLs, i.e., the non-treated IOLs. All these measurements were performed on 3 mm pupils to simulate diurnal vision. Furthermore, we determined the spectral transmission in each of the cases (the drug-loaded IOL, the drug-released IOL, and the non-treated IOL) in order to establish whether there were any variations in intensity or spectral composition of the radiation that reaches the retina. Finally, we also analyzed whether the power of the IOL underwent any change when it was drug-loaded. Drug-loaded IOLs Non-commercial spherical monofocal intraocular lenses of 21 diopters and 0.24 mm thickness were used for this study. These prototype IOLs were supplied by AJL Ophthalmic S.A. (Vitoria, Spain). The IOLs were made of a hydrogel based on poly(2-hydroxyethyl methacrylate) (pHEMA) and incorporated dexamethasone (DXM), an anti-inflammatory agent, in the matrix. For preparation of the drug-loaded polymer matrix, an appropriate DXM dose was dissolved in a 2-hydroxyethyl methacrylate (HEMA, optical grade) solution under sonication and mild heat (0.005%, corresponding to drug solubility). Subsequently, ethyleneglycol dimethacrylate was incorporated in this solution as the crosslinking agent (100 mM), and the mixture was degassed before the addition of 2,2′-azobis(2-methylpropionitrile) (AIBN) as the polymerization initiator. Subsequently, the mixture was injected into a rectangular mold, and thermo-polymerization took place at 60 °C for 1 h, resulting in a solid polymer plate with DXM homogeneously dispersed within the matrix [14]. The lenses were machined from this polymer base. The migration of the drug to the surrounding medium is produced by diffusion through the polymer matrix. The process depends on the initial DXM concentration within the IOL, and should extend until complete discharge of the drug. DXM release in PBS by diffusion through the polymer matrix was monitored by UPLC-UV until complete discharge. For this purpose, a DXM-doped IOL (~20 mg) was introduced in a sterilized dialysis bag (32 x 20 mm, 12.4 kDa) with 2 mL of PBS. This bag was placed in a 60 mL polypropylene container with 50 mL of PBS and heated in a water bath at 37 °C while shaking. Dialysis medium was completely replaced by fresh PBS at corresponding times for 70 days. The experiment was carried out in triplicate. 
Afterwards, 10 mL of every dialysis sample was freeze-dried, reconstituted with ethanol and analyzed by reverse-phase high-performance liquid chromatography (RP-HPLC) in an Agilent 1220 Infinity LC coupled to a UV detector (λ = 240 nm) with an analytical column (Mediterranean Sea C18, 3 μm, 100 x 21 mm). The products were eluted utilizing a constant solvent mixture (CH 3 CN/H 2 O-TFA pH 4.5 50:50 v/v) at 0.8 mL/min. Triplicate analyses were run for every sample. Modulation transfer function measurements The MTF measurements have been described by Artigas et al. [11]. Basically, the MTF was calculated from the cross line-spread function recorded with the OPAL Vector System (Image Science Ltd. Oxford, UK) by using fast Fourier transform techniques. The artificial eye model used simulated in vivo conditions of the anterior chamber, including an artificial cornea and a wet cell containing physiological solution where the IOL was positioned, following the setup required by EN/ISO 11979-2 [15]. The light source was confined to 546 nm [15]. The detector was a Reticon K series silicon linear photodiode array, 12.8 mm long with 512 pixels. The best focus position was determined by measuring the variation of the MTF with focus at a spatial frequency of 20 c/mm. The MTF values were formed from an average of 16 array scans. The MTF measurements conformed to the requirements of the International Organization for Standardization [16,17]. Three prototype lenses were used to carry out the measurements in this study. Figure 1 shows three cross line-spread functions (LSF) that correspond to a dexamethasone-loaded IOL, the same drug-released IOL, and finally the original IOL, i.e., non-treated, which was used as a control. Spectral transmission measurements The transmission curves were obtained using a Perkin-Elmer Lambda 35 UV/VIS spectrometer. This apparatus can measure the spectrum from 200 nm onward, which means that spectral transmissions in UVA, UVB, and part of ultraviolet C (UVC) are accurately determined (precision is up to 1 nm). The integrating sphere is used, which means that all radiation that passes through the IOL, both direct and scattered, is collected by the detector. Air was taken as the reference to measure transmittance [18]. Dioptric power lens measurements To measure the IOL dioptric power, we used a focimeter with a negative lens and saline solution (0.9% NaCl) [19]. The focimeter is placed in a vertical position with a negative lens (−10 D) with its concave surface facing upward and the saline solution inside the lens to make a "wet cell" where the IOL is placed. With this configuration (with no test IOL), if we focus the target on the focimeter, the power reading is −4.50 D instead of zero. The measurements start by centering the target with the divergent lens plus saline solution; then the IOL is introduced and the target is re-centered by moving the IOL. The real IOL power is the result of subtracting 4.50 D from the focimeter reading. Results The ISO standards specify that MTF measurements should be performed on 3 mm pupils, which is the average human pupil size in diurnal vision. Figure 2 shows the MTFs of the IOLs loaded with dexamethasone together with the MTFs of the same IOLs after they had released the drug, with reference to a perfect optical system, i.e., exclusively limited by diffraction. Each of these curves is the mean of three IOLs of the same power and with an equal drug load. Moreover, Fig. 
2 shows the MTFs of the drug-released IOLs, compared with the MTF corresponding to a similar, non-treated IOL for a 3 mm pupil. This comparison is for ascertaining whether the drug release makes the IOL reach the optical quality of the original, non-treated IOL. Fig. 2. MTF of a dexamethasone-loaded IOL, and MTF of a dexamethasone-loaded IOL that had subsequently released this drug, for a 3 mm pupil compared with the MTF of a similar, but non-treated IOL. Mean of three lenses. The ISO standards specify that the MTF minimum value for an IOL to have good optical quality is 0.43 for the spatial frequency of 100 c/mm and a 3 mm pupil. However, Felipe et al. [22] and Alarcon et al. [23] emphasize how non-predictive this ISO standard is since it only takes one spatial frequency (100 c/mm) as a parameter. This is why we also determined the Average Modulation (AM) [11,20,21], which is the mean value of the MTF calculated from 0 to 100 c/mm, and the Strehl Ratio, which is a parameter used classically for quantifying aberration effects. Figure 3 shows the mean spectral transmissions of the original non-treated, the drug-loaded, and the drug-released IOLs. Discussion This study, as its title indicates, focuses exclusively on the optical properties of the lens, since our aim was to test whether loading an IOL with a type of drug (DXM) could impair the main function of a lens, which is to form images. The MTFs corresponding to the dexamethasone-loaded IOLs and 3 mm pupils (Fig. 2) are quite distant from those of a perfect optical system, i.e., only limited by diffraction, hence their quality is poor. If we now analyze these same IOLs but after they have released their dexamethasone load, we obtain the MTFs also shown in Fig. 2. In principle, the curves can be seen to draw nearer to the system limited by diffraction, i.e., their optical quality improves. This may mean that the dexamethasone impregnation does indeed affect the optical quality of the IOL. If we compare the MTFs corresponding to the drug-released IOLs with similar IOLs of the same power and thickness but which have not been drug-loaded (Fig. 2), we can see that the MTF of the drug-released IOLs is practically the same as that of a non-treated IOL. In order to evaluate these variations numerically, Table 1 shows the MTF values for the spatial frequency of 100 c/mm (ISO standard [15]) and also the Strehl Ratio (SR) and the Average Modulation (AM) for 3 mm pupils and for the drug-loaded IOL, the drug-released IOL, and the non-treated IOL. The dioptric powers measured for the different IOLs we analyzed are also included in Table 1. In order to know if these variations in the MTF of the IOL can affect a patient's vision, Felipe et al. [22] demonstrated that for multifocal IOLs the eye's tolerance to MTF decay is approximately 15% of the AM value and that it would need to reach a 25% difference in the MTF for it to affect the visual acuity of the patient significantly. Although our case deals with monofocal IOLs, these data can be taken as a reference to ascertain the influence that variations in the MTF can bear on the real vision of the patient. For a 3 mm pupil and a spatial frequency of 100 c/mm the ISO standard [15] gives, as mentioned above, a minimum value of 0.43 for the MTF of a monofocal IOL. The MTF mean value of the IOLs used in our study in these conditions is, however, 0.382, i.e., only 11% lower than the minimum value required, thus its optical quality remains good [21]. 
Moreover, as we stated above, measuring only one spatial frequency is not very significant [22,23] and, in any case, our objective was to compare the optical quality of similar IOLs, some loaded with dexamethasone, others that had released the drug, and finally others that were non-treated and were used as controls. When the IOL was loaded with dexamethasone its mean MTF was 0.259, i.e., 31% lower than the non-treated IOL, therefore the visual acuity of the patient may be compromised. When this IOL releases all the drug, its MTF value for 100 c/mm increases up to 0.353, i.e., only 8% less than for a non-treated IOL, which means that its optical quality reaches a similar level to that of the original IOL. The SR and AM values follow a similar pattern: the Strehl Ratio is 20% lower for the loaded IOL than for the non-treated IOL, but increases when the drug is released, to only 5% lower than the original IOL. With regard to the Average Modulation, the value is 13% lower for the doped IOL than for the original and increases when the drug is released, to only 3% less than the original IOL. It seems logical to think that scattering causes the decrease in the MTF, which recovers when the drug is released. On the other hand, since the dexamethasone incorporated in the polymer matrix is of a molecular nature and is two or three orders of magnitude smaller than the wavelength used for measuring the MTF, λ = 546 nm (ISO standard), the scattering that is brought about cannot be excessive. This agrees with the experimental fact that the decrease in the MTF when the IOL is loaded is not very great. However, this hypothesis should be confirmed in future studies. The spectral transmission is hardly affected by the action of the drug load (Fig. 3). This Figure shows that the IOL incorporates a perfect cut-off filter [18], which totally filters out ultraviolet radiation; only a slight uniform decrease (approximately 3%) in transmission in the visible spectrum can be observed when the IOL is drug-loaded. When the IOL releases the drug this small decrease is practically recovered. The difference between an unloaded IOL and a non-treated IOL is approximately 1%, which is within the measurement error of the spectrophotometer. In our study, and as can be observed in Table 1, the dioptric powers of the IOLs are not affected by drug-loading, as the measurements in drug-loaded, drug-released, and non-treated IOLs are always within the tolerated margin of error (± 0.4 D) [15] for IOLs. To sum up, when the matrix of a hydrophobic silicone-hydrogel IOL is loaded with dexamethasone its optical quality is affected because its MTF values for a 3 mm pupil (photopic vision) drop significantly. However, this optical quality is practically recovered when it releases the drug and almost reaches the values of a non-treated IOL. This would mean that the patient could have a lower visual acuity that would be restored within days, as soon as the drug was released. The spectral transmission is hardly affected by dexamethasone loading; just a slight, uniform decrease in the visible spectrum is observed, which is recovered when the drug is released. Likewise, the power of the IOL is not affected by the drug-loading and its value always remains within the tolerable margin of error.
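As a rough illustration of the MTF analysis described above — not the OPAL Vector System procedure itself — the following Python sketch computes an MTF as the magnitude of the Fourier transform of a synthetic, hypothetical line spread function, reads off the value at 100 c/mm, and computes the Average Modulation as the mean MTF from 0 to 100 c/mm. The Gaussian LSF width is an assumption chosen only to produce values in a plausible range.

```python
import numpy as np

# Hypothetical LSF sampled over a 1 mm window (Gaussian blur is an assumption,
# not the measured profile of any lens in the study).
n_samples, window_mm = 2048, 1.0
x_mm = np.linspace(-window_mm / 2, window_mm / 2, n_samples, endpoint=False)
sigma_mm = 0.002
lsf = np.exp(-0.5 * (x_mm / sigma_mm) ** 2)

# MTF = |FFT(LSF)| normalized to its zero-frequency value.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freq_c_per_mm = np.fft.rfftfreq(n_samples, d=window_mm / n_samples)

# Value at 100 c/mm (ISO criterion: >= 0.43 for a 3 mm pupil) and Average Modulation (0-100 c/mm).
mtf_at_100 = np.interp(100.0, freq_c_per_mm, mtf)
band = freq_c_per_mm <= 100.0
average_modulation = mtf[band].mean()

print(f"MTF(100 c/mm) = {mtf_at_100:.3f}, Average Modulation = {average_modulation:.3f}")
```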
3,904.6
2017-09-21T00:00:00.000
[ "Physics" ]
TVITERAŠI OR TWITTERAŠI ? PRODUCING AND ANALYSING A NORMALISED DATASET OF CROATIAN AND SERBIAN TWEETS In this paper we discuss the parallel manual normalisation of samples extracted from Croatian and Serbian Twitter corpora. We describe the datasets, outline the unified guidelines provided to annotators, and present a series of analyses of standard-to-non-standard transformations found in the Twitter data. The results show that closed part-of-speech classes are transformed more frequently than the open classes, that the most frequently transformed lemmas are auxiliary and modal verbs, interjections, particles and pronouns, that character deletions are more frequent than insertions and replacements, and that more transformations occur at the word end than in other positions. Croatian and Serbian are found to share many, but not all transformation patterns; while some of the discrepancies can be ascribed to the structural differences between the two languages, others appear to be better explained by looking at extralinguistic factors. The produced datasets and their initial analyses can be used for studying the properties of nonstandard language, as well as for developing language technologies for nonstandard data. I N T R O D U C T I O N Since the beginning of its wider use, computer-mediated communication (CMC) has been attracting a lot of attention in fields ranging from communication studies to natural language processing (NLP).On the one hand, CMC is seen as an important source of knowledge and opinions (Crystal 2011); on the other hand, its lexical and structural properties are a well-established research topic in linguistics and NLP.CMC occurs under special technical and social circumstances (Noblia 1998), imposing specific communicative needs and practices (Tagg 2012); as a consequence, its language often deviates from the norms of traditional text production, instantiating numerous non-standard features at all levels, from unorthodox spelling to colloquial and other out-ofvocabulary (OOV) lexis, as well as simplified syntax (see e.g.Kaufmann, Kalita 2010). The non-standard features of CMC are particularly important for NLP, as deviations from the norm make CMC difficult to process automatically, and tools developed for standard languages have a notoriously poor performance when applied to CMC data.This is evidenced by decreases in performance in the entire text processing chain, from tokenisation (Eisenstein 2013) and partof-speech tagging (Gimpel et al. 2011) to sentence parsing (Petrov, McDonald 2012).The non-standard features of CMC have been analyzed both qualitatively and quantitatively (Eisenstein 2013;Hu et al. 2013), and different strategies have been proposed for dealing with non-standardness: adapting standard tools to work on non-standard data (Gimpel et al. 2011), using pre-processing steps to tackle CMC-specific phenomena (Foster et al. 2011), and normalising CMC corpora, i.e. using a dedicated annotation level in which standard forms are assigned to non-standard words (Kaufmann, Kalita 2010;Liu et al. 2011). 
In this paper we adopt the normalisation-based approach, focusing on Twitter messages (tweets) written in Croatian and Serbian.As one of the most widely used CMC platforms, Twitter has already received a lot of attention in NLP.The number of tweets published per day are counted in hundreds of millions (Benhardus, Kalita 2013), and the content ranges from news broadcasts and official announcements by companies and institutions to personal thoughts and opinions the users share, making Twitter a rich source of data for NLP tasks related to text mining.To enable these tasks to be performed, automatic lowerlevel processing is a must, meaning in turn that the problem of nonstandardness needs to be solved.In the specific case of Twitter, an additional component influencing the structural properties of its language is that messages are constrained by the length restriction of 140 characters.Given the recent availability of basic language tools for standard Croatian and Serbian, a normalisation-based approach was deemed more cost-efficient than an adaptation of standard language tools.Additionally, performing normalisation gives researchers easy access to deflections from standard language occurring in non-standard one. Examples of tweets containing non-standard features in Croatian and Serbian are shown in Table 1.These features include phenomena typical of CMC in general, such as phonetic spelling of foreign words (e.g.fešn for fashion), abbreviations (e.g.zg for Zagreb), @ name mentions and emoticons, but also phenomena typical of Twitter like hashtags and some terms (e.g.fave), as well as some language-specific features, such as omission of diacritics (which occurs in both Croatian and Serbian, e.g.kauc for kauč -couch), and the use of fully language-specific dialectal and colloquial non-standard forms (e.g. the Ikavian dialectal form isprid for ispred -in front of in Croatian). Croatian Serbian -ei [ej] With the future goal of developing tools for automatic CMC normalisation, we manually normalised a sample of 4000 tweets per language.In the remainder of the paper we first describe the corpus the tweets were sampled from and the samples themselves, moving on to the procedure and the unified Croatian and Serbian guidelines used in the manual normalisation.We then present several initial analyses based on the normalisation outcomes; the analyses were performed starting from the normalised forms and looking towards forms found in the Twitter datasets.Specifically, we look at the distribution of standard -> non-standard transformations across parts of speech and lemmas, as well as the distribution of transformation subtypes (deletions vs. insertions vs. replacements), and we compare Croatian and Serbian.As very little related previous work exists for these languages, our main goals are to give an overview of the key trends, and to compare these trends in the two languages, facilitating the formulation of future specific linguistic hypotheses. C O R P U S C O N S T R U C T I O N A N D S A M P L I N G The corpus we employ comprises Croatian and Serbian tweets harvested with TweetCat (Ljubešić et al. 
2014b), a custom-built tool for collecting tweets written in lesser-used languages.The collection of tweets for both languages took place from 2013 to 2015, resulting in a corpus of about 25 million tokens in Croatian and 205 million tokens in Serbian, after deduplication and the filtering of foreign-language tweets and tweets without linguistically relevant content (i.e.those containing only photos, links, or emoticons). The sample we used for the manual normalisation task contained a total of 4000 tweets per language, split into four categories with 1000 tweets each.The categories were based on automatically assigned levels of technical (T) and linguistic (L) standardness (Ljubešić et al. 2015), so that 1000 tweets belonged to each of the T1L1, T1L3, T3L1 and T3L3 combinations, with the marks being 1= standard and 3=very non-standard (for more detail about the annotation of standardness levels in Twitter corpora of Croatian, Serbian and Slovene see Fišer et al. 2015).These specific categories were included with the goal of sufficiently representing non-standard forms, given that it has been shown that the language of tweets is mostly very standard in Serbian (67% of tweets being annotated with L1, and 30% with L2), and in particular Croatian (73% of tweets being annotated with L1, and 21% with L2), where Twitter is frequently used for dissemination of information by news agencies and other official accounts (Fišer et al. 2015).To ensure enough content was available, only tweets over 100 characters long were included in the sample. Some tweets in the initial sample were deemed as irrelevant for the normalisation task and were excluded from further processing; these were messages that were unintelligible or automatically generated (e.g.news or advert lead-ins), as well as those that were (almost) completely written in a foreign language, and those that contained no linguistic material.After their removal, 3877 tweets (amounting to 89,215 tokens) remained in the Croatian sample, and 3750 tweets (91,877 tokens) in the Serbian one.Finally, due to nonone-to-one mappings (see section 3 for more detail), the token count changed during normalisation, so that the normalised sample comprises 89,542 tokens for Croatian, and 92,236 tokens for Serbian. After manual normalisation, the normalised sample was automatically linguistically annotated; MSD (morphosyntactic description) tagging and lemmatisation were performed with the tagger and lemmatiser described in Ljubešić et al. (2016b).The accuracy of morphosyntactic tagging (773 different labels) is estimated at ~92% while the part-of-speech tagging (13 different labels) and lemmatisation reach ~98% accuracy. N O R M A L I S A T I O N P R O C E D U R E A N D G U I D E L I N E S The manual normalisation was performed using the web-based annotation platform Webanno, which allows users to define their own annotation levels.In our study, three levels were defined: corrections (tokenisation corrections), sentences (sentence segmentation corrections) and normalisation (linguistic normalisation).Guidelines were developed for each of the three levels, explaining both the technical (WebAnno-related) and the content-related side of interventions.Up to four values could be entered per original token at each level. 
Each tweet was normalised independently by two annotators. A curation procedure followed, in which the decisions of the different annotators were compared and cases of inter-annotator disagreement were resolved. For Croatian, the curation procedure was coordinated between the two annotators, while for Serbian the task was performed by an independent curator. The guidelines the annotators received are described in the following subsections. General rules The annotators were instructed to identify tweets deemed irrelevant (e.g. due to being automatically generated, see section 2) and mark them for deletion. As for the relevant tweets, overall, a minimal intervention principle was adopted and it was decided not to make corrections that would be impossible, or extremely difficult, for a machine learning algorithm to learn. Context was to be taken into account when resolving potentially problematic issues and ambiguous cases (e.g. in Croatian ko -> kao - as, like, in sreću svu širimo ko zarazu - we spread happiness as if it were a contagious disease, but ko -> tko - who in Ko je ljep? - Who is beautiful?); if an issue could not be resolved based on the context, no normalisations were to be made. Segmentation and tokenisation Defining tokens and sentences in CMC is less straightforward than in standard language corpora, and automatic procedures are more error-prone. For this reason, automatic tokenisation and segmentation were manually checked and corrected where needed. Corrections at the sentence segmentation level relied on punctuation, if present, on other symbols (name mentions designated with @, emoticons/emojis, and hashtags) in case they occupied a position where punctuation would normally be found, and on the annotators' intuition if no explicit symbols were used. Annotators were instructed to only insert a sentence boundary when they were fully confident one was needed, and to pay special attention to sentence-internal use of dots (...) and punctuation sequences such as ?!?!, which can indicate pauses or surprise rather than being sentence boundary markers. As for tokenisation, guidelines were provided for cases known to be problematic: hyphenated inflectional endings on abbreviations (e.g. BMW-u, from BMW), cases where vowel omission is marked by an apostrophe (e.g. pos'o, from posao - job), and abbreviations ending with a dot (e.g. dr., from drugi - other), which often lead to incorrect automatic splitting of a single token into two or three separate ones. An opposite case that was mentioned was that of word combinations containing hyphens, which are sometimes not separated into multiple tokens when they should be. Linguistic normalisation The level we focus on in this paper is normalisation. The main goal of manual normalisation was to provide training data for building tools for automatic normalisation of CMC data, but normalisation in general is also important for the end users of CMC corpora, as it enables them to perform queries based on standard forms, much along the lines of dialectal or diachronic data. In formulating the normalisation guidelines, we tried to strike a balance between the requirements of machine learning algorithms and those of linguistic analysis. The starting point of our work was the guidelines developed for Slovene Twitter data within the JANES project (see Čibej et al. 2016), which were adapted for Croatian and Serbian based on the authors' intuition, consultation with the annotators and other researchers, as well as orthography and grammar manuals of the languages concerned.
Normalisation was restricted to word level, and no word order or syntactic deviations from the standard were corrected. Additional kinds of corrections that were explicitly excluded were those concerning lexical choice (e.g. colloquial words were not 'translated' into their standard equivalents; for instance, komp was not changed into kompjuter - computer), the use of punctuation, usernames and hashtags (regardless of what kind of linguistic material they contained), and ellipsis. In other words, we focused on non-standard forms that can be seen as spelling deviations, not intervening on OOV items that were not misspelt, on style, or on Twitter-specific phenomena. Finally, due to the complexity of the rules listed in orthography manuals, we decided not to intervene when it came to capitalisation, leaving everything as is, including lower case letters at sentence beginnings. The following normalisation rules were applied: normalise Croatian/Serbian words making use of foreign letters or letter combinations (shisha -> šiša - he/she cuts hair, chak -> čak - even, kavizzu -> kavicu - coffee); normalise non-standard spellings (regardless of whether they are regional forms, phonetic adaptations, or forms containing an obvious typo, and regardless of whether they are intended or non-intended). As can be seen from the examples, several of the above rules lead to non-one-to-one mappings between the original and normalised tokens, affecting the total token count discussed in section 2. D A T A A N A L Y S I S In this section we present the results of a series of analyses performed on the manually normalised Croatian and Serbian Twitter datasets. In these analyses we look at (1) original tokens, (2) normalised tokens (up to four tokens per one original token), (3) morphosyntactic descriptions automatically assigned to normalised tokens, and (4) lemmata automatically assigned to normalised tokens. As explained in section 3.3, the normalisation guidelines we used were formulated in terms of descriptive categories, some of which are difficult or impossible to identify automatically. In the analyses we thus look at the normalisation outcomes using more readily identifiable criteria: parts of speech, specific lemmas and surface forms, Levenshtein transformation types, and the position of transformations within words. While in section 3 we dealt with normalisation, i.e. the assignment of standard language forms to non-standard ones, in all analyses the focus is on the opposite direction (standard -> non-standard forms), as our goal is to reconstruct the modifications that take place in non-standard language use compared to the standard; in this case we talk about transformations. Analysis by part-of-speech The analysis we dedicate most attention to is based on part-of-speech information assigned to each token in the normalised sample. We first look at part-of-speech distributions in Croatian vs. Serbian CMC; a log likelihood value between 3.8 and 6.5 is significant at p<0.05, while a value of 6.6 or more is significant at p<0.01 (Leech et al. 2000: 17; Mair et al. 2002). We also compare the Twitter distributions to the part-of-speech distribution in a standard language dataset for Croatian - hr500k (Ljubešić et al.
2016b); given that a comparable standard dataset for Serbian was not available at the time of writing, here we only look at relative frequencies (%), without conducting statistical tests. Compared to the standard language dataset, the CMC data show an expected ten times higher percentage of interjections and of the already discussed residuals. Furthermore, in CMC there are half as many adjectives as in the standard data, about one-third fewer nouns and one-fourth fewer prepositions, while verbs and pronouns are more present in CMC than in the standard data. Such findings are in line with CMC being a largely informal genre, where a high frequency of verbs compared to nouns is expected (see e.g. Biber et al. 1998: 68 for English). Going back to the Twitter datasets, for each part of speech we also examined the percentages of forms that have been transformed; these results are given in Table 3. Analysis by lemma and surface form The next set of analyses focuses on the most frequent lemmata in each of the resources, as well as their comparison to a standard-language resource. The most frequently normalised lemmas and surface forms are analysed as well. The lists of the most frequent lemmata in the two Twitter datasets and the hr500k standard Croatian dataset are displayed in Table 4. The most obvious difference between the two languages, not traceable to the difference between CMC and standard language, is the higher frequency of the already discussed conjunction da in Serbian. The most obvious difference between the non-standard and standard registers is in the pronoun ja (I, me), which accounts for more than 1% of occurrences in both CMC datasets, while it does not make it into the top 20 entries in standard Croatian. Most other lemmata are present in all three lists, with some slight differences in percentage and rank. The biggest difference in percentage can be observed on punctuation, with the full stop and comma being more frequent in standard Croatian than in non-standard Croatian and Serbian. On the other hand, the ellipsis, the exclamation mark and the question mark make it to either both or one of the lists of non-standard data, but not the standard data list. These divergences seem to point to punctuation not being underused in non-standard language, but rather being used somewhat differently, possibly due to its often expressive nature. Table 4: The 20 most frequent lemmata in the Croatian and Serbian Twitter datasets and the standard hr500k Croatian dataset. In Table 5 we show the lemmata that were most frequently transformed in each of the Twitter datasets. For each lemma we report the frequency, the overall percentage of the transformed forms this lemma covers, as well as the percentage of all forms of that lemma that were transformed. We again disregard transformations due to diacritic omissions. Table 5: The 20 most frequently transformed lemmata. The third numerical column describes the proportion of the lemma occurrences that were transformed.
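As a rough illustration of how the per-lemma figures reported in Table 5 could be derived, the sketch below counts, for each lemma, how many of its occurrences differ between the original and the normalised form. It is a simplification of the actual procedure (diacritic-only changes are not filtered out here), and the input triples are invented examples rather than the annotated data.

```python
from collections import Counter

def lemma_transformation_stats(tokens):
    """tokens: iterable of (lemma, original_form, normalised_form) triples.
    Returns, per lemma: total occurrences, transformed occurrences, and the
    share of occurrences that were transformed (third column of Table 5)."""
    total, transformed = Counter(), Counter()
    for lemma, original, normalised in tokens:
        total[lemma] += 1
        if original != normalised:       # any difference counts as a transformation here
            transformed[lemma] += 1
    return {lemma: (total[lemma], transformed[lemma], transformed[lemma] / total[lemma])
            for lemma in total}

# Illustrative triples; the real input would be the tagged, manually normalised sample.
sample = [("biti", "sam", "sam"), ("biti", "nebi", "ne bih"),
          ("kao", "ko", "kao"), ("kao", "kao", "kao")]
for lemma, (n, t, share) in lemma_transformation_stats(sample).items():
    print(f"{lemma}: {t}/{n} transformed ({share:.0%})")
```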
Many lemmata are present in both lists, with some variation in rank. In Croatian the most frequently transformed lemma is the ellipsis punctuation (...), which occupies the 13th place in Serbian. The overall most frequently transformed forms come from the verb biti (be). In Croatian, biti is followed by a series of function words, while in Serbian two additional verbs make the top five as well: jebati (fuck), mostly due to the high frequency of abbreviations such as jbg (from jebi ga - fuck it), and hteti (want), mostly due to the drop of the initial h, as in oću (hoću - I want) or oće (hoće - he/she wants). The rest of the list mostly consists of function words and Twitter-specific nouns (tweet and Twitter), as well as two proper nouns in Serbian: the name of the current prime minister Aleksandar Vučić (frequently mentioned and sometimes encoded using the initials AV or the form AVučić), and the Serbian capital Belgrade (mostly shortened to Bg or Bgd). Finally, the 20 most frequently transformed surface forms, omitting those that only lack diacritics, are given in Table 6. Table 6: The 20 most frequently transformed surface forms in the Croatian and Serbian Twitter datasets. While some forms are shared between the two lists - for instance jel (je li - is it), al (ali - but), bi (bih - would), and ko (kao - like, also tko - who in Croatian) - others are language-specific: shortened forms such as kak (kako - how), tak (tako - like that) and ak (ako - if) are specific to Croatian, while abbreviations such as fb (Facebook) and tw (Twitter), min (min. for minute) and god (god. for godina - year), or jbt (jebo te - fuck) and jbg (jebi ga - fuck it) are frequent only in Serbian. Analysis by transformation type We start the next analysis by calculating for each language the probability distribution of the three types of Levenshtein transformations - deletions, insertions and replacements (Levenshtein 1966) - going from the normalised forms to the forms found in tweets. The results are summarised in Table 7. The numbers in the first three rows capture all transformations, and show that while deletions and insertions are significantly more frequent in Croatian than in Serbian, the opposite is true for replacements. The fact that Serbian has over 10% more replacements than Croatian can be explained by its already mentioned more pronounced tendency towards diacritic omission. In fact, the numbers in the bottom rows, obtained after we discarded the tokens in which the transformations consisted solely in the omission of diacritics, show partly reversed trends: deletions become more frequent in Serbian, and replacements in Croatian. Overall, the most frequent transformation type is character dropping, followed by replacements, roughly half of which in Croatian, and four fifths in Serbian, are due to omission of diacritics.
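A minimal sketch of how the transformation types (and, anticipating the next subsection, their positions within the word) could be derived from aligned standard/non-standard pairs is given below. It uses Python's difflib.SequenceMatcher as a stand-in for the authors' actual alignment procedure, and the example pairs are illustrative rather than drawn from the annotated sample.

```python
from collections import Counter
from difflib import SequenceMatcher

def char_transformations(standard, nonstandard):
    """Classify the character-level edits needed to turn the standard form into
    the non-standard Twitter form, with their relative position in the word."""
    ops = []
    matcher = SequenceMatcher(a=standard, b=nonstandard, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            continue
        pos = i1 / max(len(standard) - 1, 1)    # 0 = word start, 1 = word end
        if tag == "delete":                      # character dropped in the tweet
            ops += [("deletion", standard[k], pos) for k in range(i1, i2)]
        elif tag == "insert":                    # character added in the tweet
            ops += [("insertion", nonstandard[k], pos) for k in range(j1, j2)]
        else:                                    # 'replace'
            ops += [("replacement", f"{standard[i1:i2]}->{nonstandard[j1:j2]}", pos)]
    return ops

# Illustrative (standard, non-standard) pairs of the kind discussed in the text.
pairs = [("kao", "ko"), ("je li", "jel"), ("kauč", "kauc"), ("haha", "hahahaha")]
counts = Counter(op for p in pairs for op, _, _ in char_transformations(*p))
print(counts)   # distribution of deletions, insertions and replacements
```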
Analysis by position of transformation In the final part of the analysis we focus on the position of transformations (deletions, insertions, replacements) inside the word. Compared to insertions, deletions are more frequently found inside the string, but there is again an emphasis on word end, largely due to final vowel deletions. The corresponding histograms for Serbian can be seen in Figure 2. These histograms show a much less pronounced trend of transformations predominantly being at the end of the string, primarily due to the more frequent omission of diacritics compared to Croatian. This is also reflected in the replacement histogram, where most transformations occur in the second half of the string, but not at its very end. Insertions again have the strongest tendency towards the end of the string, but both insertions and deletions are less biased towards the end than in Croatian. C O N C L U S I O N In this paper we presented a sample of Croatian and Serbian tweets manually normalised by following unified annotation guidelines. The produced datasets will be highly useful both for studying the language of CMC and for developing language technologies for CMC data, especially text normalisers that will enable standard language technologies to be used in downstream processing. We also carried out a series of analyses on the described datasets. Inspecting the overall frequency of transformations, we concluded that Serbian shows a greater tendency towards omitting diacritics, while Croatian is more susceptible to other types of non-standard forms. The distribution of parts of speech in both languages, compared to a standard Croatian dataset, revealed a lower percentage of adjectives and nouns and a higher percentage of verbs in CMC. As for transformations of different parts of speech, the most frequent transformations were those on closed part-of-speech classes. Lemma-based analyses showed the most frequently transformed lemmas to be auxiliary and modal verbs, interjections, particles and pronouns. Focusing on Levenshtein transformations, we observed that, putting aside diacritic omissions, the most frequent transformations were deletions, with the amounts of insertions and replacements being similar. Deletions consisted mostly of vowel droppings, while insertions were mostly due to vowel repetitions and prolonged interjections; most replacements were due to diacritic omissions and regional variants. Finally, we found that transformations mostly occurred at word end, and very infrequently at word beginning, especially in Croatian. Insertions were found to have the most pronounced tendency towards the end, deletions coming second. These initial analyses are intended to provide a starting point for studies of more specific linguistic phenomena, as well as extralinguistic factors such as user age. In future work we also plan to focus on a lexical analysis of CMC, not captured in our normalisation guidelines, but shown in previous work (Fišer et al. 2015) to be very relevant for Croatian and Serbian, as they both display a higher percentage of lexical than structural non-standard forms.
standard Croatian. In a second step, we further zoom in on CMC data and compare the distribution of transformations by part of speech in Croatian and Serbian. The results of the comparison of part-of-speech distributions in the Twitter data are shown in Table 2. Both absolute and relative frequencies are shown; the LL column contains the values of the log likelihood statistic, which indicates the degree of significance of the difference between frequencies in the Croatian and Serbian data; the +/- sign indicates over/under-use in Croatian compared to Serbian. Figure 1: Transformations in Croatian by position. Figure 1 shows the results for Croatian. The overall trend seen in the first histogram is that transformations mostly occur at the word end, and barely ever at word beginning. Replacements, typically being due to omissions of diacritics, as well as some dialectal transformations, occur inside the word as well, although still more frequently at word end. Insertions have the strongest tendency towards the end of the word; a closer inspection of all strings shows that most insertions are in fact expansions via repetitions of the final vowel. Figure 2: Transformations in Serbian by position. Table 2: Comparison of part-of-speech distribution in the Croatian and Serbian Twitter datasets and the standard Croatian hr500k dataset. The results show that the biggest difference in the distribution of parts of speech between Croatian and Serbian CMC data lies in the residuals, a part of speech that, in addition to the standard non-classifiable residuals, covers foreign words, emoticons/smileys, hashtags, @ name mentions and URLs. Looking at specific types of residuals, the biggest difference is observed for URLs. We thank the two anonymous reviewers for underlining the relevance of these variables, of which age and account status (private vs. corporate) seem to be most promising in terms of data availability. Manual inspections of the corpus content so far indicate that more very young (secondary school age) Twitter users are found in Serbia than in Croatia, while more corporate accounts are present in the Croatian sample. This is, of course, a very tentative claim, whose further discussion we leave for future work, in which variables such as the users' age, education level and socioeconomic status, as well as the private vs. corporate account status, need to be included. Among the remaining parts of speech, a substantial structurally motivated difference is observed on conjunctions, due mostly to da (that), whose relative frequency is twice as high in Serbian as in Croatian (see Table 4, section 4.2). Da is used in complex predicates in combination with the present tense in Serbian; in Croatian, verb infinitives are normally used instead of the da + present tense construction (Ser. mogu da uradim = Cro. mogu uraditi - I can do). As for the other PoS differences, they are mostly explained by the initial difference in the frequency of residuals.
To check this, we recalculated the relative frequencies and the LL values after removing residuals and interjections (another CMC-specific part of speech), obtaining the following LLs: adjectives 16.74, conjunctions 168.15, numerals 69.54, nouns 0.73, particles -2.49, pronouns -62.36, prepositions 8.97, adverbs 37.16, verbs -11.92, abbreviations 62.57, punctuation 69.32. While many of the differences remain significant, most values become smaller, indicating that no linguistic factors beyond those already mentioned are at play. Table 3: Frequencies of transformed tokens by part of speech in the Croatian and Serbian Twitter datasets. The overall percentage of tokens that were transformed is quite close in the two languages: 9.34% (8360) in Croatian and 8.57% (7910) in Serbian. However, after the transformations due to diacritic omissions are discarded, we are left with 6.87% (6156) transformed tokens in Croatian and 3.81% (3511) transformed tokens in Serbian, which shows that diacritics are omitted more often in Serbian, while Croatian has a greater tendency towards non-standard forms beyond diacritic omission. The frequencies of transformed tokens by PoS shown in Table 3 are limited to those tokens that have undergone transformations other than diacritic omissions. As above, the log likelihood statistic is reported alongside the frequencies. The highest percentage of transformed tokens is found among interjections (mostly due to vowel or syllable repetitions, as in Hahahahaha), abbreviations (mostly due to omissions of the final punctuation, as in god instead of god. for godina - year), and particles. The most frequently transformed particles, with the corresponding absolute frequencies in Croatian and Serbian, are jel (shortened from je li - is it, 82 vs. 73), nebi (shortened from ne bi(h) - would not, 16 vs. 7), dal (shortened from da li - would it, 12 vs. 4), and nek (neka - let it). The high share of transformed pronouns is mostly due to the non-standard ko often being used in Croatian instead of the standard tko - who (also in compounds such as ne(t)ko - somebody), and šta being used instead of što (what); in Serbian, ko and šta are the standard forms. The only two parts of speech that undergo significantly more transformations in Serbian are abbreviations and residuals, the latter possibly due to Croatian containing more URLs, hashtags and @ name mentions, which were not normalised. Among the open part-of-speech classes most transformations happen among verbs (in particular the auxiliary/copula biti - be; see Table 5 in section 4.2) and adverbs, once again much more frequently in Croatian than in Serbian, as evidenced by very high LL values; one possible reason is the frequent shortening of infinitives in Croatian (e.g. gledat for gledati - watch), which is highly atypical for Serbian. Nouns come next, with a similar percentage of transformed forms in the two languages. Adjectives are placed last and are only slightly more frequently transformed in Croatian than in Serbian, with the difference not reaching significance. Before looking at this issue through Levenshtein transformations, we focus on a lemma-based analysis.
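For reference, the corpus-comparison log-likelihood statistic reported in the LL columns can be computed as in the following sketch. The formula follows the standard Rayson-and-Garside-style G² calculation with the significance thresholds cited above; the corpus sizes are the normalised sample sizes from section 2, while the frequency counts are placeholders, not values from our tables.

```python
import math

def log_likelihood(freq1, size1, freq2, size2):
    """Log-likelihood (G2) for comparing the frequency of an item in two corpora.
    freq1/freq2: observed counts; size1/size2: total token counts of each corpus."""
    e1 = size1 * (freq1 + freq2) / (size1 + size2)   # expected count, corpus 1
    e2 = size2 * (freq1 + freq2) / (size1 + size2)   # expected count, corpus 2
    ll = 0.0
    if freq1 > 0:
        ll += freq1 * math.log(freq1 / e1)
    if freq2 > 0:
        ll += freq2 * math.log(freq2 / e2)
    return 2 * ll

# Placeholder counts: a part of speech observed in the Croatian and Serbian samples.
ll = log_likelihood(freq1=820, size1=89_542, freq2=610, size2=92_236)
significance = "p<0.01" if ll >= 6.6 else "p<0.05" if ll >= 3.8 else "n.s."
print(f"LL = {ll:.2f} ({significance})")
```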
Table 7: Comparison of transformation distributions in Croatian and Serbian, with and without (-d) diacritic omission. We next analyse the most frequent specific transformations by language. In Table 8 we show the top 10 transformations per Levenshtein transformation type, separately for Croatian and Serbian. Table 8: The 10 most frequent transformations by language and type. As expected, the most frequent deletions in both languages are those of vowels, but with some exceptions as well. In Croatian the most frequent cases are deletions of i (as in al for ali - but, and il for ili - or), the dot (either within punctuation ..., or in abbreviations, as in npr for npr. - e.g.), the space (due to the merging of words such as jel for je li - is it, or nezz for ne znam - I don't know), a (in shortenings such as ko for kao - like and nek for neka - let it), j (due to the use of the ikavian yat reflex, as in di for gdje - where, or uvik for uvijek - always), and e (in shortenings such as bu for bude - will be, or ajd for hajde - come on). In Serbian, the most frequent deletions are those of e (in shortenings like aj for ajde - come on, or jbg for jebi ga - fuck), a (in shortened forms such as ko for kao - like, or reko for rekao - said), i (in jel for je li - is it, al for ali - but, or msm for mislim - I think), the space (in merged words like jel for je li - is it, or ustvari for u stvari - actually), and o (in shortenings like jbt for jebote - fuck, fb for facebook and bi for bismo - we would). This analysis indicates that in Croatian deletions are more frequent on high-frequency words, while Serbian shows a tendency towards shortening frequently co-occurring terms or phrases. Insertions in both languages are mostly due to interjections, and some lexical words, containing repeated syllables (e.g. hahahahaha) or repeated vowels (as in vodiiiiiiii - leads). As for replacements, while in Serbian they mostly cover the omission of diacritics and the marking of vowel omissions with an apostrophe (as in je l' for je li - is it, or ost'o for ostao - he stayed, a phenomenon virtually non-existent in Croatian), in Croatian there are three additional frequent cases: e-i (due to the use of the ikavian yat reflex, as in vitar for vjetar - wind), o-a (in the substandard pronoun variant šta (što - what), and the southern dialectal endings of present participles like pogodia (pogodio - he hit) and falija (falio - lacked)), and m-n (transformation of the standard ending m in the southern dialect, as in san (sam - I am) or van (vam - to you),
7,239.4
2016-09-27T00:00:00.000
[ "Linguistics", "Computer Science" ]
Doxorubicin-Resistant TNBC Cells Exhibit Rapid Growth with Cancer Stem Cell-like Properties and EMT Phenotype, Which Can Be Transferred to Parental Cells through Autocrine Signaling Emerging evidence suggests that breast cancer stem cells (BCSCs) and epithelial–mesenchymal transition (EMT) may be involved in resistance to doxorubicin. However, it is unclear whether the doxorubicin-induced EMT and expansion of BCSCs are related to cancer dormancy or to outgrowing cancer cells that maintain resistance to doxorubicin, or whether these phenotypes can be transferred to other doxorubicin-sensitive cells. Here, we characterized the phenotype of doxorubicin-resistant TNBC cells while monitoring the EMT process and the expansion of CSCs during the establishment of doxorubicin-resistant MDA-MB-231 human breast cancer cells (DRM cells). In addition, we assessed the potential signaling associated with the EMT process and the expansion of CSCs in the doxorubicin resistance of DRM cells. DRM cells exhibited morphological changes from spindle-shaped MDA-MB-231 cells into round-shaped giant cells. They exhibited highly proliferative, EMT, adhesive, and invasive phenotypes. Molecularly, they showed up-regulation of Cyclin D1, mesenchymal markers (β-catenin and N-cadherin), MMP-2, MMP-9, and ICAM-1, and down-regulation of E-cadherin. As the molecular mechanism responsible for the resistance to doxorubicin, up-regulation of EGFR and its downstream signaling was suggested. AKT and ERK1/2 expression were also increased in DRM cells with the advancement of resistance to doxorubicin. Furthermore, the doxorubicin resistance of DRM cells can be transferred by autocrine signaling. In conclusion, DRM cells harbored EMT features with CSC properties, possessing increased proliferation, invasion, migration, and adhesion ability. The doxorubicin resistance, and the doxorubicin-induced EMT and CSC properties of DRM cells, can be transferred to parental cells through autocrine signaling. Lastly, this feature of DRM cells might be associated with the up-regulation of EGFR. Introduction Cancer is one of the most fatal diseases in the world, and breast cancer is one of the most frequent types of cancer in women in terms of incidence and mortality [1]. Breast cancer is a highly heterogeneous disease that displays diverse morphological features and variable clinical outcomes [2]. The responsiveness of breast cancer to treatment depends on its specific biological characteristics; therefore, several breast cancer classifications have been developed [3]. One classification includes basal-like, luminal A or B, luminal ER−/AR+, and ERBB2/HER2-amplified subtypes. Among them, basal-like, also called triple-negative, breast cancer (TNBC) shows more aggressive behavior (higher grade, proliferation, and recurrence rate) than other types of breast cancer [4]. It is called triple-negative because it tests negative for estrogen receptors, progesterone receptors, and excess ERBB2/HER2 protein, which means that it does not respond to hormonal or HER2-targeted therapy [5]. For TNBC, the main treatment is conventional cytotoxic systemic chemotherapy. Initially, TNBC is susceptible to conventional chemotherapy, but this initial susceptibility to treatment does not correlate with overall survival, even in patients with TNBC who obtained complete remission [6][7][8][9]. Among systemic chemotherapeutic agents, doxorubicin is one of the most frequently used drugs [10].
It induces DNA intercalation, topoisomerase II inhibition, and free radical formation [11]. It is well recognized that longer exposure to chemotherapeutic agents may generate an adaptive cellular response that results in the induction of acquired drug resistance [12]. TNBC usually acquires resistance to doxorubicin; after acquiring resistance to doxorubicin, the cancer cells develop multi-drug resistant phenotypes [13,14]. Recently, emerging evidence suggests that breast cancer stem cells (BCSCs) and epithelial-mesenchymal transition (EMT) may be involved in doxorubicin resistance [15]. Cancer stem cells (CSCs) can extensively proliferate, self-renew, differentiate to multiple lineages, and generate a tumor mass [16]. TNBC is enriched in cancer stem cell populations, and CSCs are distinguished from other cancer cells by the expression of the cell surface markers CD44+/CD24− [4,17] and overexpression of octamer-binding transcription factor 3/4 (OCT-3/4) [18]. Recent studies provide clear evidence that breast cancer stem cells have a prominent role in recurrence and distant metastasis as well as drug resistance [19][20][21]. EMT is the process by which cells lose epithelial features, acquire mesenchymal features, and gain enhanced invasive and metastatic behaviors. It is also involved in enhanced cancer cell survival and immune tolerance. In addition, the activation of EMT programs in cancer cells expands their generation of chemo-resistant breast cancer stem cells (BCSCs). Many signaling pathways such as TGF-β, Wnt, Notch, TNF-α, NF-κB, RTK, MAPK/ERK, and PI3K/Akt are involved in CSC maintenance [22][23][24]. Several lines of evidence suggest that CSCs generated through EMT exhibit resistance to conventional chemotherapies [22]. EMT and CSCs are also deeply involved in doxorubicin resistance through dormancy [15,25]. Therefore, it is believed that activation of EMT programs is tightly linked with the expansion of cancer stem cells [10,26,27] and that CSCs exhibit doxorubicin resistance through dormancy [28]. However, it is uncertain whether doxorubicin-induced EMT and acquisition of CSC properties are related to cancer dormancy or to outgrowth of cancer cells with maintenance of doxorubicin resistance, because radio-resistant breast cancer cells have been shown to be highly proliferative while harboring EMT and CSC phenotypes [29]. In addition, the mechanisms accounting for the maintenance and/or induction of EMT and CSCs still remain largely obscure [10], especially in relation to the transfer of these phenotypes to other doxorubicin-sensitive cells [26]. In this study, we characterized the phenotype of doxorubicin-resistant TNBC cells while monitoring the EMT process and the expansion of CSCs during the establishment of doxorubicin-resistant TNBC cells. In addition, we assessed the potential signaling associated with the EMT process and the expansion of CSCs in doxorubicin-resistant TNBC cells. Targeting the signaling pathways involved in doxorubicin-resistant TNBC cells may improve the effectiveness of therapeutic modalities for TNBC. Doxorubicin-Resistant MDA-MB-231 (DRM) Cells Were Established by Continuous Treatment with Increasing Concentrations of Doxorubicin To establish doxorubicin-resistant cells, we treated them with continuously increasing concentrations of doxorubicin up to 100 nM as the final concentration. The experimental design for the induction of doxorubicin resistance is depicted in Figure 1A.
The cells which showed resistance at 22.5, 50 and 100 nM were kept in a liquid nitrogen tank (−190 °C) for further studies. Cells that showed viability after treatment with 10 µM doxorubicin were considered resistant, because the plasma concentration of doxorubicin is ~100 nM [10,30]. DRM Cells Showed Morphological Changes and Increased Proliferative Capacity while Acquiring Resistance to Doxorubicin The morphological changes in the DRM cells were photographed under a light microscope. The parental cells changed from spindle-like structures to cobblestone-like giant cells while acquiring resistance to doxorubicin (Figure 2A). The proliferative activity was also highly increased in the DRM cells while acquiring resistance to higher concentrations of doxorubicin (Figure 2A). To further characterize the morphological difference between the parental and the DRM cells, we performed Mayer and DAPI staining (Figure 2A,B). DRM cells showed an increase in the number and size of cells while acquiring resistance to doxorubicin. In addition, the Mayer stain also revealed the morphological changes of DRM cells with the advancement of resistance to doxorubicin (Figure 2B). DAPI staining revealed that DRM cells have a larger nuclear size than the parental cells while acquiring resistance to higher concentrations of doxorubicin (Figure 2C). These findings suggest that DRM cells undergo morphological changes and show increased proliferative activity while acquiring resistance to higher concentrations of doxorubicin. DRM Cells Showed an Increase in Proliferation, Invasion, Migration, and Adhesion Characteristics Unexpectedly, DRM cells showed high proliferative activity. Here, we confirmed this finding with a colony-forming assay, which revealed that DRM cells acquired high proliferative ability; the number of colonies increased with the advancement of resistance to higher concentrations of doxorubicin (Figure 3). Resistance to chemotherapy as a result of continuous exposure to the chemotherapeutic drug is usually accompanied by enhanced migration and metastasis of tumor cells [28]. Therefore, we checked whether DRM cells showed increased invasion, migration, and adhesion ability. The transwell invasion assay showed a significant increase in the invasion of DRM cells with the advancement of resistance to higher concentrations of doxorubicin. The relative invasion abilities of the 22.5 nM, 50 nM, and 100 nM DRM cells, and the parental cells as control, were 151%, 181%, 238%, and 100% at 24 h, respectively (Figure 4A). With regard to cell migration, the wound healing assay indicated that the areas of wound closure in 22.5 nM, 50 nM, and 100 nM DRM cells, and the parental cells, were 47%, 22%, 15%, and 58%, respectively (Figure 5). The adhesion of DRM cells to endothelial cells (ECs) was significantly increased with the advancement of resistance to higher concentrations of doxorubicin (Figure 6A). These results suggest that DRM cells acquired increased proliferation, invasion, migration, and adhesion ability with the advancement of resistance to higher concentrations of doxorubicin. DRM Cells Expanded the Population of CSCs while Acquiring Resistance to Doxorubicin Cancer stem cells (CSCs) are highly associated with the development of drug-resistant cancer cells [25]. To test whether CSCs expanded while acquiring doxorubicin resistance, we investigated the expression of CD44, a representative CSC marker [10,[30][31][32]. Western blot analysis revealed that DRM cells increased the expression of CD44 while acquiring doxorubicin resistance (Figure 7A). Another CSC marker, OCT 3/4, is important in maintaining pluripotent cells [33]. Its expression was also increased with the advancement of doxorubicin resistance of DRM cells (Figure 7A). These findings suggest that the CSC population of DRM cells expanded with the advancement of resistance to higher concentrations of doxorubicin. DRM Cells Showed Highly Proliferative, EMT, Adhesive, and Invasive Phenotypes Molecularly Next, we tried to molecularly confirm that DRM cells showed an increase in proliferation, invasion, migration, and adhesion characteristics. First, we assessed the expression of Cyclin D1 because it is a representative biomarker for cell proliferation. Western blot analysis revealed that cyclin D1 was up-regulated in DRM cells with the advancement of resistance to higher concentrations of doxorubicin (Figure 7B). Next, we investigated the EMT phenotype because EMT plays a significant role in tumor progression, metastasis, and chemo-resistance [10,34]. Western blot analysis revealed that DRM cells showed up-regulation of mesenchymal markers such as β-catenin and N-cadherin, and down-regulation of epithelial markers such as E-cadherin, with the advancement of resistance to higher concentrations of doxorubicin (Figure 7C). These findings were consistent with EMT. As an adhesion molecule, we chose ICAM-1; it also showed a significant increase in DRM cells while acquiring resistance to higher concentrations of doxorubicin (Figure 7C). MMP-2 (gelatinase-A) and MMP-9 (gelatinase-B) are involved in proteolytic digestion of the extracellular matrix (ECM) for cancer invasion and metastasis [35]. Thus, we investigated the expression of MMP-2 and MMP-9 with gelatin zymography, which showed an increase in MMP-2 and -9 expression with the advancement of resistance to higher concentrations of doxorubicin (Figure 7D). The molecular expression profile of DRM cells suggests that DRM cells acquired a proliferative, EMT, adhesive, invasive, and metastatic phenotype. Epidermal Growth Factor Receptor (EGFR) Upregulation Was Associated with Doxorubicin Resistance of DRM Cells Up-regulation of the epidermal growth factor receptor (EGFR) is associated with high proliferation and drug resistance [36]. Here, we investigated the expression of EGFR in DRM cells. Western blot analysis revealed that EGFR expression was increased with the advancement of resistance to higher concentrations of doxorubicin (Figure 8). The two representative downstream signals of EGFR, AKT and ERK 1/2, were also increased in DRM cells with the advancement of resistance to higher concentrations of doxorubicin (Figure 8). These results suggest that the doxorubicin resistance of DRM cells was at least in part associated with the upregulation of EGFR and the activation of its downstream signaling. Doxorubicin Resistance of DRM Cells Can Be Transferred to p-MDA-MB 231 Cells by Autocrine Signaling Cell-to-cell communication networks have been one of the many driving forces behind the development of drug resistance and CSCs [28,37]. Thus, to determine whether autocrine mechanisms are implicated in the tumor microenvironment, we grew the parental cells in media conditioned by the resistant cells and compared them against cells grown in fresh media. The parental cells grown in resistant cell media showed an almost similar pattern to DRM cells in terms of the expression of CSC and EMT markers (Figure 9). These results show that autocrine factors might be important in acquiring the doxorubicin resistance of DRM cells and in the expansion of the CSC population in DRM cells. Discussion This study was designed to determine the characteristics of DRM cells morphologically and molecularly, and to answer whether DRM cells showed doxorubicin resistance with dormancy and whether the phenotypes can be transferred to other doxorubicin-sensitive cells. This study clearly demonstrated that DRM cells were outgrowing cancer cells that maintain resistance to doxorubicin, and that the EMT features with CSC properties can be transferred to other doxorubicin-sensitive cells through autocrine signaling. In addition, we demonstrated that DRM cells changed from spindle-like structures to cobblestone-like giant cells with a larger nucleus and highly proliferative activity. In addition, these cells also acquired highly invasive, migratory, and adhesive abilities. Molecularly, DRM cells exhibited an enrichment of EMT features with CSC properties. Up-regulation of EGFR might be associated with the establishment of DRM cells. The morphological changes from spindle-like structures to cobblestone-like giant cells indicated that the DRM cells were undergoing EMT [38]. This process was facilitated by reduced apical-basal polarity and epithelial adhesion proteins [39]. However, DRM cells showed high adhesive ability (Figure 6). In addition, mesenchymal-like cancer cells that have undergone EMT may remain in a dormant state after attaching to metastatic sites [10,26], as suggested by recurrences decades after primary tumor resection and adjuvant therapy [40]. However, DRM cells showed rapid growth instead of dormancy.
We found that DRM cells have unique features, including highly proliferative and adhesive properties together with EMT features. This can be explained by other studies describing hybrid EMT/MET circulating tumor cells [10,26], a model proposed to account for metastatic cancer cells whose behavior does not match the classical EMT/MET metastasis hypothesis. Some investigators have explained this phenomenon with a partial EMT model [41]: EMT is not a dichotomous switch between epithelial and mesenchymal states but includes intermediate states, which helps explain the notion that cancer cells use the EMT program for metastasis with dormancy, whereas MET helps establish metastatic outgrowth [27]. Another model holds that cancer cells acquiring EMT together with CSC properties have high proliferative activity [26]. This model is more appealing to us and is consistent with our own findings. In addition, we previously demonstrated that radiation-resistant MDA-MB 231 cells were also highly proliferative while exhibiting an enrichment of EMT features with CSC properties [42]. All of this supports our findings. Regarding the high adhesive activity, we first expected that the down-regulation of E-cadherin would reduce adhesion, yet DRM cells showed high adhesive activity compared with parental cells (Figure 6). Hence, we tested the expression of ICAM-1, an adhesion protein that is highly expressed in highly metastatic cancer cells [43]. DRM cells also showed high expression of ICAM-1. This finding again suggests that DRM cells resemble highly metastatic cancer cells rather than dormant cancer cells. Next, we searched for the signaling involved in drug resistance accompanied by up-regulation of cyclin D1 and ICAM-1. We found that up-regulation of EGFR and activation of its downstream signaling increased with the advancement of doxorubicin resistance (Figure 8). Up-regulation of EGFR and activation of its downstream signals, AKT and ERK, induces up-regulation of cyclin D1 and ICAM-1 [36]. In addition, the EGFR inhibitor gefitinib can suppress doxorubicin resistance [44], and increased expression of EGFR may confer resistance to doxorubicin [45]. These observations support our conclusion that DRM cells behave like highly metastatic cells with rapid growth. Lastly, regarding the transmission of the doxorubicin resistance of DRM cells to parental cells, it has been reported that autocrine signaling is important in inducing and maintaining mesenchymal cells and CSCs in breast cancer [46,47]. We therefore tested whether the doxorubicin resistance of DRM cells can be transferred to parental cells, and we clearly demonstrated that it can be transferred through autocrine signaling (Figure 9). A limitation of this study is that the experiments were performed with a single cell line. It remains unclear whether these findings can be generalized to all doxorubicin-resistant breast cancer cell lines or only to triple-negative breast cancer cells; further research is warranted to answer this question. In summary, DRM cells outgrew parental cells while maintaining resistance to doxorubicin, and their EMT features with CSC properties can be transferred to other doxorubicin-sensitive cells through autocrine signaling. DRM cells changed from spindle-like structures to cobblestone-like giant cells with larger nuclei and exhibited increased invasion, migration, and adhesion ability.
They were highly metastatic cancer cells with rapid growth, but they harbored EMT features with CSC properties. Lastly, the features of DRM cells might be associated with up-regulation of EGFR.

Cell Culture and Chemicals

The triple-negative human breast cancer cell line MDA-MB-231 was obtained from the Korean Cell Line Bank and sub-cultured in Roswell Park Memorial Institute (RPMI) 1640 medium (Hyclone, Marlborough, MA, USA) containing 10% (v/v) heat-inactivated fetal bovine serum (FBS) (GIBCO BRL, Grand Island, NY, USA), 1 mM L-glutamine, 100 U/mL penicillin, and 100 µg/mL streptomycin at 37 °C in a humidified atmosphere of 95% air and 5% CO2. Antibodies against OCT-3/4, AKT, β-catenin, ERK 1/2, and ICAM-1 were purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). Antibodies against CD44, E-cadherin, and N-cadherin were purchased from Abcam. The antibody against β-actin was purchased from Sigma (Beverly, MA, USA). Peroxidase-labeled donkey anti-rabbit and sheep anti-mouse immunoglobulins and an enhanced chemiluminescence (ECL) kit were purchased from Amersham (Arlington Heights, IL, USA). All other chemicals not specifically cited here were purchased from Sigma Chemical Co. (St. Louis, MO, USA).

Preparation of Doxorubicin-Resistant MDA-MB-231 Cells

The DRM phenotype was established by exposing the cells to doxorubicin at increasing concentrations (10-100 nM). The final concentration of doxorubicin was set at 100 nM based on the plasma concentration of doxorubicin. The experimental design for the doxorubicin treatment is shown in Figure 1A. The cells were continuously exposed to the different concentrations of doxorubicin, and the number of passages maintained at each concentration is depicted in Figure 1A. With each passage lasting 3 days, induction of resistance took about 38 weeks in total. The cells were considered resistant when no dead cells were seen; resistance appeared at different passages for the different drug concentrations. Throughout the induction period, cells at the initial and final passages were collected. The same procedure was followed for the untreated group to identify passage-related alterations in the cells. The cells were considered chemo-resistant when they were able to grow in 10 µM doxorubicin.

Cell Viability Assay

Cells were seeded in 24-well plates at a density of 5 × 10⁴ cells/well and treated with or without doxorubicin as indicated (0-10 µM). After 48 h and 72 h of incubation at 37 °C in a CO2 incubator, the cells were treated with 50 µL of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) solution (5 mg/mL in 1× PBS) and kept for 3 h at 37 °C in a CO2 incubator. After incubation, the supernatant was removed, and the formazan crystals were dissolved in 200 µL of dimethyl sulfoxide (DMSO). The absorbance was read at 540 nm on a microplate reader (Bio-Rad, Hercules, CA, USA).

Invasion Assay

To assess the invasiveness of doxorubicin-resistant cells, we performed a transwell assay. Matrigel (0.5 mg/mL; BD Biosciences, San Jose, CA, USA) was coated onto the top of the Boyden chamber and incubated at 37 °C for 4 h. After solidification of the Matrigel, 5 × 10⁴ cells/well were added to the upper chamber in serum-free media. In total, 500 µL of RPMI media with 20% FBS was added to the lower chamber as a chemoattractant, and the chambers were incubated for 24 h at 37 °C in a CO2 incubator.
After incubation, the cells on the lower surface of the upper chamber were fixed with 4% formaldehyde and stained with 4′,6-diamidino-2-phenylindole (DAPI). After staining, the cells were visualized under a fluorescence microscope and counted using ImageJ.

Migration Assay

The cells were grown in 6-well plates to a 100% confluent monolayer and then scratched with a 1 mL sterile pipette tip to form a "wound". After wound formation, the cells were incubated in serum-free media for 0, 18, and 24 h at 37 °C in a CO2 incubator. The scratch was viewed using an Olympus photomicroscope.

Colony Formation Assay

The cells were seeded in 6 cm dishes at 500 cells/plate and starved for 12 h in serum-free RPMI media. After 12 h, the serum-free media was discarded and RPMI with 10% heat-inactivated FBS was added. The cells were incubated for 10 days, with the media replaced every 3 days. After 10 days, the plates were washed with 1× PBS and the cells were fixed with 4% formaldehyde for 30 min. After fixation, the cells were stained with 0.6% Giemsa stain for 30 min. The stain was washed off with distilled water and the plates were photographed.

Gelatin Zymography

For gelatin zymography, the cells were seeded in a 6-well plate at a density of 3 × 10⁵ cells/well and incubated at 37 °C in a CO2 incubator for 24 h. After incubation, the media were removed and serum-free RPMI media was added. After 12 h, the media was collected in an Eppendorf tube and centrifuged at 13,000 rpm for 10 min. After centrifugation, the supernatant was resolved on a 12% polyacrylamide gel containing gelatin (1 mg/mL). The gels were washed with 2.5% Triton X-100 for 1 h and then incubated in activation buffer (50 mM Tris-HCl, pH 7.5, 10 mM CaCl2) for 16 h at 37 °C. After incubation in the activation buffer, the gels were stained for 1 h with a solution containing 10% glacial acetic acid, 30% methanol, and 1.5% Coomassie brilliant blue. After washing, the gels showed white lysis zones indicating gelatin degradation, reflecting the status of MMP-9 and MMP-2.

Western Blot Analysis

The cells were seeded in a 10 cm dish at a density of 2.2 × 10⁵ cells and incubated at 37 °C in a CO2 incubator for 48 h. The cells were collected with a cell scraper and centrifuged at 2000 rpm for 5 min. After centrifugation, the media was removed and the cells were centrifuged again to remove excess media. The pellet was lysed in 500 µL of 2× sample buffer containing 100 mM Tris-Cl (pH 6.8), 4% (w/v) sodium dodecyl sulphate (SDS), 0.2% (w/v) bromophenol blue, and 200 mM DTT (dithiothreitol). The protein lysates were collected in Eppendorf tubes and heated at 100 °C for 10 min. The protein was then quantified using the Bradford assay. In total, 30 µg of protein was resolved by 8-12% SDS-PAGE and transferred to a methanol-activated PVDF membrane. After transfer, the membrane was blocked with 3% skimmed milk for 15 min and then incubated with specific antibodies in 3% skimmed milk in TBST for 16 h at 4 °C. After incubation, the membrane was washed three times with TBST (about 10 min per wash), followed by incubation with a 1:2000 dilution of horseradish peroxidase (HRP)-conjugated secondary antibody for 1 h at room temperature. The membranes were then washed with TBST three times (10 min/wash) and developed with ECL solutions (Bio-Rad Laboratory, Hercules, CA, USA).
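The MTT assay described above yields raw absorbance readings at 540 nm, which are usually background-corrected and expressed as a percentage of the untreated control. The paper does not give its exact calculation, so the following is only a minimal sketch in Python with hypothetical values; the blank reading, concentrations, and absorbances are illustrative, not data from the study.

```python
import numpy as np

# Hypothetical A540 readings: rows = doxorubicin concentrations (µM),
# columns = replicate wells. Not data from the study.
concentrations = np.array([0.0, 0.1, 1.0, 10.0])  # µM doxorubicin
absorbance = np.array([
    [0.82, 0.79, 0.85],   # untreated control
    [0.75, 0.77, 0.72],
    [0.51, 0.48, 0.55],
    [0.20, 0.22, 0.18],
])
blank = 0.05  # media-only background well (assumed)

corrected = absorbance - blank
control_mean = corrected[0].mean()

# Viability expressed as a percentage of the untreated control (mean ± SEM).
viability_pct = 100.0 * corrected.mean(axis=1) / control_mean
viability_sem = 100.0 * corrected.std(axis=1, ddof=1) / np.sqrt(corrected.shape[1]) / control_mean

for c, v, s in zip(concentrations, viability_pct, viability_sem):
    print(f"{c:5.1f} µM doxorubicin: {v:5.1f} ± {s:.1f} % viability")
```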
Statistical Analysis

The results were expressed as means ± SEM from at least five independent experiments. Significant differences were determined by one-way analysis of variance (ANOVA) with the Newman-Keuls post hoc test when comparing at least five treatment groups, and by Student's t-test for two groups. Statistical significance was defined as p < 0.05.
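The comparisons described above can be reproduced with standard scientific Python tools. SciPy provides one-way ANOVA and Student's t-test directly; the Newman-Keuls post hoc test is not part of SciPy, so only the omnibus ANOVA and a two-group comparison are sketched here, with invented measurements standing in for the experimental replicates.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements (e.g., viability or band intensity)
# for five treatment groups, n = 5 independent experiments each.
rng = np.random.default_rng(0)
groups = [rng.normal(loc=mu, scale=5.0, size=5)
          for mu in (100, 95, 80, 60, 40)]

# One-way ANOVA across all groups (omnibus test).
f_stat, p_anova = stats.f_oneway(*groups)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

# Student's t-test (equal variances assumed) for a two-group comparison.
t_stat, p_ttest = stats.ttest_ind(groups[0], groups[-1], equal_var=True)
print(f"Student's t-test (group 1 vs group 5): t = {t_stat:.2f}, p = {p_ttest:.3g}")

# Results reported as mean ± SEM per group, as in the Methods.
for i, g in enumerate(groups, start=1):
    print(f"group {i}: {g.mean():.1f} ± {stats.sem(g):.1f} (mean ± SEM)")
```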
7,369.6
2021-11-01T00:00:00.000
[ "Biology", "Materials Science" ]
Direct Space Structure Solution Applications

The crystal structures of 2,4,6-triisopropylbenzenesulfonamide, 1,2,3-trihydroxybenzene-hexamethylenetetramine (1/1), 5-bromonicotinic acid and chlorothalonil form II have been solved from x-ray powder diffraction data, by application of a direct space structure solution approach using the Monte Carlo method and confirmed by Rietveld refinement. In the sulfonamide, the molecules are linked by N–H⋯O hydrogen bonds into two-dimensional sheets built from alternating eight and twenty-membered rings. In the cocrystal, the molecules are linked by O–H⋯N hydrogen bonds to form puckered molecular ribbons that are in turn linked into a continuous 3D framework by C–H⋯π (arene) interactions. 5-bromonicotinic acid also displays atypical hydrogen-bonding behaviour by formation of dimers through a self-complementary acid-acid hydrogen-bond motif that are connected into antiparallel ribbons by C–H⋯O and C–H⋯N hydrogen bonds. Structure determination of the cocrystal and the bromonicotinic acid was successful despite the presence of preferred orientation in the data, whereas the distortion of the chlorothalonil data was so severe that structure solution was only possible when the effects of preferred orientation were minimized. Both the disordered structure, and an ordered structural approximation of chlorothalonil form II have been determined and rationalized.

Introduction

The ab initio crystal structure determination of molecular materials from x-ray powder diffraction data is a rapidly expanding field, and has grown substantially in the last few years mainly due to the development and application of new methods of structure solution, in particular "direct-space" based techniques [1,2]. These methods approach structure solution by generation of trial crystal structures based on the known molecular connectivity of the material. The fitness of each structure is then assessed by comparison of the corresponding calculated diffraction pattern and the experimental diffraction data. Global optimization techniques such as Monte Carlo [3][4][5][6][7][8], simulated annealing [9][10][11][12][13] or genetic algorithms [14][15][16][17][18] are used to locate the global minimum corresponding to the best structure solution. In this paper, we present a number of organic structure solution problems that have been resolved from conventional laboratory and synchrotron powder diffraction data using a direct-space structure solution technique based on the Metropolis Monte Carlo algorithm [19] and implemented in the program POSSUM [20]. The compounds studied are selected from two main areas of our research: the study of hydrogen-bond
networks and polymorphism, and consist of single and multi-component systems containing both rigid and conformationally flexible molecules. A number of these structures have been determined from powder data significantly affected by preferred orientation, a sample characteristic that arises when crystallites have a tendency to align along a certain direction, resulting in a non-random distribution of crystallite orientations in the sample and affecting the relative intensities of given peaks. This distortion of the data can have a disastrous effect on traditional structure solution, whereas direct-space methods appear to be more robust, presumably because a substantial amount of structural knowledge is included in the calculation through the use of a structural model. However, in severe cases (i.e., when the morphology is strongly anisotropic) we illustrate that the direct-space structure solution of even simple structural problems can fail.

Crystal Engineering and the Study of Intermolecular Interactions

In organic molecular crystals, hydrogen bonds often constitute the strongest intermolecular synthon [21], and hence often dictate the preferred packing arrangement of the molecules. The general principles underlying the formation of hydrogen bonds are reasonably well understood, but there are at present few, if any, reliable methods for the prediction of hydrogen-bonding patterns. A detailed description of the hydrogen-bonding patterns in a given system must be derived from analysis of specific experimental data, and as such, a sound knowledge and understanding of the role that intermolecular forces play in supramolecular assembly is generally obtained from systematic crystallographic studies. Materials of interest in this field are ideal targets for direct-space structure solution techniques, particularly when the structures consist of well-defined molecular building blocks, with the intermolecular aggregation of these building blocks within the crystal structure being of primary interest. In this section we highlight the contribution of direct-space structure solution methods to a number of systematic structural studies, including a long-standing investigation of a family of sulfonylamino compounds entirely from powder diffraction data and the study of atypical crystal packing in a group of nicotinic acid derivatives, and report the first example of powder diffraction being used in the structure solution of an organic cocrystal.

Sulfonylamino and Related Compounds

In previous work [22], we have reported the ab initio structure determination of three sulfonylamino compounds, using powder x-ray diffraction data collected using a conventional laboratory powder diffractometer (Scheme 1).
The structures of 4-toluenesulfonamide CH3C6H4SO2NH2 (I) and benzenesulfonylhydrazine C6H5SO2NHNH2 (II) were readily solved using traditional direct methods programs, while the structure of 4-toluenesulfonylhydrazine CH3C6H4SO2NHNH2 (III) was solved using the maximum entropy and likelihood method MICE [23]. Similar data sets were recorded for 2-toluenesulfonamide CH3C6H4SO2NH2 (IV), 2,4,6-trimethylbenzenesulfonylhydrazine (Me)3C6H2SO2NHNH2 (V), and 2,4,6-tri-isopropylbenzenesulfonamide (Me2CH)3C6H2SO2NH2 (VI). Although these data enabled indexing of (IV) and (VI), attempts at structure solution by traditional methods were unsuccessful. The diffraction pattern of (V) could not be indexed from the data available. A new low-temperature data set for (VI) was collected using synchrotron x-ray radiation, and details of the crystal structure determination by the Monte Carlo method are given below. The structural model used in the Monte Carlo structure solution comprised the complete molecule excluding the methyl hydrogen atoms, and was constructed using standard bond lengths and angles. Although the benzene ring was maintained as a rigid body, the three isopropyl groups and the sulfonamide group were allowed to rotate freely and independently within the molecule as shown in Scheme 1. The initial position, orientation, and intramolecular geometry of the structural fragment were chosen arbitrarily, and the random movement of the molecule in the Monte Carlo calculation was carried out by translation and rotation of the structural fragment within the unit cell, simultaneously with the intramolecular rotations. After a sufficient amount of parameter space had been searched, the best structure solution was taken as the starting model for Rietveld refinement (Table 1). The positions of all atoms were refined subject to soft restraints on the standard geometric parameters, and the methyl H atoms were added to the molecule in positions consistent with standard geometry. Isotropic atomic displacement parameters were refined for the non-hydrogen atoms, but were constrained according to atom type or environment, i.e., S, O, or N; aromatic, propyl (CHMe2) or methyl C. The amino H atoms were placed in positions calculated from the coordinates of the hydrogen-bond donor and acceptors, but had no effect whatsoever on the refinement.

Hydrogen Bonding and Molecular Conformation

The structure of (VI) is built from discrete molecules linked together by N-H⋅⋅⋅O hydrogen bonds. The conformation of the isopropyl groups is such that the isopropyl C-H bonds all lie approximately parallel to the plane of the aryl ring, with the methyl substituents indicative of repulsive interactions between the isopropyl groups and the sulfonamido group. This conformation of the three independent isopropyl groups appears to be the norm for 2,4,6-tri-isopropyl species (Me2CH)3C6H2X regardless of the identity of the α-atoms in the substituent X. In nearly all previously reported examples (see Refs. in [5]), the 2,4,6-tri-isopropylphenyl group was employed simply as a sterically bulky blocking group to protect some other part of the molecule, and none of these structure reports comment on its conformation. However, our analysis shows that the conformation of the isopropyl groups is essentially the same in all cases. The NH2 group in (VI) acts as a double donor of hydrogen bonds, with a sulfone oxygen in each of two different molecules acting as the acceptors.
These interactions result in the formation of C(4) spirals, based on the N-H⋅⋅⋅O=S motif and generated by 2₁ screw axes, and the generation of a cyclic R₂²(8) motif around the centres of inversion (Fig. 1). The C(4) motif of N-H⋅⋅⋅O=S hydrogen bonds is extremely common in sulfonamides [5,22], and the R₂²(8) motif has also been observed in sulfonamides [24,25], but these two motifs do not normally occur together in a single sulfonamide. The R₂²(8) rings have the effect of linking together two adjacent but anti-parallel C(4) spirals. The propagation of these two hydrogen-bond motifs by means of the combined action of 2₁ screw axes and centres of inversion leads to the generation of a continuous two-dimensional sheet parallel to (100) in which the eight-membered R₂²(8) rings alternate with twenty-membered rings (Fig. 1). The tri-isopropylphenyl units lie on either side of the hydrogen-bonded sheet, so that the overall structure is that of a sandwich: a polar layer containing only S, O, N and H atoms lies between two non-polar hydrocarbon layers with only van der Waals contacts between adjacent sandwiches.

Organic Cocrystal Systems

In the application of direct-space structure solution methods, the presence of more than one molecular fragment in the asymmetric unit [26,27] makes the problem more complex, both in terms of the number of degrees of freedom (i.e., the number of structural parameters varied to generate new trial crystal structures) and, to a certain extent, the effect on R-factor discrimination. There are a few examples of such materials solved from powder diffraction data using the direct-space structure solution approach, a situation made more complicated by the presence of two entirely different entities in the cocrystal, with the location of each molecule in the unit cell being unique and non-superimposable. Previous studies have used single-crystal x-ray diffraction to explore the use of bis- and trisphenols in crystal engineering and the interaction of this type of phenol, acting as a hydrogen bond donor, with hexamethylenetetramine, (CH2)6N4 (HMTA), as a hydrogen bond acceptor [28]. However, in the case of the 1:1 adduct of 1,2,3-trihydroxybenzene (pyrogallol, VII) and HMTA (shown in Scheme 2), investigation of the crystal structure has been carried out using powder diffraction data obtained from a conventional laboratory-based diffractometer [6].

Structure Determination of Pyrogallol-HMTA (1/1)

The powder diffraction pattern was indexed giving a monoclinic unit cell and space group consistent with the presence of one molecule of each component in the asymmetric unit. The structural model used in the Monte Carlo structure solution comprised a complete HMTA molecule and a pyrogallol molecule excluding the hydrogen atoms on the three hydroxyl groups. Both these molecules were constructed using standard bond lengths and angles and treated as rigid bodies in the calculation. Trial structures were generated by translation and rotation of both molecules completely independently of each other within the unit cell. With more than one independent molecule required to define the structure, the number of degrees of freedom required for random movement is increased (from 6 to 12 in this case) without conformational flexibility being introduced. The only additional constraint is a limit on the closest approach between the two independent bodies in the form of an artificially biased agreement factor.
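The direct-space Monte Carlo procedure outlined in the Introduction and used here amounts to a Metropolis random walk over the structural degrees of freedom (rigid-body translations and rotations, 12 in total for the two independent molecules of the cocrystal), scored against the powder pattern. The sketch below is a schematic Python illustration of that loop, not the POSSUM implementation; the figure-of-merit function, move size, and temperature are placeholders standing in for the actual pattern calculation and optimization settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def weighted_profile_r(params):
    """Placeholder for R_wp: in a real calculation one would build the trial
    structure from the 12 rigid-body parameters (3 translations + 3 rotations
    per molecule), compute its powder pattern, and compare it with the
    experimental data. A dummy quadratic surface is used here so the sketch
    runs stand-alone."""
    target = np.linspace(0.1, 0.9, params.size)   # fictitious "true" answer
    return float(np.sum((params - target) ** 2))

n_params = 12                              # two rigid bodies x (3 translations + 3 rotations)
params = rng.uniform(0.0, 1.0, n_params)   # arbitrary starting structure
r_current = weighted_profile_r(params)
best_params, best_r = params.copy(), r_current

step = 0.05        # maximum random displacement per parameter (arbitrary)
temperature = 0.02 # controls how readily worse trial structures are accepted

for _ in range(20000):
    trial = params + rng.uniform(-step, step, n_params)
    r_trial = weighted_profile_r(trial)
    # Metropolis criterion: always accept improvements, sometimes accept
    # worse structures so the walk can escape local minima.
    if r_trial < r_current or rng.random() < np.exp(-(r_trial - r_current) / temperature):
        params, r_current = trial, r_trial
        if r_current < best_r:
            best_params, best_r = params.copy(), r_current

print(f"best figure of merit found: {best_r:.4f}")
# best_params would then be handed to Rietveld refinement as the starting model.
```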
The best structure from the Monte Carlo calculation was used as the starting model for Rietveld refinement, and the positions of all atoms were refined subject to soft restraints on the standard geometric parameters (Table 1). As in the previous structure, isotropic atomic displacement parameters were refined for the non-hydrogen atoms only, and constrained according to atom type or environment. Diffraction data had been collected with the sample packed in both disc and capillary geometries, and it was clear from the difference in relative intensities of related peaks in these data that there was a significant degree of preferred orientation present (Fig. 2). Although the effects of the preferred orientation were minimised by use of the capillary data set for both solution and refinement, variation of a preferred orientation parameter in the [100] direction was required (Table 1). A plot of the final Rietveld refinement for this structure is shown in Fig. 3. The hydroxyl H atoms were placed in positions calculated from the coordinates of the hydrogen-bond donor and acceptors, but were not included in the refinement.

Hydrogen Bonding and Molecular Packing

All three hydroxyl groups in the pyrogallol molecule act as hydrogen bond donors, with three N atoms, each from a different HMTA molecule, acting as acceptors. This differs from the majority of systems, in which HMTA generally acts as a double acceptor of hydrogen bonds [28]. Rather less frequently, HMTA behaves as an acceptor of just one hydrogen bond [28,29], a full complement of four hydrogen bonds, or, as in this case, of three hydrogen bonds [30][31][32]. O-H⋅⋅⋅N hydrogen bonds from the hydroxyl groups in the 2-positions to another N atom in each HMTA unit form two distinct cyclic R₄⁴(18) motifs. The result is a lightly puckered molecular ribbon running parallel to the [100] direction in which the HMTA cages lie alternately above and below the plane (Fig. 4). These ribbons are linked into a continuous three-dimensional framework by C-H⋅⋅⋅π(arene) interactions. There are edge-to-face interactions between pyrogallol units in neighbouring ribbons, occupying one face of each ring; the other face of each ring is involved in a C-H⋅⋅⋅π(arene) interaction with a C-H bond from an HMTA unit in a neighbouring ribbon. The latter C-H⋅⋅⋅π(arene) interactions link sets of neighbouring parallel ribbons into columns stacked in the [010] direction, while those between the pyrogallol units link neighbouring stacks together to form a herringbone pattern (Fig. 5). Propagation of these two types of C-H⋅⋅⋅π(arene) interactions, based on aromatic and aliphatic C-H bonds respectively, links all the parallel ribbons into a single bundle, so that the overall supramolecular structure is three-dimensional.

Nicotinic Acid Derivatives

5-Bromonicotinic acid is a relatively simple molecule that can, in principle, provide important information about competition between intermolecular forces, since it has limited conformational flexibility and there are relatively few primary supramolecular assemblies that can be envisaged (Scheme 3). This compound suffers from poor crystal growth, and attempts at recrystallization resulted only in the formation of a range of solvates. The structures of three of these solvates (with ethyl acetate, acetonitrile and methanol) were all obtained from single-crystal data, whereas the crystal structure of the parent compound itself has been solved from powder diffraction data using the Monte Carlo technique [8].
Structure Determination of 5-Bromonicotinic Acid The powder diffraction pattern was indexed giving a unit cell and space group consistent with one molecule in the asymmetric unit (unlike the solvate structures with multiple parent, and often multiple solvate molecules in the asymmetric unit). The structural model used in the Monte Carlo calculation comprised the complete molecule excluding the carboxylic hydrogen, and was constructed using standard bond lengths and angles. The pyridine ring was assumed to be planar (as in similar systems) and the molecule treated as a rigid body in the structure solution, with the pyridine and carboxylic acid groups constrained to be coplanar. In the Monte Carlo structure solution, the structural model was rotated and translated within the unit cell from an initial random location. The best solution found in the structure solution calculation was taken as the starting model for Rietveld refinement (Table 1), but after several cycles it was clear that the model had become significantly distorted. The data set used for indexing and structure solution had been collected using a stationary disc. Comparison of this data with a second data set collected using capillary geometry, showed the presence of a high degree of preferred orientation (Fig. 6), possibly accounting for this distortion in molecular geometry. A second Monte Carlo calculation was carried out using the capillary data set, under the same optimization conditions as above; this generated the same structure solution but with a better fit to the profile ( Table 1). The structure was then refined successfully using this data set, with all atom positions refined subject to soft constraints on standard geometry. Variation of a preferred orientation parameter along the [010] direction was still required in refinement, and isotropic atomic displacement parameters refined for non-hydrogen atoms only and constrained according to atom type. The carboxyl hydrogen atom was placed in a position calculated from the coordinates of the donor and acceptor carboxyl oxygen atoms, but had no effect on the refinement. Hydrogen Bonding and Molecular Packing The crystal structure of 5-bromonicotinic acid differs significantly from the solvate structures. The molecules of the parent compound form centrosymmetric dimers through a self-complementary acid-acid hydrogenbond motif, rather than formation of the dominating C-H⋅⋅⋅O and O-H⋅⋅⋅N supramolecular interactions and infinite chain motif found in the solvates. Adjacent acid dimers are connected into antiparallel ribbons by C-H⋅⋅⋅O and C-H⋅⋅⋅N hydrogen bonds. These infinite planar ribbons run parallel to the [100] direction and are arranged into two-dimensional sheets held together by weak Br⋅⋅⋅Br interactions (Fig. 7), with π-π stacking of these layers to form a three-dimensional structure. Polymorphism The study of polymorphism in organic materials continues to attract considerable academic and industrial attention, but still requires full structural characterisation in each case to attain a true understanding of the aspects controlling this phenomenon. However, the conditions used to prepare many polymorphs, in particular metastable forms, often yield materials that occur only as polycrystalline powders. These systems are therefore often both initially identified and their structure investigated by powder diffraction alone [7]. 
A New Polymorph of Chlorothalonil Chlorothalonil (2,4,5,6-tetrachloro-1,3-dicyanobenzene) is a broad-spectrum fungicide used to control fungi that threaten turf, vegetables, and other agricultural crops. A recent study has suggested that there may be three polymorphs of chlorothalonil [33] although only form I, the commercially available form, has been fully structurally characterized [34]. As a system reported to show possible polymorphic behaviour, chlorothalonil was chosen as a test for independent simultaneous studies involving an experimental search for new polymorphs and theoretical crystal structure prediction [35]. X-ray powder diffraction data was used both initially to confirm the preparation of a new polymorph (form II), obtained by recrystallization from butanol, and subsequently in the determination and rationalization of its crystal structure. In the event, the structure of form II is disordered, and so cannot be predicted by current theoretical methods. Structure Determination of Chlorothalonil Form II The powder diffraction pattern of form II was indexed on the basis of the first 21 observable peaks using the CRYSFIRE package [36]. A large number of cells were obtained with high figures of merit (M 20 > 250) [37], although many of these cells did not satisfy suitable density requirements (often with a volume less than that required for a single chlorothalonil molecule). Despite having a relatively low figure of merit (M 20 = 41), the unit cell chosen was that of highest symmetry; a hexagonal unit cell a = b = 9.24 Å, c = 10.10 Å with a volume 747 Å 3 (Z=3). Systematic absences suggested R-3 (148) and R-3m (166) as probable space groups, although both would require six-fold symmetry in the molecule. This is possible if the molecule is assumed to be disordered with the -C≡N and -Cl substituents on the benzene ring being indistinguishable and represented by a C-(C≡N)/Cl "spur". In addition to this structure solution calculation (using a disordered hexagonal model (Fig. 8a)), a second structure determination was attempted using an "ordered" model in P1 (Fig. 8b). By consideration of only the most basic crystallographic symmetry (P1), we hoped to obtain a good "directspace" approximation to the disordered structure that would provide an insight into the nature of the disorder and enable straightforward comparison with any ordered structures obtained from the crystal structure prediction calculation. Structure Solution in R-3m (Disordered Structural Model) Structure determination was attempted initially in R-3m due to the higher symmetry constraints imposed on this structure by the R-3m space group. Structure solution was carried out using a grid search technique by rotation of a C-(C≡N)/Cl spur with relevant disorder occupancies, around the 0,0,z axis in 1° steps and over the range 0 ≤ z ≤ 0.5 at intervals of 0.1, thus generating a complete disordered molecular model (Fig. 8a). The best structure solution (that with the lowest R wp ), with the chlorothalonil molecule lying parallel to the ab plane with atoms in the 2x,x,-z positions, was taken as the starting model for Rietveld refinement. The positions of all atoms were refined subject to symmetry and geometrical restraints, and refinement of a preferred orientation parameter was also required in the [001] direction (see Sec. 3.1.4). The final Rietveld refinement agreement factors are given in Table 1. This disordered structure was later confirmed by single-crystal x-ray diffraction studies. 
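The structure solution in R-3m described above is an exhaustive grid search over two parameters: rotation of the C-(C≡N)/Cl spur about the 0,0,z axis in 1° steps and the z coordinate from 0 to 0.5 in steps of 0.1, keeping the trial structure with the lowest R_wp. The Python sketch below shows the shape of that loop; the scoring function is a stand-in for the actual pattern calculation and profile comparison, which is not reproduced here.

```python
import numpy as np

def rwp_for(rotation_deg, z):
    """Placeholder for the weighted profile R-factor: in practice one would
    build the disordered C-(C#N)/Cl spur model at this rotation and z,
    simulate its powder pattern, and compare it with the experimental data.
    A dummy smooth function is used so the sketch runs on its own."""
    return np.sin(np.radians(rotation_deg)) ** 2 + (z - 0.25) ** 2

best = (None, None, np.inf)
for rotation in range(0, 360):                 # 1 degree steps about the 0,0,z axis
    for z in np.arange(0.0, 0.5001, 0.1):      # 0 <= z <= 0.5 in steps of 0.1
        rwp = rwp_for(rotation, z)
        if rwp < best[2]:
            best = (rotation, z, rwp)

rotation, z, rwp = best
print(f"best trial structure: rotation = {rotation} deg, z = {z:.1f}, R_wp = {rwp:.3f}")
# The best solution found would then serve as the starting model for Rietveld refinement.
```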
Structure Solution in P1 (Ordered Structural Model) The rhombohedral setting equivalent to the indexed hexagonal cell was used as a basis for the triclinic lat-tice parameters (a ≠ b ≠ c ≈ 6.32 Å, α ≠ β ≠ γ ≈ 94.2°) with one molecule in the asymmetric unit. The structural model used in the Monte Carlo structure solution calculation comprised the complete ordered molecule (Fig. 8b) constructed using standard bond lengths and angles. In the generation of trial structures, the chlorothalonil molecule was treated as a rigid body with only variation of the orientation of the molecule in the unit cell being required from a random initial position. A few structures were located with an R wp value similar to that of the best structure (with the lowest R wp ), but were related by 60º rotation of the model within the plane of the molecule. The best structure was taken as the starting model for Rietveld refinement and the positions of all atoms refined subject to soft geometrical restraints on standard geometry. Isotropic atomic displacement parameters were refined, but constrained according to atom type or environment. Variation of a preferred orientation parameter was also required in the [111] direction (see Sec. 3.1.4). The final Rietveld refinement agreement factors are given in Table 1 and the Rietveld plot shown in Fig. 9. Preferred Orientation Considerations Initial attempts at structure solution in both R-3m and P1 generated structure solutions that despite having relatively low R wp values (e.g., in the P1 calculation, the best structure solution had R wp = 0.12, whereas the average range of values for a typical "wrong" structure was 0.22-0.24), were clearly incorrect and immediately rejected as implausible in terms of molecular packing. The rotation of a C-(C≡N)/Cl spur around a fixed axis in R-3m, or the movement of a single rigid molecule in P1 is a simple global optimisation problem, and hence the presence of preferred orientation in the data was investigated as a possible reason for unsuccessful structure solution. Both structure solution calculations were initially attempted using powder data collected in a flat disc. Subsequent collection of a data set using capillary geometry, and comparison with the original disc data clearly shows that the degree of preferred orientation present in this case is severe (Fig. 10). Consequently structure solution and refinement was only successful when carried out using the capillary data to minimize the preferred orientation effects of the plate-like crystallites, although a preferred orientation correction was still required in refinement. Molecular Packing and Comparison of Structures The lack of any strong intermolecular bond functionality means that the molecular packing in chlorothalonil is controlled primarily by weak C≡N⋅⋅⋅Cl interactions. The R-3m and P1 crystal structures of form II are very similar in terms of molecular packing (Figs. 11 and 12), and differ only in the application of a disordered or ordered structural model (although the triclinic structure is only an ordered approximation to the true disordered structure). Both structures consist of infinite planar sheets in which the molecules are held together by C≡N⋅⋅⋅Cl interactions, with π-π stacking of these layers to form a three-dimensional structure. In the R-3m structure, each molecule is surrounded by six others in each sheet with an N⋅⋅⋅Cl distance of 3.272(6) Å (Fig. 11). These sheets run parallel to the (001) plane with an inter-layer distance of 3.364(6) Å. 
A similar inter-layer distance of 3.36(2) Å is found in the P1 structure, although the sheets lie in the [111] direction, and the molecules in each sheet are linked by N⋅⋅⋅Cl interactions of 3.05(4) Å and 3.35(3) Å (Fig. 12). However, it is clear that in the P1 structure the intermolecular distance between the cyano groups in neighbouring molecules in the [011] direction is too short (2.45(4) Å). Rotation of the molecule in 60º steps within the (111) plane results in similar molecular packing with close cyano contacts running between molecules in the [101] or [110] directions, respectively. The five new crystal structures generated by these rotations are also indistinguishable by R wp (calculated from the experimental powder data), confirming that any of the six orientations give an equivalent representation of the disordered structure. As the P1 structure is clearly implausible in terms of intermolecular packing, we can conclude that the disorder in this system does not arise through the existence of domains in the crystal each containing a section of the P1 symmetry structure rotat- ed through all six possible orientations, but may still be a reasonable approximation to the true crystal structure through correlated disorder. Concluding Remarks The ability to determine the crystal structures of small organic materials that suffer from poor crystal growth is essential if reliable conclusions are to be drawn from systematic structural studies of intermolecular forces. Many of these structures show, not surprisingly, crystal packing that is atypical, but play a key role in our understanding of non-covalent interactions. In the case of 2,4,6-tri-isopropylbenzenesulfonamide, the initial room-temperature dataset collected using a laboratory x-ray source could be indexed, but the structure could not be determined from these data using traditional structure solution methods. The success achieved with low-temperature synchrotron data raises the possibility that the previous attempt at structure solution may have been hampered by the occurrence of intramolecular rotations at room temperature. While rotation of the sulfonamido group about the C-S bond is unlikely because of the hydrogen bonding, rotation of the isopropyl groups about the C(aryl)-CHMe 2 bonds seemed plausible. However, solid-state CP-MAS NMR investigations [38] have shown that such a rotation is not observed even at room temperature, and we conclude that it is a combination of the superior resolution of the synchrotron data and the application of improved structure solution software that has now permitted structure determination. Attempts at the structure solution of 2-toluenesulfonamide (IV) using the Monte Carlo method were unsuccessful. However, structure determination has been achieved recently by the application of another direct-space method based on the differential evolution algorithm [39], but using the original diffraction data, collected some ten years ago. We have also demonstrated that conventional laboratory powder diffraction data, collected under non-ideal conditions (in which the sample displays significant preferred orientation) can be used to study such structures. The development of direct-space structure solution methods has had a significant impact in this area, and may prove to be more powerful than thought if shown to be robust when dealing with data that is distorted by preferred orientation. 
This is clearly illustrated by the structure determination of pyrogallol HMTA (1/1), in which despite the presence of two entirely different molecular components in the structure, and the evidence of preferred orientation in the data, structure solution and refinement ran smoothly. Although the structure solution of the majority of materials described in this paper progressed in a relatively straightforward manner, the structure determination of chlorothalonil form II proved to be more problematic. Despite being a simple structural problem in terms of direct-space structure solution methodology, the presence of a severe degree of preferred orientation in the diffraction data resulted in the failure of initial attempts at structure solution. Given the earlier successes of the direct-space approach, this was somewhat unexpected (even though the preferred orientation in this case was much more severe). However, this clearly demonstrates that no matter how straightforward the structure may seem, measures should be taken in sample preparation or choice of data collection conditions to minimize these sample effects and ensure the best chance of success in structure solution. The advantages of using capillary data for the structure solution of "sheet-type" organic materials are obvious, although disc data can also be used. However, it is important to note that all the data used here were collected in transmission geometry and that detrimental sample effects are often maximized using "flat plate" reflection geometry which should be avoided if possible. Such considerations enabled the determination of the structure of a new disordered polymorph of chlorothalonil, but attempts to rationalize this disorder using experimental data resulted in a low symmetry structure that was implausible in terms of crystal packing. Although the disorder in this structure cannot be predicted by current computational methods, our recent structure prediction studies have generated an alternative ordered layer structure that provides a valuable insight into the nature of the disorder, and demonstrates how the complementary use of these two techniques can reveal structural information that would be unavailable if the experimental and theoretical results were considered independently [35].
7,050.4
2004-02-01T00:00:00.000
[ "Materials Science", "Chemistry" ]
Role of Rv3351c in trafficking Mycobacterium tuberculosis bacilli in alveolar epithelial cells and its contribution to disease Although interactions with alveolar macrophages have been well characterized for Mycobacterium tuberculosis, the roles epithelial cells play during infection and disease development have been less studied. We have previously shown that deletion of gene rv3351c reduces M. tuberculosis replication in and necrosis of A549 human type II pneumocyte cells. In the present study, we report that rv3351c is required for lipid raft aggregation on A549 cell plasma membranes during M. tuberculosis infection. Lipid raft aggregation was also induced directly by recombinant Rv3351c protein. A Δrv3351c deletion mutant was less effective than wild type M. tuberculosis at circumventing phagolysosome fusion in A549 cells as evidenced by increased co-localization with lysosomal markers LAMP-2 and cathepsin-L by the mutant bacilli. These observations indicate a role for Rv3351c in modification of the plasma membrane to facilitate trafficking and survival of M. tuberculosis bacilli through alveolar epithelial cells, and support the hypothesis that M. tuberculosis has mechanisms to target the alveolar epithelium. Preliminary data also demonstrate that like the type II pneumocyte-targeting M. tuberculosis secreted protein heparin-binding filamentous hemagglutinin (HBHA), Rv3351c is detected by the host cellular and humoral immune responses during infection, and may play an important role in mycobacterial dissemination from the lungs. Author summary Mycobacterium tuberculosis is the leading causes of death due to a single infectious agent and many facets regarding the pathogenesis of this organism remain unknown. This facultative intracellular bacterial pathogen often establishes infection through inhalation of the bacilli into the alveoli of the lungs. Interactions with alveolar macrophages have been well characterized and it had been assumed that these interactions with phagocytic cells primarily determine the fate of the disease. However, alveolar epithelial cells, such as type II pneumocytes, play important roles in disease progression of other bacterial and viral respiratory pathogens, which provided the impetus to more-closely examine pneumocyte-M. tuberculosis interactions. We describe in this study the role of the M. tuberculosis rv3351c gene product in the internalization and survival of this pathogen in human type II pneumocytes. We previously showed that a Δrv3351c mutant replicates less efficiently and generates less necrosis than the parental M. tuberculosis strain in this cell type. We demonstrate herein that Rv3351c protein induces lipid raft aggregation on the membranes of alveolar epithelial cells and that M. tuberculosis Δrv3351c traffics through LAMP-2-labeled endosomes 30% more frequently than the parent strain. This trafficking toward phagolysosomes may underlie the reduced replication and cytotoxicity of the mutant. The role of Rv3351c in trafficking and survival of M. tuberculosis bacilli through epithelial cells ultimately resulting in dissemination from the lungs may begin with modifications to the plasma membrane prior to attachment. Such a mechanism of activity suggests Rv3351c as a potential vaccine target to train the host immune system to bind and eliminate the protein before it modulates the alveolar epithelium. 
Mycobacterium tuberculosis, the causative agent of tuberculosis, infects an estimated one-quarter of the world's population, with 60-90% of these individuals potentially harboring latent infection (1,2). To date, no vaccine reproducibly protects against the pulmonary form of the disease in post-adolescents.

The alveolar macrophage is generally believed to control the initial success or failure of …

The present study examines attachment, internalization and trafficking within type II …

To examine conversion of LC3 forms, A549 cells were incubated for 12 or 24 hours with M. tuberculosis Erdman or Δrv3351c at a MOI of 100. After extracellular bacteria were removed, autophagy levels were analyzed by LC3 immunoblotting assay with GAPDH as a loading control (Figure 7A and B). The LC3-II/GAPDH ratio is higher after Δrv3351c or Erdman infection than in uninfected controls at 12 hpi (Figure 7A and C); however, the LC3-II/GAPDH ratio is higher at 12 hpi for A549 cells infected with Δrv3351c than for Erdman or for cells induced for autophagy by amino …

We previously demonstrated microscopically that viable M. tuberculosis Erdman bacilli induce lipid raft aggregation on infected A549 cells to similar or greater levels than the lipid raft super-aggregator LLO (9). Additionally, we showed that culture filtrates from infected A549 cells also induce lipid raft aggregation when added to fresh monolayers, indicating that the responsible factor is mycobacterial and likely secreted during or prior to infection (9). In this current study, we demonstrate that infection with Δrv3351c cells did not aggregate lipid rafts as efficiently as the parent strain, but recombinant Rv3351c protein alone induced levels of lipid raft aggregation on A549 cell plasma membranes similar to the LLO positive control when added at equal concentrations (Figure 1). Interestingly, the difference in lipid raft aggregation between A549 cells infected with the wild type and Δrv3351c bacilli did not appear at the gross level to affect the method or rate of bacterial attachment and internalization, but did alter the intracellular trafficking pattern between the two strains. Thus, the overall studies described here provide a more thorough understanding of the role of Rv3351c in the process of M. tuberculosis attachment, internalization and trafficking within type II pneumocytes. However, these data also contribute to the body of knowledge indicating a larger role for alveolar pneumocytes; they may not simply provide a barrier to infection, but may also contribute to the pathology associated with tuberculosis (34-37). What was surprising is the apparent role this protein plays in modifying the host cell membrane to induce lipid raft aggregation, and thus in the optimal trafficking that is not observed with the mutant strain; the mutant bacteria potentially attach to and enter the pneumocyte and then traffic through a pathway that is sub-optimal for bacterial survival and spread.

The process of attachment and internalization is readily observed with both M. …
1,376.2
2020-12-03T00:00:00.000
[ "Biology", "Medicine" ]
Qualitative Prediction of Ligand Dissociation Kinetics from Focal Adhesion Kinase Using Steered Molecular Dynamics Most early-stage drug discovery projects focus on equilibrium binding affinity to the target alongside selectivity and other pharmaceutical properties. Since many approved drugs have nonequilibrium binding characteristics, there has been increasing interest in optimizing binding kinetics early in the drug discovery process. As focal adhesion kinase (FAK) is an important drug target, we examine whether steered molecular dynamics (SMD) can be useful for identifying drug candidates with the desired drug-binding kinetics. In simulating the dissociation of 14 ligands from FAK, we find an empirical power–law relationship between the simulated time needed for ligand unbinding and the experimental rate constant for dissociation, with a strong correlation depending on the SMD force used. To improve predictions, we further develop regression models connecting experimental dissociation rate with various structural and energetic quantities derived from the simulations. These models can be used to predict dissociation rates from FAK for related compounds. Introduction Most drugs function by binding to a specific target and altering its activity in a way that ultimately prevents or treats a disease. Consequently, while a wide range of factors affect the clinical efficacy and safety of a drug candidate (including selectivity, toxicity, solubility and pharmacokinetic properties), the binding affinity of a drug candidate to its target is of primary importance, and high binding affinity can make up for less desirable characteristics elsewhere. Accordingly, many drug discovery efforts begin by searching for compounds that have high affinity for binding to the target. Computational methods have become an important part of this effort [1], and a wide range of methods have been developed for determining binding affinities. The most rigorous and computationally expensive methods involve alchemical free energy methods [2][3][4][5][6]. These methods take advantage of the fact that free energy is a state function by effectively causing the ligand to appear within the binding site of the protein or in solvent and determining the free energy changes for these "alchemical" transformations. The free energy change for the dissociation of the ligand from the target can then be calculated using a thermodynamic cycle. Other less rigorous methods include MM/PBSA and MM/GBSA methods [7,8], which determine the affinity using energies calculated from simulations with and without the ligand, and molecular docking, which uses scoring functions that have been fitted to correlate with the binding affinity [9][10][11]. There is increasing evidence that the in vivo effectiveness of some drug candidates depends not only on their equilibrium binding affinity but also on their residence time bound to their targets. The advantages of a longer residence time at the target can include better selectivity, a larger therapeutic window and increased duration of action or less frequent dosing [12][13][14], although there is not universal agreement that residence time offers more information than equilibrium binding affinity [15]. Nevertheless, the pharmaceutical industry is making an effort to develop a better understanding of the factors affecting drug residence time. Computational simulations of biomolecules can play an important role in this effort [16,17]. 
It is more difficult to directly simulate the dissociation of a ligand from a target than to apply free energy techniques to determine the binding affinity because the timescales on which ligands dissociate are usually orders of magnitude longer than those directly accessible to simulation. The binding equilibrium of ligand fragments to FKBP has been directly observed using simulations on the microsecond timescale carried out on the Anton supercomputer [18]. In addition, there are a number of enhanced sampling methods that can determine the rates of long timescale processes from simulations on shorter timescales. These include Markov state analysis of unbiased simulations [19][20][21], the weighted ensemble method [22][23][24][25][26], multiple replica scaled MD [27], selectively scaled MD simulations [28], τ-RAMD simulations [29], milestoning [30,31], combined weighed ensemble and milestoning [32] and transition path sampling [33,34]. Other methods, while not directly yielding estimates of dissociation rates, could be used to obtain pathways that ligands might take in dissociating from the binding site of the target. These include steered molecular dynamics [35,36], targeted molecular dynamics [37,38] and biased molecular dynamics [39], which use additional forces or constraints in different ways to cause conformational changes to take place much faster than they would in an unbiased simulation. Once such a pathway has been identified, it can be used to define the reaction coordinate in more rigorous but time-consuming simulations such as computing free energy profiles or surfaces by umbrella sampling [40]. Combining these methods can achieve accurate predictions of residence times. For example, the dissociation of ligands from the A2A adenosine receptor, a G-protein coupled receptor, has been simulated by using SMD to pull the ligand from the binding site into the extracellular vestibule, and the calculated change in the interaction energy between ligand and water over the course of the simulation was found to have a strong correlation with the experimental dissociation rate [41]. Enhanced sampling techniques can also be used to explore protein conformational changes that are critical to ligand unbinding and thereby improve the accuracy of predicted dissociation rates. For example, combining SMD, infrequent metadynamics, and Markov analysis allowed detailed study of the unbinding mechanism of a radioligand from the α7 nicotinic acetylcholine receptor as well as estimation of the dissociation rate within an order of magnitude of the experimental value [42]. Finally, using methods similar to those used here, the dissociation of eight ligands from the protein kinase p38α was studied by first performing SMD to determine unbinding paths, then resampling again using SMD but in a manner similar to umbrella sampling to determine the potential of mean force for unbinding. The barrier heights from this potential of mean force for each ligand were then used to predict the dissociation rate, and a high correlation of 0.86 with the experimental dissociation rate was observed. However, this method was relatively expensive computationally, requiring a total of 4.5 µs for each ligand [43]. Recently, our group tested the concept of using steered molecular dynamics simulations to predict the dissociation rates of three ligands from focal adhesion kinase (FAK) simulations [44], comparing the results to dissociation rates for these ligands measured using surface plasmon resonance [45]. 
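The empirical power-law relationship mentioned in the Abstract, between the simulated time needed for ligand unbinding and the experimental dissociation rate constant, is equivalent to a linear relationship between the logarithms of the two quantities. The following is only a minimal sketch of how such a fit could be carried out; the unbinding times and k_off values below are invented placeholders, not the paper's data.

```python
import numpy as np
from scipy import stats

# Hypothetical data: mean SMD unbinding time (ns) and experimental k_off (1/s)
# for a set of ligands. Real values would come from the simulations and SPR.
t_unbind = np.array([1.2, 2.5, 3.1, 4.8, 7.5, 12.0])       # ns
k_off    = np.array([2e-1, 5e-2, 3e-2, 8e-3, 2e-3, 4e-4])  # 1/s

# Power law k_off ~ A * t_unbind**b  <=>  log(k_off) = log(A) + b * log(t_unbind)
slope, intercept, r_value, p_value, stderr = stats.linregress(
    np.log(t_unbind), np.log(k_off)
)
print(f"exponent b = {slope:.2f}, prefactor A = {np.exp(intercept):.3g}")
print(f"correlation r = {r_value:.2f} (p = {p_value:.3g})")

# Predicted k_off for a new ligand with a simulated unbinding time of 6 ns:
t_new = 6.0
print(f"predicted k_off ~ {np.exp(intercept) * t_new**slope:.2e} 1/s")
```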
FAK has attracted interest as a target for the development of anticancer therapies because it plays an important role in mediating interactions between cells and the extracellular matrix, activating pathways that promote cell growth and survival in response to integrin binding to the extracellular matrix [46,47]. The activity of FAK is thought to contribute to the ability of cancer cells to survive, grow, and metastasize even in the absence of attachment to the extracellular matrix [48]. In our previous SMD study, we found that SMD could rank the dissociation rates for the three ligands in the same order as the experimental measurements. Furthermore, the dissociation pathways for the ligands appeared to follow a two-step mechanism, with an intermediate state, and with exposure of a hydrophobic phenyl ring contributing to the activation barrier. In addition to SMD simulations, we have used Markov state analysis and the milestoning method to estimate the dissociation rate of one of the ligands (Wong, unpublished) and found the rate to exceed the experimental value by 6-7 orders of magnitude. We then performed umbrella sampling simulations to determine the free energy surface for dissociation and found a low barrier that gives a similar overestimate of the dissociation rate (results shown below). Therefore, we turn to SMD simulations again to explore their capability as an inexpensive method for simply classifying compounds into fast- and slow-dissociating ligands. Initially, we simply hoped that the method could increase the enrichment factor in screening compounds for the desired drug-binding kinetics, such as long residence time, much as molecular docking does for finding strong binders. This by itself would already be useful for practical drug discovery. To our surprise, the study of 14 ligands here suggests that the method can even rank-order the dissociation rates of these ligands, especially when regression models are introduced that connect the experimental measurements with various energetic or structural parameters from the simulations. The regression models we develop may be useful for predicting the dissociation rate of other ligands from FAK in a computationally inexpensive way.

Free Energy Surface for Exit of Ligands from the Binding Site

We first examined the free energy surface for the dissociation of ligands 32, 2, and 41 from the active site of FAK to get a sense of the mechanism of dissociation and of the location and nature of the activation barrier. Figure 1 shows the three-dimensional free energy surface for the center of mass of ligand 32 relative to FAK as a series of contour surfaces. Figure 1b shows the one-dimensional potential of mean force along the dissociation path, which was obtained from the three-dimensional free energy surface by integrating out two of the degrees of freedom (a minimal sketch of this marginalization is given below). Since only conformations in which the ligand was near the exit pathway previously identified by SMD were sampled, only the free energy near this pathway was well estimated. The profile of the potential of mean force shows a free energy basin corresponding to the bound state and a shallow basin for an intermediate state before ligand dissociation. This qualitatively agrees with the two-step dissociation mechanism observed in previous SMD simulations carried out in explicit solvent [44]. The free energy surface also shows where the dissociation pathway lies within the structure of the protein.
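To make the reduction from the three-dimensional free energy surface to the one-dimensional potential of mean force concrete, the following is a minimal sketch in Python. It assumes the surface is available as an unbiased probability density on a regular 3D grid; the array names, grid handling, and bin width are illustrative assumptions, not the scripts used in this work.

```python
import numpy as np

kB_T = 0.593  # kcal/mol at ~300 K (approximate)

def pmf_1d(p0, edges, r_ref, dr=0.25):
    """Collapse a 3D probability density p0(x, y, z) onto the distance of the
    ligand center of mass from a reference point r_ref (e.g., the bound state).

    p0    : 3D array of unbiased probability density on a regular grid
    edges : (x_edges, y_edges, z_edges) defining the grid
    r_ref : reference position of the ligand center of mass
    dr    : width of the 1D distance bins in angstroms
    """
    centers = [0.5 * (e[1:] + e[:-1]) for e in edges]
    X, Y, Z = np.meshgrid(*centers, indexing="ij")
    voxel_vol = np.prod([e[1] - e[0] for e in edges])

    # Distance of each voxel center from the reference position
    dist = np.sqrt((X - r_ref[0])**2 + (Y - r_ref[1])**2 + (Z - r_ref[2])**2)

    # Integrate the probability over all voxels falling in each distance bin
    r_edges = np.arange(0.0, dist.max() + dr, dr)
    prob, _ = np.histogram(dist, bins=r_edges, weights=p0 * voxel_vol)

    # Convert to a free energy and shift the minimum to zero
    with np.errstate(divide="ignore"):
        pmf = -kB_T * np.log(prob)
    pmf -= pmf[np.isfinite(pmf)].min()
    r_centers = 0.5 * (r_edges[1:] + r_edges[:-1])
    return r_centers, pmf
```

Because many voxels contribute to each distance bin, barriers in the 1D profile can appear lower than on the full 3D surface, which is the averaging effect discussed below for ligand 2.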
Figure 1 displays the free energy contours just before and just after the bound state and the intermediate state become connected, at 10.5 kcal/mol and 11.0 kcal/mol, respectively. It illustrates the location of the center of mass of the ligand at the transition state, which lies near the side chains of Arg 426, Glu 506, and Ser 509. The activation barrier for the dissociation of ligand 32 can be estimated as the average of 10.5 kcal/mol and 11.0 kcal/mol, and the order of magnitude of the dissociation rate can then be obtained from Eyring's transition state equation [49,50],

k = κ (kB T / h) exp(−ΔG‡ / kB T),

where ΔG‡ is the transition state free energy, T is the temperature, and κ is the transmission coefficient (the fraction of reactions that proceed to products after reaching the transition state, which we assume to be 1). This corresponds to a dissociation rate of approximately 2 × 10^5 s−1, some seven orders of magnitude larger than the experimental dissociation rate for ligand 32. Similar results were obtained in previous attempts to calculate ligand dissociation rates using unbiased MD simulations and Markov chain analysis (Wong, unpublished). If one instead uses the activation barrier from the one-dimensional potential of mean force, which is smaller at about 8 kcal/mol, the estimated dissociation rate deviates even further from the experimental value.

Figure 1. (a) Contour surfaces of the three-dimensional free energy for the center of mass of the ligand relative to FAK, as determined from umbrella sampling simulations. Each image shows two contour surfaces: the blue contour was obtained just below the free energy threshold at which two neighboring structures along the reaction coordinate become connected, and the red contour just after they become connected. The free energies of the two colored contours thus indicate the energy threshold at which the transition state starts to form. Three images are shown for ligand 41 because its more stable intermediate state gives a somewhat different picture: a transition state forms between the blue contour at 5.0 kcal/mol and the red contour at 5.5 kcal/mol, but because the barrier for dissociation from the intermediate state to the unbound state is large, this contour does not extend towards the unbound state; the contours at 6.0-6.5 kcal/mol and 8.0-8.5 kcal/mol show the sequence as the unbound state is reached at higher free energies. For ligands 32 and 2, the insets show close-up views as the transition states begin to form. (b) One-dimensional potential of mean force as a function of the distance of the center of mass of the ligand from the binding site.

The profile of the one-dimensional potential of mean force (PMF) for ligand 2 is quite different from that of 32. The global minimum corresponds to a bound structure in which the center of mass of ligand 2 is approximately 6 Å away from the position shown in the crystal structure of ligand 32. The height of the dissociation barrier is approximately 4-4.5 kcal/mol above this global minimum. (Because of the averaging involved in going from a three-dimensional free energy surface to a one-dimensional PMF, this barrier appears to be only approximately 2 kcal/mol in the one-dimensional PMF.) This is qualitatively consistent with the increased dissociation rate for ligand 2 compared to 32, but may exaggerate it quantitatively. The potential of mean force for ligand 41 appears similar to that of 32, with a stable bound state and an intermediate state. The free energy difference between the bound and unbound states is similar for the two compounds, which is consistent with them having comparable binding affinities to FAK.
However, the energy barriers differ. The barrier for the transition from the bound state to the intermediate state is smaller than that of ligand 32, about 5 kcal/mol. The barrier for the transition from the intermediate state to the unbound state is about the same, approximately 4 kcal/mol. The free energy surface shows that the bound and intermediate states start to connect with each other between the contours at 5.0 and 5.5 kcal/mol. Between 6.0 and 6.5 kcal/mol, the intermediate state starts to connect with a dissociated state. Between 8.0 and 8.5 kcal/mol, the ligand dissociates further out of the protein. The highest barrier from the potential of mean force appears to be about 5 kcal/mol, which is lower than the barrier height for ligand 32, despite 41 dissociating more slowly.

Prediction of Experimental Dissociation Rates from Simulated Exit Times

The SMD simulations reported here used a force to push the ligands out of the binding site, and consequently the ligands exited the binding site on a much shorter timescale in the simulations than in experiments. Nevertheless, the time needed to exit the binding site in the simulations was a natural choice for prediction of the experimental dissociation rate. Figure 2 shows the correlation between the exit time as measured in the SMD simulations and the experimental dissociation rates. The data were fitted to the empirical power-law relationship

log10 k_d,i = β0 + β1 log10 t_exit,i + ε_i,

where k_d,i is the experimental dissociation rate for compound i, t_exit,i is the average of the exit times over the three SMD simulations carried out for compound i, and ε_i is the residual (a minimal numerical sketch of this fit is given below). (The experimental dissociation rate for compound 2 was very high and outside the instrument range [45]. For the regressions, the dissociation rate was assumed to be 1 s−1, which was the minimum dissociation rate consistent with the measurement.) A strong correlation between simulated exit time and experimental dissociation rate was observed for all four forces used for the SMD simulations; a force of 450 pN gave the highest correlation. The values and standard errors for the exponent β1 and the correlation coefficients are shown in Table 1. The root mean squared deviation of the residuals ε_i was approximately 0.5-0.7 log10 units, which implies that the experimental dissociation rate can be predicted within a factor of approximately 4-5. Despite the strong correlation, the exit time could vary by an order of magnitude among individual SMD simulations with the same ligand. Ligand 29 appeared to be an outlier; the correlations were much higher and the root mean squared error lower when this ligand was removed. Nevertheless, these correlations are encouraging, considering the approximations employed by the SMD simulations to substantially reduce simulation time.

Prediction of Experimental Dissociation Rates from Structural and Energetic Variables

Given the wide range over which simulated exit times were found to vary for each compound, predictive models were also constructed using other independent variables that could be calculated from the simulations. For these regressions, the simulations with an applied force of 400 pN were used. While the correlation of simulated exit time with experimental dissociation rate was slightly lower for an applied force of 400 pN than for a force of 450 pN, we chose to analyze the 400 pN trajectories because they are longer and provide more structural data than the 450 pN trajectories.
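As an illustration of the log-log (power-law) fit described above, here is a minimal sketch using scipy; the exit times and rate constants below are placeholder values, not data from this work.

```python
import numpy as np
from scipy import stats

# Placeholder data: mean SMD exit times (ns) and experimental k_d (1/s)
# for a handful of hypothetical compounds; not values from the paper.
t_exit = np.array([0.8, 1.5, 3.2, 6.0, 12.5])
k_d = np.array([2.0e-1, 5.0e-2, 1.0e-2, 3.0e-3, 4.0e-4])

# Fit log10(k_d) = beta0 + beta1 * log10(t_exit), i.e., k_d ~ t_exit**beta1
fit = stats.linregress(np.log10(t_exit), np.log10(k_d))
print(f"beta1 = {fit.slope:.2f} +/- {fit.stderr:.2f}, r = {fit.rvalue:.2f}")

# Root mean squared deviation of the residuals, in log10 units
pred = fit.intercept + fit.slope * np.log10(t_exit)
rmsd = np.sqrt(np.mean((np.log10(k_d) - pred) ** 2))
print(f"RMSD of residuals = {rmsd:.2f} log10 units "
      f"(factor of ~{10**rmsd:.1f} in k_d)")
```

An RMSD of the residuals of 0.6-0.7 log10 units corresponds to predicting the rate within a factor of roughly 4-5, which is how the statement in the text follows from the fit.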
The independent variables for which predictive models were constructed included the exposed solvent-accessible surface area of the ligand and the total surface area buried between the protein and ligand, as well as the total interaction energy between ligand and protein (determined using FACTS) and its components. The following regression was used to relate the experimental dissociation rate k_d,i to an independent variable x_i:

log10 k_d,i = β0 + β1 x_i + ε_i,   (3)

where x_i is the mean value of either one of the interaction energy terms or one of the surface area terms, calculated over all three simulations for each ligand i. Figure 3 shows these regressions for the total interaction energy and its components. The total interaction energy showed a strong correlation with the experimental dissociation rate, slightly greater in absolute value than that of the simulated exit time for the same force (0.76 vs. 0.71). All of its components except the electrostatic interaction energy also showed strong correlations with the experimental dissociation rate; the strongest was for the nonpolar solvation component, at −0.79. Since the nonpolar solvation component of the FACTS energy function is proportional to the surface area [51], similar regressions were carried out using various surface areas. Figure 4 shows regressions for the ligand surface area and the total buried surface area. These also featured strong correlations, although not as high as those for the exit time or the interaction energy terms.

In order to study whether contact between the ligand and particular residues could be used to predict the dissociation rate, further regressions, also fitted using Equation (3), were conducted in which the experimental dissociation rate was correlated against the minimum distance between the ligand and each residue of the protein. These regressions were done with data points from each of the 42 simulations, and the resulting correlations are presented in Figure 5a.

Figure 6 illustrates the pathway taken by the center of mass of the ligand in each of the SMD simulations with a force of 400 pN. For most ligands, the exit pathways obtained from the three simulations are similar near the binding pocket, but may diverge once the ligands reach the surface of the protein. Ligands 30, 35, 39, and 48 showed larger variations even near the binding pocket, but the variations were not substantial.

Low Barrier in Free Energy Simulations

A free energy simulation was carried out using umbrella sampling to identify the location and magnitude of the barrier to exit for three of the ligands (32, 2 and 41). For 32, this simulation gave an estimate of the barrier height that corresponds to a dissociation rate seven orders of magnitude larger than the experimental rate. There are a number of possible reasons for this discrepancy. The transmission coefficient κ in Eyring's equation might be much less than 1, although this is unlikely to change the result by more than several orders of magnitude. The umbrella sampling simulations also exaggerated the differences in binding affinity and dissociation rate of ligand 2 compared to 32, and produced a barrier height for 41 lower than that for 32, despite the former ligand dissociating more slowly. This further demonstrates that, despite its computational cost, umbrella sampling has its own limitations in quantitatively predicting dissociation rates. Nevertheless, umbrella sampling was able to broadly separate the ligands with very high (ligands 32 and 41) and very low (ligand 2) residence times.
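To illustrate how a barrier height maps onto a dissociation rate through Eyring's equation (with κ = 1, as assumed above), here is a minimal sketch; the barrier values are the approximate numbers quoted earlier, and small differences in the assumed temperature shift the result only within the same order of magnitude.

```python
import numpy as np

# Physical constants
kB = 1.380649e-23   # J/K
h = 6.62607015e-34  # J*s
R = 1.98720425e-3   # kcal/(mol*K)

def eyring_rate(dG_kcal, T=300.0, kappa=1.0):
    """Dissociation rate (1/s) from a free energy barrier in kcal/mol,
    using Eyring's transition state expression with transmission
    coefficient kappa."""
    return kappa * (kB * T / h) * np.exp(-dG_kcal / (R * T))

# Barrier of ~10.75 kcal/mol (average of the 10.5 and 11.0 kcal/mol contours):
# roughly 1e5 1/s, the same order of magnitude as the ~2e5 1/s quoted above.
print(f"k_off ~ {eyring_rate(10.75):.1e} 1/s")

# The ~8 kcal/mol barrier from the 1D PMF gives an even faster estimate,
# i.e., an even larger deviation from experiment.
print(f"k_off ~ {eyring_rate(8.0):.1e} 1/s")
```

The exercise makes the scale of the discrepancy explicit: closing seven orders of magnitude would require roughly 9-10 kcal/mol of additional barrier height, far outside the 0.2-0.3 kcal/mol statistical precision of the free energy surface quoted later.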
The SMD simulations also produced varied exit trajectories for many of the bound ligands. While an effort was made to use long trajectories as the basis for the umbrella sampling simulations, in order to base them on the most realistic exit trajectory possible, this variation may have an influence on the calculated barrier heights. Previous simulations have similarly identified multiple pathways for the exit of benzene from the binding site of the T4 lysozyme L99A mutant [52]. This problem is not limited to SMD simulations but also affects approaches that use reaction coordinates specified by structural parameters. As Markov state analysis and the milestoning method gave dissociation rates comparable to those from the umbrella sampling simulations, perhaps the most probable explanation for the deviations from experiment is an error in the force fields. While the CHARMM36 force field for proteins has been extensively tested, the CGenFF force field relies on algorithms to choose from a large database of parameters and guess which ones are most chemically appropriate. The CGenFF force field generator also provides a "penalty score" for each parameter, indicating the level of confidence it has in that parameter. Several dihedral parameters for rotatable bonds had high penalty scores, indicating that the CGenFF force field generator had little confidence in the chemical analogy between the parameters in its database and those in the molecule. These included the bonds connecting the sulfonamide to the pyridine ring, linking the pyridine ring to the fused pyrrolo-pyridine ring, and linking the pyrrolo-pyridine ring to the benzene ring. Incorrect parameters for these bonds could possibly affect the conformational behavior of 32.

Choice of Solvation Model for SMD Simulations

Since it was found that simulations were not able to accurately predict the absolute dissociation rate for ligand 32, we sought to determine whether dissociation rates could be predicted in a more qualitative fashion using more approximate, less expensive computational techniques. In particular, a strong correlation between simulated exit time in SMD simulations and experimental dissociation rates was observed for 14 ligands, confirming the result previously obtained for three of the ligands [44]. This result was obtained despite the use of a relatively approximate simulation setup: the steered molecular dynamics simulations were started from only three initial conditions, and an implicit solvent model was used instead of explicit solvent. The FACTS implicit solvent model is a generalized Born model that provides an approximate description of solvation and hydrophobic effects. With the settings used in this work, it can estimate the electrostatic solvation energy of protein conformations to within approximately 3% of numerical solutions of the Poisson-Boltzmann equation [51]. Although explicit solvent coupled with the particle mesh Ewald method is generally considered a superior treatment of these effects, explicit solvent in this instance would also introduce frictional and hydrodynamic effects that would not be realistic given the accelerated timescale for ligand exit in the simulations.
The FACTS model is also more computationally efficient than previous generalized Born models because it makes use of geometrical calculations rather than numerical integration to calculate the Born radii, supporting our goal of developing a model for qualitative prediction of ligand dissociation rates with minimal computational cost. Outlier Nature of Ligand 29 One of the compounds, ligand 29, appeared to be an outlier in this regression, with a significantly longer simulated exit time than would be expected based on the correlation involving the other ligands. It is not clear why this is the case. The only ligand for which a bound crystal structure was available was ligand 32, so in creating starting structures for the simulations, it was assumed that all the ligands bound in a similar way. If this is not the case for ligand 29, then this ligand could be encountering different free energy barriers to exit from the binding site compared to the other ligands, possibly explaining the outlier nature of the simulated exit times for this ligand. Forces Influencing the Experimental Dissociation Rates In order to determine which forces are responsible for the wide range of experimental dissociation rates among the studied ligands, we studied the correlation between the experimental dissociation rate and average values of the total interaction energy and its components. There is a positive correlation between the experimental dissociation rate and the mean value of the total interaction energy, indicating that stronger interactions (with more negative interaction energies) result in slower dissociation. All of the components of the interaction energy also showed statistically significant correlation, except for the electrostatic component. The polar and nonpolar solvation components show the strongest correlations, suggesting a significant role for solvation effects in determining the dissociation rates. Regressions involving structural features of the protein-ligand complex were also carried out in order to further confirm the type of forces involved and to determine if these quantities could be used to create better models for predicting the dissociation rate. Figure 4 shows regressions for the ligand surface area and total buried surface area. For both surface areas, there is a negative correlation with the dissociation rate, indicating that ligands that are more deeply buried in the binding site dissociate more slowly. We also studied the correlation of experimental dissociation rate with the minimum distance between the ligand and specific residues of the protein, in order to identify the residues whose interaction with the ligand contributed the most to the differences in dissociation rates. The highest correlations were observed with the minimum distance between the ligand and specific residues of the protein. However, it is to be expected that residues that are nearby in the protein will also show similar correlations, so not all of these correlations represent residue-ligand interactions with a direct effect on the dissociation rate. In order to narrow down these interactions, additional criteria were applied. Only residues that had a minimum distance to the ligand of less than 6 Å were considered further. Of these residues, those with the highest correlations and not adjacent in sequence were chosen for further investigation. This analysis pointed to four residues that had the highest correlation: Ile 497, Leu 486, Lys 454, and Ser 509. 
Of these residues, two (Ile 497 and Leu 486) have hydrophobic side chains. Ile 453 actually has a slightly higher correlation than Lys 454, but the side chain of Ile 453 faces away from the ligand, while the hydrocarbon part of the Lys 454 side chain extends toward the ligand. Ser 509 was previously identified in the umbrella sampling simulations as being near the transition state, possibly creating some steric hindrance to ligand exit. Hydrogen bonding did not appear to play a significant role in the interactions between these residues and the ligand. A hydrogen bond analysis of the SMD trajectories did not reveal any significant hydrogen bonds between the protein and the ligands other than those between the pyrrolopyrimidine ring and Cys 502 that are present in the crystal structure of ligand 32 bound to FAK [45].

Ligands, Initial Structures and Force Fields

Simulations were performed of FAK in complex with compounds 2, 28, 29, 30, 31, 32, 33, 34, 35, 37, 39, 41, 42, and 48 from the work of Heinrich et al. [45], based on the crystal structure of FAK in complex with compound 32 (PDB code 4GU6) presented in that same work. The structures of the compounds are shown in Figure 7 and Table 3. The FAK protein was modeled using the CHARMM22 force field [53], while the ligands were modeled using the CHARMM36 CGenFF force field [54]. (The CHARMM36 force field for proteins could not be used because it is incompatible with the FACTS implicit solvent model [51] that was used for this work.) An initial reference structure of FAK in complex with ligand 32 was prepared based on this crystal structure. Hydrogen atoms were added and tautomeric states for histidine assigned using the CHARMM-GUI web server [55]. Each ligand was constructed in Schrodinger Maestro [56] from the initial reference structure, ensuring consistent atom names, and saved in mol2 format. Force fields for each of these ligands were constructed from the mol2 files using the CGenFF web server [54].

Table 3. R groups for the structures of the FAK ligands simulated in this work, corresponding to the Markush structures shown in Figure 7. Experimental data, measured using surface plasmon resonance, are taken from Ref. [45]. * The dissociation rate of compound 2 was outside the instrument range; the value shown is the minimum value possible.

Steered Molecular Dynamics Simulations

We conducted steered molecular dynamics (SMD) simulations of the exit of the various ligands from FAK in order to develop models that could be used to predict the dissociation rate. These simulations used an implicit solvent model in order to limit the computational cost. CHARMM [57] was used for the initial minimization, heating, and the SMD simulations, while NAMD [58,59] was used for the umbrella sampling simulations in explicit solvent described below. The FACTS implicit solvent model [51] was used together with the CMAP corrections originally developed for the GBSW implicit solvent model [60] and also recommended for FACTS. A switching function from 10 to 12 Å was used for the van der Waals interactions, as recommended in the FACTS documentation. A hydrophobic surface tension coefficient of 0.015 kcal/mol Å2 and a Debye-Huckel correction corresponding to an ionic strength of 0.15 M were used. Using this energy function, the initial structure of FAK in complex with each ligand was minimized under harmonic restraints.
Each system was then heated to 300 K over 1.5 ns with harmonic restraints of 1.0 kcal/mol Å2 on each non-hydrogen atom in the system, and then equilibrated for a subsequent 1 ns while the restraints were relaxed. For each system, three steered molecular dynamics simulations [35,36] were then undertaken, starting from the heated and equilibrated configuration and reassigning the velocities in order to ensure independence of the simulations. One of four possible applied forces (350 pN, 400 pN, 450 pN, or 500 pN) was used, applied in such a way as to push the N5 atom of the indole or benzimidazole ring away from the amide nitrogen of Cys 502. This pulling direction was chosen based on previous SMD simulations, in which it was found to give the lowest dissociation barrier and therefore to contribute most significantly to the dissociation rate [44]. A 2 fs time step was used together with SHAKE to constrain all bonds involving hydrogen. The temperature was maintained at 300 K using Langevin dynamics [61] with a low friction coefficient of 0.1 ps−1. Coordinates were recorded every 100 fs. The simulations were continued until the distance between these two atoms exceeded 100 Å, at which point the ligand was deemed to have exited the binding site, the simulation was terminated, and the time was recorded as the exit time. This condition was chosen to ensure that the ligand had indeed come out of the protein. From the potential of mean force shown in the Results, the ligand can already be considered outside the protein at about 20 Å. The difference between 20 Å and 100 Å might appear large, but the time to travel this distance is insignificant in comparison to the time spent inside the protein and thus makes a negligible contribution to the total exit time.

Structural and Energetic Analysis of the SMD Simulations

In order to identify structural features that could be used to predict the experimental dissociation rates of each compound, the SMD trajectories were analyzed using CHARMM, VMD [62] and MDAnalysis [63,64]. MDAnalysis was used to compute the minimum distance between the heavy atoms of each ligand and of each amino acid residue for each frame in each SMD trajectory. From this, the minimum distance of the ligand from each residue over the whole trajectory was determined for each simulation (a minimal sketch of this analysis is given below). VMD was used to compute solvent-accessible surface areas of the ligand, the protein, and the entire system, each in its own context, as well as the solvent-accessible surface area of the ligand in the context of the whole system. The buried surface area was determined as the sum of the protein and ligand surface areas, each in its own context, less the total surface area of the system. CHARMM was used to compute the interaction energy between the ligand and the protein for each frame, which was then decomposed into van der Waals, electrostatic, polar solvation, and nonpolar solvation components according to the FACTS implicit solvent model. Regression analysis was then used to determine the correlation of each of these quantities with the experimental dissociation rate, as discussed further in the Results section.

Umbrella Sampling Simulations and Free Energy Surfaces

In order to characterize the free energy surface for the exit of ligands 32, 2 and 41 from the FAK binding site, umbrella sampling simulations [40] of the three ligands in complex with FAK were carried out.
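The per-residue minimum-distance analysis and the exit-time criterion described above can be sketched as follows with MDAnalysis. The file names, atom selections, and the per-frame handling of the 100 Å criterion are illustrative assumptions rather than the exact scripts used in this work.

```python
import numpy as np
import MDAnalysis as mda
from MDAnalysis.analysis import distances

# Hypothetical topology/trajectory names for one SMD run of one ligand
u = mda.Universe("fak_ligand.psf", "smd_run1.dcd")

ligand = u.select_atoms("resname LIG and not name H*")
protein = u.select_atoms("protein")
# Atoms used for the exit criterion (illustrative selections; residue
# numbering in the input files may differ from the sequence numbering)
pulled = u.select_atoms("resname LIG and name N5")
anchor = u.select_atoms("protein and resid 502 and name N")

min_dist = {res.resid: np.inf for res in protein.residues}
exit_time_ps = None

for ts in u.trajectory:
    # Distance between the pulled ligand atom and the Cys 502 amide nitrogen
    d_exit = distances.distance_array(pulled.positions, anchor.positions)[0, 0]
    if exit_time_ps is None and d_exit > 100.0:
        exit_time_ps = ts.time  # first frame beyond the 100 A exit criterion

    # Minimum heavy-atom distance between the ligand and each residue
    # (slow but simple; adequate for a sketch)
    for res in protein.residues:
        heavy = res.atoms.select_atoms("not name H*")
        d = distances.distance_array(ligand.positions, heavy.positions).min()
        min_dist[res.resid] = min(min_dist[res.resid], d)

print("exit time (ps):", exit_time_ps)
closest = sorted(min_dist.items(), key=lambda kv: kv[1])[:10]
print("closest residues over the trajectory:", closest)
```

The per-simulation minimum distances collected this way are the independent variables used in the per-residue regressions of Figure 5a.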
The umbrella sampling simulations were based on exit pathways from initial SMD simulations carried out using low forces, in the expectation that these pathways would be more representative of the actual exit pathway. In the case of ligand 32, the umbrella sampling simulation was based on a separate SMD simulation with a force of 250 pN that had been carried out prior to the group of SMD simulations described above. In the case of ligands 2 and 41, the umbrella sampling simulations were based on the longest SMD exit simulations with a force of 350 pN taken from the main group of SMD simulations. In each case, windows for umbrella sampling were identified by first aligning the SMD trajectory to the reference structure by RMSD over the peptide backbone, and then choosing frames in which the center of mass of the ligand was at least 1 Å away from the ligand center of mass in every previously chosen window. Each such center of mass was then used as the center of a harmonic umbrella potential for one of the windows. Thirty windows were identified in this way for ligand 32 and 22 for ligand 2. In the case of ligand 41, an initial set of 40 windows was identified in this way, but the umbrella sampling simulation did not adequately sample near one of the free energy barriers, so an additional 6 windows centered on frames from the SMD simulation had to be added. For each ligand, the chosen frames from the SMD simulations were each solvated in a rhombic dodecahedral box of TIP3 water [65] with sodium chloride added to an ionic strength of 150 mM, and simulated in explicit solvent with the CHARMM36 force field [66,67] and the particle mesh Ewald method [68]. Each simulation consisted of heating to 300 K and equilibration under protocols similar to those used for the SMD simulations, followed by 20 ns of production for each window. Langevin dynamics was also used to maintain constant temperature, with a damping constant of 10 ps−1. All simulations were conducted at a constant pressure of 1 atm, with a barostat oscillation time of 100 fs and a decay time of 50 fs. The production simulations used a harmonic umbrella potential centered on the position of the center of mass in the initial frame taken from the SMD simulation, of the form

U(x) = U_FF(x) + (k/2) |r_CM − r_CM,0|^2,

where U_FF(x) is the potential given by the force field, r_CM is the center of mass of the ligand after alignment of the protein backbone with the initial frame taken from the SMD simulation, and r_CM,0 is the corresponding position taken from that frame. (These positions are shown in Figure 8.) The force constant k of the harmonic restraint was 1.0 kcal/mol Å2. The use of backbone alignment ensured that overall rotation and translation of the protein had no effect on the umbrella potential. These simulations were carried out with NAMD [58,59], and the umbrella potential was imposed using the colvars module within NAMD. The first 2 ns of each trajectory was discarded. From the remaining 18 ns of each trajectory, a three-dimensional histogram of the position of the center of mass of the ligand was constructed with a bin size of 0.25 Å in each dimension. These histograms were then combined into a three-dimensional free energy surface for each ligand using the weighted histogram analysis method (WHAM) [69,70].
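A minimal sketch of the WHAM self-consistency iteration used to combine the biased histograms is given below, written in one dimension for readability (the analysis in this work is three-dimensional); the grid, window centers, and convergence threshold are illustrative assumptions.

```python
import numpy as np

kB_T = 0.593  # kcal/mol at ~300 K (approximate)

def wham_1d(histograms, counts, bin_centers, window_centers, k=1.0,
            tol=1e-3, max_iter=10000):
    """Combine biased histograms h_j(r) from harmonic umbrella windows into
    an unbiased probability density p0(r) and free energy G(r).

    histograms     : (n_windows, n_bins) raw counts per window
    counts         : (n_windows,) total number of samples n_j per window
    bin_centers    : (n_bins,) centers r of the histogram bins
    window_centers : (n_windows,) centers r_j of the harmonic biases
    k              : harmonic force constant in kcal/(mol*A^2)
    """
    # Bias energy U_j(r) = (k/2) (r - r_j)^2 for every window and bin
    bias = 0.5 * k * (bin_centers[None, :] - window_centers[:, None]) ** 2
    boltz = np.exp(-bias / kB_T)                    # exp(-U_j(r)/kB_T)

    f = np.zeros(len(counts))                       # f_j = -kB_T ln(Z_j/Z_0)
    for _ in range(max_iter):
        # p0(r) = sum_j h_j(r) / sum_j n_j exp[(f_j - U_j(r))/kB_T]
        denom = np.sum(counts[:, None] * np.exp(f[:, None] / kB_T) * boltz,
                       axis=0)
        p0 = histograms.sum(axis=0) / denom
        p0 /= p0.sum()                              # normalize (bin width constant dropped)
        # Update the per-window free energy shifts from the current p0
        f_new = -kB_T * np.log(np.sum(boltz * p0[None, :], axis=1))
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new

    with np.errstate(divide="ignore"):
        G = -kB_T * np.log(p0)
    return G - G[np.isfinite(G)].min(), p0
```

The same self-consistent structure carries over to three dimensions, with the bins indexed by the ligand center-of-mass position instead of a single distance.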
The weighted histogram analysis method simultaneously corrects for the use of the biasing potential and constructs the minimum-variance estimate of the free energy surface by combining the histograms from each simulation using the following equations:

p0(r) = Σj hj(r) / Σj nj (Z0/Zj) exp(−Uj(r)/kB T),   (6)

G(r) = −kB T ln p0(r),   (7)

where G(r) and p0(r) are the free energy surface and the probability density in terms of the center of mass r of the ligand, hj(r) is the histogram obtained from simulation j, Uj(r) = (k/2)|r_CM − r_CM,j|^2 is the biasing potential applied to simulation j, nj is the number of data points for simulation j, and Zi/Z0 is the ratio of the partition function for simulation i to that of the unbiased simulation, evaluated from the current estimate of p0(r) as Zi/Z0 = ∫ p0(r) exp(−Ui(r)/kB T) dr. These equations were solved self-consistently by iterating until the values of ln(Zi/Z0) for all simulations i had converged to within 0.001.

Using VMD, each resulting three-dimensional free energy surface was then visualized as contour surfaces together with the protein. This made it possible to determine the locations of free energy basins and of saddle points corresponding to transition states in relation to the protein. To determine these locations, the isosurface of constant free energy was visualized at increasing free energy levels until two energy basins connected with each other, as shown in Figure 1a. Each free energy surface was also used to produce a one-dimensional potential of mean force in terms of the distance of the ligand center of mass from its position in the reference structure. This was done by integrating p0(r), as calculated from Equation (6), over all bins corresponding to a given center-of-mass distance and converting the result to a free energy. Due to statistical error in the free energy surface, it is not possible to estimate the transition state free energy with a precision much better than 0.2-0.3 kcal/mol.

Conclusions

We have studied the dissociation of 14 ligands from focal adhesion kinase using a combination of umbrella sampling free energy simulations and steered molecular dynamics simulations. While the free energy simulations of three of the ligands gave barrier heights too low to be consistent with the experimental dissociation rates, the exit times obtained with steered molecular dynamics simulations showed a strong correlation with the experimental rates, making qualitative comparisons possible. There were also strong correlations with most of the components of the interaction energy, particularly the nonpolar component, and with the distances to several nonpolar residues. Regression models were also developed that may prove helpful in predicting dissociation rates for other FAK ligands and in designing molecules to have a desired dissociation rate.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations
The following abbreviations are used in this manuscript:
MD: molecular dynamics
PMF: potential of mean force
FAK: focal adhesion kinase
CHARMM: chemistry at Harvard molecular mechanics
SMD: steered molecular dynamics
FACTS: fast analytical continuum treatment of solvation
CGenFF: CHARMM general force field
CMAP: cross-term map
Search for top squark pair production in a final state with two tau leptons in proton-proton collisions at √s = 13 TeV

A search for pair production of the supersymmetric partner of the top quark, the top squark, in proton-proton collision events at √s = 13 TeV is presented in a final state containing hadronically decaying tau leptons and large missing transverse momentum. This final state is highly sensitive to high-tan β or higgsino-like scenarios in which decays of electroweak gauginos to tau leptons are dominant. The search uses a data set corresponding to an integrated luminosity of 77.2 fb−1, which was recorded with the CMS detector during 2016 and 2017. No significant excess is observed with respect to the background prediction. Exclusion limits at 95% confidence level are presented in the top squark and lightest neutralino mass plane within the framework of simplified models, in which top squark masses up to 1100 GeV are excluded for a nearly massless neutralino.

The coupling of the charginos and neutralinos to fermion-sfermion pairs involves both gauge and Yukawa terms [9], so if the charginos and neutralinos are predominantly higgsino-like, they will preferentially couple to third-generation fermion-sfermion pairs through the large Yukawa coupling. Moreover, the Yukawa coupling to tau lepton-slepton pairs can be large for a high value of tan β even if the higgsino component is relatively small. Additionally, a large value of tan β can make the lighter mass eigenstate of the superpartner of the tau lepton (τ̃1) much lighter than the superpartners of the first- and second-generation leptons. Consequently, the chargino decays predominantly as χ̃1+ → τ̃1+ ντ or χ̃1+ → τ+ ν̃τ (charge conjugation is assumed throughout this paper), and the decay rates in the electron and muon channels are greatly reduced [11,12]. Therefore, searches for SUSY signals in electron and muon channels are less sensitive to this scenario. We focus on the top squark decays t̃1 → b χ̃1+ → b τ̃1+ ντ → b τ+ χ̃10 ντ and t̃1 → b χ̃1+ → b τ+ ν̃τ → b τ+ χ̃10 ντ. The χ̃10 is assumed to be the lightest SUSY particle (LSP). Being neutral and weakly interacting, it leaves no signature in the detector, resulting in an imbalance in transverse momentum pT. The neutrinos produced in the decay chains also contribute to the pT imbalance. Hence, the events of interest contain two tau leptons, two b quarks, and a pT imbalance. The decay chains are depicted by the four diagrams in figure 1 within the simplified model spectra (SMS) framework [13,14]. It is assumed that the χ̃1+ decays via τ̃1+ ντ or τ+ ν̃τ with equal probability. This search is performed using proton-proton collision events at a center-of-mass energy of 13 TeV, recorded by the CMS experiment at the CERN LHC.
The data sample corresponds to a total integrated luminosity of 77.2 fb−1, recorded in 2016 and 2017.

Searches for top squark pair production in leptonic and hadronic final states have been performed by the CMS [15][16][17][18][19][20][21][22] and ATLAS [23][24][25][26][27] Collaborations, establishing limits on top squark masses in the framework of SMS models. The ATLAS Collaboration performed a search [28] based on 2015 and 2016 data probing the same final state as that used here, but optimized for a gauge-mediated SUSY breaking scenario with an almost massless gravitino as the source of missing momentum. Therefore, final states containing hadronically decaying tau leptons have not been extensively explored in the context of top squark searches motivated by high-tan β and higgsino-like scenarios.

The paper is organized as follows. A brief description of the CMS detector is presented in section 2, followed by descriptions of the event simulation in section 3 and of the reconstruction in section 4. The event selection and search strategy are detailed in section 5. We explain the various methods used for background estimation in section 6, the systematic uncertainties are discussed in section 7, and the results are provided in section 8. Finally, the analysis is summarized in section 9.

The CMS detector

The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in ref. [29]. Events of interest are selected using a two-tiered trigger system [30]. The first level, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100 kHz within a time interval of less than 4 µs. The second level, known as the high-level trigger, consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1 kHz before data storage.

Monte Carlo simulation

Simulation is used to estimate several of the SM backgrounds. The predictions for signal event rates are also estimated using simulation, based on simplified SUSY signal models. The simulation is corrected for small discrepancies observed with respect to collision data using a number of scale factors (SFs); these will be discussed in later sections. The pair production of top quarks (tt) is generated at next-to-leading order (NLO) in αS using powheg v2 [31][32][33][34][35]. The same powheg generator has been used for the single top quark t-channel process, whereas powheg v1 has been used for the tW process [36]. The MadGraph5 amc@nlo v2.2.2 (v2.4.2 for 2017) [37] generator is used at leading order (LO) for modeling the Drell-Yan+jets (DY+jets) and W+jets backgrounds, which are normalized to their next-to-next-to-leading order (NNLO) cross sections. The MadGraph5 amc@nlo generator is also used at NLO for simulating the diboson, VH, and ttV (V = W or Z) backgrounds.
For the 2016 analysis, the parton shower and hadronization are simulated with pythia v8.212 [38] using the underlying event tunes CUETP8M2T4 [39] (for tt only) or CUETP8M1 [40]. For the 2017 analysis, pythia v8.230 with the tune CP5 [41] is used. The CMS detector response is modeled using Geant4 [42], and the simulated events are then reconstructed in the same way as collision data.

We assume a branching fraction of 50% for each of the two decay modes of the chargino, χ̃1+ → τ̃1+ ντ and χ̃1+ → τ+ ν̃τ. Each of the four diagrams in figure 1 therefore contributes 25% of the generated signal events. The masses of the SUSY particles appearing in the decay chain are determined by the parameterization

m(χ̃1±) = [m(t̃1) + m(χ̃10)]/2,  m(τ̃1) = m(χ̃10) + x [m(χ̃1±) − m(χ̃10)],  x ∈ {0.25, 0.5, 0.75},  m(ν̃τ) = m(τ̃1).   (3.1)

In this parameterization, the chargino mass is fixed to be the mean of the top squark and χ̃10 masses. The masses of the leptonic superpartners are set by the value of x for a given pair of top squark and χ̃10 masses. The kinematic properties of the final-state particles in each of the decay chains depicted in figure 1 therefore depend on the choice of x.

• x = 0.25: the mass of the lepton superpartner is closer to that of the χ̃10 than to that of the χ̃1±. Hence, the upper left diagram in figure 1 produces lower-energy tau leptons than the upper right. The lower two diagrams both typically produce two tau leptons with a large difference in energy.
• x = 0.75: the masses of the τ̃1 and the χ̃1± are relatively close, so the upper left diagram in figure 1 produces more energetic tau leptons than the upper right. The lower two diagrams produce the same energy asymmetry as in the case of x = 0.25.
• x = 0.5: the tau leptons in all four diagrams have similar energies.

In fact, when all four diagrams are taken into account, the distributions of the kinematic properties are found to be very similar for the three different values of x, for a given set of chargino and LSP masses. It is important to note, however, that the choice of chargino mass does affect the overall sensitivity. For instance, if the chargino is very close in mass to the top squark, then the momenta of the b jets are reduced and those of the remaining decay products are increased. This results in an increase in the overall sensitivity, provided the b jet pT values are within the acceptance. On the other hand, if the chargino is very close in mass to the LSP, then an overall loss of sensitivity is expected. Such scenarios are not explored in this paper, where the default chargino mass given in eq. (3.1) is taken throughout. The polarizations of the tau leptons originating from SUSY cascade decays, which have been found to be useful for studying SUSY signals [12], have not been exploited here.

Event reconstruction

The particle-flow (PF) algorithm [49] aims at reconstructing each individual particle in an event, with an optimized combination of information from the various components of the CMS detector. The energy of photons is obtained from the ECAL measurement, whereas the momentum of electrons is determined from a combination of the measurement of momentum by the tracker, the energy of matching ECAL deposits, and the energy of all bremsstrahlung photons consistent with originating from the track. The momentum of muons is obtained from the curvature of the corresponding track.
The energy of charged hadrons is determined from a combination of the momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.

Reconstruction of jets is performed by clustering PF objects using the anti-kT algorithm [50,51] with a distance parameter of R = 0.4. Jet momentum is determined as the vectorial sum of all particle momenta in the jet, and is found in simulation to be, on average, within 5-10% of the generated momentum over the whole pT spectrum and detector acceptance. Additional proton-proton interactions within the same or nearby bunch crossings (pileup) can contribute spurious tracks and calorimetric energy deposits, increasing the apparent jet momentum. In order to mitigate this effect, tracks identified as originating from pileup vertices are discarded, and an offset is applied to correct for the remaining contributions [52]. Jets are calibrated using both simulation and data studies [52]. Additional selection criteria are applied to each jet to remove those potentially dominated by instrumental effects or reconstruction failures [53]. Jets with pT > 20 GeV and |η| < 2.4 are used in this analysis.

Vertices reconstructed in an event are required to be within 24 cm of the center of the detector in the z direction, and to have a transverse displacement from the beam line of less than 2 cm. The vertex with the largest value of summed physics-object pT2 is taken to be the primary pp interaction vertex. The physics objects used for this purpose are jets, clustered using the aforementioned jet finding algorithm with the tracks assigned to the vertex as inputs, and the associated missing transverse momentum, taken as the negative vector sum of the pT of those jets.

Jets originating from the fragmentation of b quarks are identified as b-tagged jets by using the combined secondary vertex (CSVv2) algorithm [54], which utilizes information from displaced tracks and reconstructed secondary vertices. An operating point is chosen corresponding to a signal efficiency of 70%, with a mistagging probability of about 1% for light jets (from up, down, and strange quarks, and gluons) and 15% for jets originating from charm quarks.

The momentum resolution for electrons with pT ≈ 45 GeV from Z → ee decays ranges from 1.7 to 4.5%. It is generally better in the barrel region than in the endcaps, and also depends on the bremsstrahlung energy emitted by the electron as it traverses the material in front of the ECAL [55]. Electrons with pT > 20 GeV and |η| < 2.4 are used for this analysis.

Muons are measured with detection planes made using three technologies: drift tubes, cathode strip chambers, and resistive plate chambers. Matching muons to tracks measured in the silicon tracker results in a pT resolution of 1% in the barrel and 3% in the endcaps, for muons with a pT of up to 100 GeV. The pT resolution in the barrel is better than 7% for muons with a pT of up to 1 TeV [56]. This search uses muons with pT > 20 GeV and |η| < 2.4. Isolation criteria are imposed on the lepton (electron and muon) candidates to reject leptons originating from hadronic decays.
The isolation variable used for this purpose is defined as the scalar sum of the pT of reconstructed charged and neutral particles within a cone of radius ΔR = √[(Δη)2 + (Δφ)2] = 0.3 (0.4) around the electron (muon) candidate track, excluding the lepton candidate, divided by the pT of the lepton candidate. Charged particles not originating from the primary vertex are excluded from this sum, and a correction is applied to account for the neutral components originating from pileup, following the procedure described in ref. [55]. This relative isolation is required to be less than 15 (20)% for electrons (muons). The electron and muon candidates passing the aforementioned criteria are used to identify a control region (CR) that is used for the estimation of the background from top quark pair production, as explained in section 6.1.

The missing transverse momentum vector is computed as the negative vector sum of the pT of all the PF candidates in an event, and its magnitude is denoted as pTmiss. The pTmiss is modified to account for the energy calibration of the reconstructed jets in the event. The energy calibration of the PF candidates that have not been clustered into jets is also taken into account. Anomalous high-pTmiss events may appear because of a variety of reconstruction failures, detector malfunctions, or backgrounds not originating from collisions (e.g., particles in the beam halo). Such events are rejected by filters that are designed to identify more than 85-90% of the spurious high-pTmiss events with a misidentification rate of less than 0.1% [57]. In order to minimize the effect of extra noise in the ECAL endcaps in 2017, forward jets with uncalibrated pT < 50 GeV and 2.65 < |η| < 3.14 are removed from the calculation of pTmiss in both data and simulation. This improves the agreement between simulation and data at the cost of degrading the pTmiss resolution by only a few percent.

The hadrons-plus-strips algorithm [58] is used to reconstruct τh candidates: one charged hadron and up to two neutral pions, or three charged hadrons, consistent with originating from the decay of a tau lepton. The probability of an electron or muon being misidentified as a τh candidate is greatly reduced by combining information from the tracker, calorimeters, and muon detector. The isolation of the τh candidate is determined from the presence of reconstructed particles within a radius of ΔR = 0.3 around the τh axis that are not compatible with the decay, and is a useful quantity to distinguish between jets and τh decays. In order to distinguish between jets originating from quarks or gluons and genuine hadronic tau lepton decays, a multivariate discriminant is calculated from information including the isolation and the measured lifetime. The τh candidates are selected with pT > 40 GeV, |η| < 2.1, and the "tight" working point of the above discriminant. This working point has an efficiency of ≈50% with a misidentification probability of ≈0.03%. The "loose" working point, which has an efficiency of ≈65% and a misidentification probability of ≈0.07%, is used for estimating the background from misidentified τh candidates.

Event selection

The sources of pTmiss in the signal events are the neutrinos and the weakly interacting neutralinos, which are correlated with the visible objects (in particular the τh decays). In contrast, pTmiss in the SM background processes is primarily due to neutrinos.
This difference can be exploited by first constructing the transverse mass mT, defined as

mT^2(vis, inv) = mvis^2 + minv^2 + 2 [ET^vis ET^inv − pT^vis · pT^inv],   (5.1)

where ET = √(pT^2 + m^2), and the masses of the visible (vis) and invisible (inv) particles are denoted by mvis and minv, respectively. The value of mT has a maximum at the mass of the parent of the visible and the invisible particles. To account for multiple sources of missing momentum in the signal process, the "stransverse mass" [59,60] is defined as

mT2 = min over pT^inv1 + pT^inv2 = pTmiss of [ max( mT(vis1, inv1), mT(vis2, inv2) ) ].   (5.2)

Since the momenta of the individual invisible particles in eq. (5.2) are unknown, pTmiss is divided into two components (pT^inv1 and pT^inv2) in such a way that the value of mT2 is minimized. If mT2 is computed using the two τh candidates as the visible objects, vis1 and vis2, then its upper limit in the signal will be at the chargino mass. This is different from the SM background processes; for example, in tt events, the upper limit is at the W boson mass. For this analysis, mT2 is calculated with the masses of the invisible particles in eq. (5.1) set to zero [61]. The signal and background processes can be further separated by utilizing the total visible momentum of the system. This is characterized using the quantity HT, which is defined as the scalar sum of the pT of all jets and the τh candidates in the event. Jets lying within a cone of ΔR = 0.3 around either of the two selected τh candidates are excluded from this sum to avoid double counting. Being a measure of the total energy of the system, HT is sensitive to the mass of the top squark.

Signal events are selected using τhτh triggers, where both τh candidates are required to have |η| < 2.1, and pT > 35 or 40 GeV depending on the trigger path. The τhτh trigger has an efficiency of ≈95% for τh candidates that pass the offline selection. The trigger efficiencies in simulation are corrected to match the efficiencies measured in data. For the offline selection, signal events are required to have pTmiss > 50 GeV, HT > 100 GeV, at least two oppositely charged τh candidates with pT > 40 GeV and |η| < 2.1, and at least one b-tagged jet with pT > 20 GeV and |η| < 2.4. The requirements on pTmiss and on the number of b-tagged jets (nb) help to reduce the contributions from DY+jets and from SM events composed entirely of jets produced through the strong interaction, referred to as multijet events. Distributions of the variables pTmiss, mT2, and HT after this selection are shown in figure 2 for data and the predicted background, along with representative signal distributions. The background prediction includes tt, DY+jets, events with misidentified τh, and other rare SM processes. Detailed descriptions of the background estimation methods are presented in section 6. Signal events with different top squark and LSP masses populate different regions of the phase space. For example, regions with low pTmiss, mT2, and HT are sensitive to signals with low top squark masses. On the other hand, events with high pTmiss, mT2, and HT are sensitive to models with high top squark and low LSP masses. In order to obtain the highest sensitivity over the entire phase space, the signal region (SR) is divided into 15 bins as a function of the measured pTmiss, mT2, and HT, which are illustrated in figure 3.

Background estimation

The most significant background is tt production, either with two genuine τh decays or because of jets being misidentified as τh candidates.
Because of theoretical uncertainties in the tt background modeling in the SR (which contains events that populate the tails of the kinematic distributions), we estimate the tt contribution to events with two genuine τh decays using CRs in data, as discussed below. The background contribution from DY events is typically minor in the most sensitive bins, and has been estimated using simulation. To account for residual discrepancies between data and the LO DY sample, correction factors for simulated events are derived from DY-enriched dimuon CRs in data and simulation as functions of the dimuon invariant mass and pT. The contribution from multijet events is negligible because of the requirements pTmiss > 50 GeV and nb ≥ 1. Other less significant backgrounds, such as W+jets, VV, VH, and ttV, are also estimated from simulation. The overall SM contribution from jets being misidentified as τh candidates is estimated using CRs in data. In the following sections we detail the estimation of those backgrounds that are obtained from CRs in data.

Tau lepton pairs from top quark production

The estimation of the background from tt events in which there are two genuine τh decays is based on the method described in ref. [62]. The predicted yields in each SR bin from simulation are multiplied by correction factors derived in a tt-enriched CR. The tt-enriched CR is identified by selecting events with an oppositely charged eµ pair. These events are selected with eµ triggers, and are required to satisfy the same offline requirements as the SR, with the e and µ replacing the two τh candidates. The eµ triggers are ≈95% efficient for lepton candidates. In addition, in order to reduce possible DY contamination (from the tail of the eµ invariant mass distribution in the process Z/γ* → ττ → eµ) in this CR, events are vetoed if the invariant mass of the eµ system lies in the range 60 < m_eµ < 120 GeV. This selection on the dilepton invariant mass is more effective in the µµ CR to be discussed later, but is also applied here in order to be consistent. Other objects, such as jets and b-tagged jets, are selected using the same kinematic requirements and working points as in the SR. The definitions of the search variables for this CR are the same as those in the SR, except that the eµ pair is used in place of the τh pair in evaluating them. The purity of tt in the CR (i.e., the fraction of tt events in each bin) is measured in simulation to be 85%, as shown in figure 4 (upper panels). Residual differences between data and simulation are quantified by SFs. For a given SR bin i we define

SF_i = N(eµ CR, data, i) / N(eµ CR, MC, i),

where the numerator and the denominator represent the yields in the CR in data and simulation, respectively. The corrected tt yield in each bin of the SR is then obtained as

N(SR, tt, corrected, i) = SF_i × N(SR, tt MC, i),

where N(SR, tt MC, i) is the prediction from simulated tt events in the SR. An alternative way of interpreting this method is that we take the tt spectrum from a tt-enriched eµ CR in data and extrapolate it to the τhτh SR by accounting for the differences between the properties of the τhτh and eµ final states with a ratio of simulated yields. The SFs in the different bins, shown in figure 4 (middle row) for both 2016 and 2017 data, are mostly found to be within ≈10% of unity. Note that separate SFs for bins 14 and 15 are shown for information, but these are merged and a single SF is used in subsequent calculations to reduce the statistical uncertainty.
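A minimal numerical sketch of this per-bin correction, using made-up yields rather than the values shown in figure 4:

```python
import numpy as np

# Hypothetical per-bin yields (not the paper's numbers)
n_data_emu_cr = np.array([120.0, 80.0, 35.0, 12.0])   # data in the e-mu CR
n_mc_emu_cr   = np.array([113.0, 86.0, 31.0, 14.0])   # simulated tt in the e-mu CR
n_mc_tt_sr    = np.array([40.0, 22.0, 9.0, 3.0])      # simulated tt in the tau-tau SR

# Per-bin scale factor and corrected tt prediction in the SR
sf = n_data_emu_cr / n_mc_emu_cr
tt_pred_sr = sf * n_mc_tt_sr

# Approximate statistical uncertainty on the SF from the CR yields (Poisson)
sf_rel_unc = np.sqrt(1.0 / n_data_emu_cr + 1.0 / n_mc_emu_cr)
for i, (s, p, u) in enumerate(zip(sf, tt_pred_sr, sf_rel_unc), start=1):
    print(f"bin {i}: SF = {s:.2f} +/- {s*u:.2f}, corrected tt = {p:.1f}")
```

The cancellation of systematic uncertainties mentioned in section 7 follows from the same structure: any effect that shifts the eµ CR and the simulated SR yields in the same way largely drops out of the product SF_i × N(SR, tt MC, i).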
In order to cross-check the validity of this method, the same technique is applied to an independent tt-enriched CR with an oppositely charged µµ pair in the final state. These events are selected with single muon triggers that reach ≈95% efficiency. The event selection for the µµ CR is similar to that for the eµ CR. This cross-check evaluates the effect of possible contamination from DY events (the branching fraction of Z/γ * → µµ being much higher than that of Z/γ * → τ τ → eµ), and is also useful for checking any dependence of the SFs on lepton reconstruction. The differences between the SFs calculated in the main and cross-check CRs, shown in figure 4 (bottom row), are small (within ≈10% in most cases), and are taken as an uncertainty in the SFs. These are added in quadrature to the statistical uncertainty in the SFs, and propagated as a contribution to the uncertainty in the final tt prediction. Misidentified hadronically decaying tau lepton candidates The next largest component of the total background originates from quark or gluon jets that are misidentified as a τ h candidate. The largest sources of such events in the SR are semileptonic and hadronic tt decays. We estimate this contribution to the SR following a strategy [63] that uses the yields in τ h τ h CRs, defined by inverting the requirements on the working point of the τ h identification. For a genuine τ h passing the loose identification requirements, we define g as the probability that it also passes the tight identification requirements. We define f as the corresponding probability for a misidentified τ h candidate. We then define N gf as the number of τ h τ h events where the τ h candidate with the highest p T is genuine and that with the second-highest p T is misidentified, with other terms (N fg , N gg , and N ff ) defined similarly. We also define N TL as the number of τ h τ h events where the candidate with the highest p T passes the tight identification criteria and that with the second-highest p T fails, -11 -but passes the loose criteria, with other terms (N LT , N LL , and N TT ) defined similarly. If N is the total number of events, the following set of equations can be constructed: where the subscripts 1 and 2 on g and f refer to the τ h candidates with the highest and second-highest p T , respectively. The above equations can be inverted to give the numbers of genuine and misidentified τ h τ h candidate events in the SR: Here N gen TT represents the number of events in the SR with two genuine τ h candidates in the final state, and N misid TT the number of events in the SR with one or two misidentified τ h candidates. The probability g is determined using tt simulation, with the τ h candidate being matched to a generated hadronically decaying tau within a cone of radius ∆R = 0.3. The value of g is calculated as the ratio between the number of genuine τ h jets passing the tight identification criteria and the number passing the loose criteria. It is evaluated as a function of the τ h decay modes and p T and is observed to be about 80% with very little dependence on the p T of the τ h . The dependence on the decay mode is observed to be at the 10% level. The misidentification rate f is estimated using a multijet-enriched CR in data. This CR is defined by requiring a same-charge τ h pair satisfying the τ h selection criteria, and by requiring p miss T < 50 GeV. The misidentification rate for a single τ h candidate is estimated from this CR using the following two definitions: . 
(6.5) Here, the term τ i h (X) denotes the number of events where the candidate with the highest (i = 1) or second-highest (i = 2) p T passes the tight (X=T) or loose (X=L) identification criteria. In each of the two definitions above, the working point of one of the τ h candidates in the numerator is changed with respect to the denominator, so they could be expected -12 - JHEP02(2020)015 to yield the same result. However, if the probability of one τ h candidate passing the tight criteria is correlated with the probability of the other to pass, differences may occur. In practice, differences of up to ≈10%, depending on the p T and the decay mode of the τ h , are observed between the two definitions. These differences are used to estimate the uncertainty in this method. The misidentification rate is measured as a function of the τ h decay modes and p T . It is found to be around 35% with a mild dependence on the p T of the τ h candidate. The variations with decay mode are up to the 20% level. It has been found in simulation studies [63] that the misidentification rate also depends on the flavor of the parton corresponding to the jet that is misidentified as a τ h . Since the jet flavor cannot be reliably determined in data, an additional 15% uncertainty in f is included. This uncertainty is evaluated as the relative difference between the average and the maximum (or minimum) of the misidentification rates corresponding to the different jet flavors (up, down, strange, and bottom quarks, and gluons), estimated using simulated W+jets events. Systematic uncertainties There are several sources of systematic uncertainty that are propagated to the prediction of the final signal and background yields. The most significant is the uncertainty in the modeling of the identification and isolation requirements (ID-iso) [58] of the τ h candidates, estimated to be approximately 10% for all processes in 2016, and 20% in 2017. The other sources of uncertainty affecting all processes include the jet energy scale (JES) and jet energy resolution (JER), the τ h energy scale, the effect of unclustered components in calculating p miss T , pileup reweighting, and the b tagging efficiency. The simulation is reweighted to make its pileup distribution identical to that in data. The pileup in data depends on the measured total inelastic cross section [64], which is varied by ±2.5% to obtain the uncertainty in this correction. Since the tt contribution in the SR is obtained by multiplying the simulated yield by a SF, defined as the ratio between the number of events in data and simulation, several uncertainties cancel to first order. As mentioned earlier, the difference between the tt SFs obtained in the eµ and µµ CRs, added in quadrature with the statistical uncertainty, is taken as the uncertainty in this method. The difference between the two definitions of the misidentification rate, as defined in eq. (6.5), is taken to be the uncertainty in the misidentification rate, while the flavor dependence of the rate is accounted for by adding an additional 15% uncertainty. The factorization (µ F ) and renormalization (µ R ) scales used in the simulation are varied up and down by a factor of two, avoiding the cases in which one is doubled and the other is halved. The SysCalc package [65] has been used for this purpose. The resulting uncertainty is estimated to be less than 6% for both signal and background processes estimated from simulation. 
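The explicit system of tight/loose counting equations from the misidentified-τh section above did not survive extraction. The sketch below is a standard realization of that description rather than a copy of the paper's equations: it builds the transfer matrix from the genuine-τh efficiency g and the misidentification rate f, inverts it, and returns the genuine and misidentified contributions to the fully tight selection. The event counts are invented; g ≈ 0.8 and f ≈ 0.35 are simply the typical values quoted in the text.

```python
import numpy as np

def misid_tau_background(n_tt, n_tl, n_lt, n_ll, g1, g2, f1, f2):
    """Solve the tight/loose counting equations for the genuine/misidentified yields
    and return the estimated contributions to the fully tight (signal) selection."""
    # rows: observed categories (TT, TL, LT, LL); columns: underlying (gg, gf, fg, ff)
    A = np.array([
        [g1 * g2,             g1 * f2,             f1 * g2,             f1 * f2            ],
        [g1 * (1 - g2),       g1 * (1 - f2),       f1 * (1 - g2),       f1 * (1 - f2)      ],
        [(1 - g1) * g2,       (1 - g1) * f2,       (1 - f1) * g2,       (1 - f1) * f2      ],
        [(1 - g1) * (1 - g2), (1 - g1) * (1 - f2), (1 - f1) * (1 - g2), (1 - f1) * (1 - f2)],
    ])
    n_gg, n_gf, n_fg, n_ff = np.linalg.solve(A, np.array([n_tt, n_tl, n_lt, n_ll], float))
    n_gen_tt = g1 * g2 * n_gg                                      # both taus genuine
    n_misid_tt = g1 * f2 * n_gf + f1 * g2 * n_fg + f1 * f2 * n_ff  # at least one misidentified
    return n_gen_tt, n_misid_tt

# illustrative event counts only; g ~ 0.8 and f ~ 0.35 as quoted in the text
print(misid_tau_background(120, 260, 240, 900, g1=0.80, g2=0.80, f1=0.35, f2=0.35))
```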
A 2.5% uncertainty in the measured integrated luminosity is used for 2016 [66], reducing to 2.3% for 2017 [67]. The uncertainty in the Z boson p T correction applied to DY+jets events is taken to be equal to the deviation of the correction -13 -JHEP02(2020)015 factor from unity. A normalization uncertainty of 15% is assigned to the production cross sections of the background processes that are evaluated directly from simulation [68][69][70][71][72][73][74]. Since the simulation of the detector for signal events is performed using FastSim, the signal yields are corrected to account for the differences in the τ h identification efficiency with respect to the Geant4 simulation used for the backgrounds. The statistical uncertainty in this correction is propagated as its uncertainty. The FastSim package has a worse p miss T resolution than the full Geant4 simulation, resulting in a potential artificial enhancement of the signal yields. To account for this, the signal yields are corrected, and the uncertainty in the resulting correction to the yield is estimated to be 5-10%. The uncertainties in the signal and background from all sources are presented in table 1. Upper and lower numbers correspond to the relative uncertainties due to the upward and downward variations of the respective source. These values are the weighted averages of the relative uncertainties in the different search bins with the weights being the yields in the respective bins. The tabulated sources of systematic uncertainties are modeled by lognormal distributions [75] in the likelihood function used for the statistical interpretation of the results, which is discussed in section 8. These uncertainties are considered not to be correlated with each other, but correlated across the 15 search bins. In addition, the statistical uncertainties are also taken into account and are considered to be uncorrelated across the bins. Results We present the observed and expected yields in all 15 search bins in table 2 along with their uncertainties. Figure 5 shows the observed data in all of the search bins, compared to the signal and background predictions. As expected, the dominant contributions in the sensitive signal bins are from tt and misidentified τ h backgrounds. In cases where the background prediction of a process in a given bin is negligible, the statistical uncertainty is modeled by a gamma distribution [75] in the likelihood function used for the statistical interpretation, and the Poissonian upper limit at 68% confidence level (CL) is shown as a positive uncertainty in the table. The number of events observed in data is found to be consistent with the SM background prediction. The test statistic used for the interpretation of the result is the profile likelihood ratio q µ = −2 ln (L µ /L max ), where L µ is the maximum likelihood for a fixed signal strength µ, and L max is the global maximum of the likelihood [75]. We set upper limits on signal production at 95% CL using a modified frequentist approach and the CL s criterion [76,77], implemented through an asymptotic approximation of the test statistic [78]. In this calculation all the background and signal uncertainties are modeled as nuisance parameters and profiled in the maximum likelihood fit. Final results are obtained by combining the yields from 2016 and 2017 data sets. 
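For reference, the CLs criterion named above can be written out as follows; this is the standard convention of refs. [76, 77] with profiled nuisance parameters θ, not an equation reproduced from the paper. A signal strength µ is excluded at 95% confidence level when CLs(µ) ≤ 0.05.

```latex
q_\mu \;=\; -2\ln\frac{L\!\left(\mu,\hat{\hat{\theta}}_{\mu}\right)}{L\!\left(\hat{\mu},\hat{\theta}\right)}\,,
\qquad
\mathrm{CL_s}(\mu) \;=\; \frac{P\!\left(q_\mu \ge q_\mu^{\mathrm{obs}} \,\middle|\, \mu\,s+b\right)}
{P\!\left(q_\mu \ge q_\mu^{\mathrm{obs}} \,\middle|\, b\ \mathrm{only}\right)}\,.
```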
The systematic uncertainties due to JES, factorization and renormalization scales, misidentification rate measurement, and FastSim p miss T correction are taken as correlated, and the rest of the uncertainties are treated as uncorrelated between the two data sets. The results are presented as observed and expected exclusion limits in the top squark and LSP mass -14 - JHEP02(2020)015 Uncertainty source x = 0. The most sensitive search bins for the higher top squark masses are 14 and 15. The observed data in these two bins are lower than the total background prediction, resulting in the observed limit being higher than the expected one. Hence, even though there are more events in data than prediction overall, the observed mass limit is stronger than expected. The excesses are primarily in bins 2, 5, 7, and 12 which are more significant for low top squark masses, hence the observed limit is slightly worse than expected in that region. The limits become weaker with decreasing ∆m = m t 1 − m χ 0 1 , corresponding to a parameter space with final-state particles having lower momentum and hence less sensitivity. Summary The signature of top squark pair production in final states with two tau leptons has been explored in data collected with the CMS detector during 2016 and 2017, corresponding to integrated luminosities of 35.9 and 41.3 fb −1 , respectively. The search was performed in the final state containing an oppositely charged hadronic tau lepton pair, at least one jet identified as likely to originate from the fragmentation of a b quark, and missing transverse momentum. The dominant standard model backgrounds were found to originate from top quark pair production and processes where jets were misidentified as hadronic tau lepton decays. Control samples in data were used to estimate these backgrounds, while other backgrounds were estimated using simulation. No significant excess was observed, and exclusion limits on the top squark mass in terms of the mass of the lightest neutralino were set at 95% confidence level within the framework of simplified models where the top squark decays via a chargino to final states including tau leptons. In such models, top squark masses are excluded up to 1100 GeV for an almost massless neutralino, and LSP masses up to 450 GeV are excluded for a top squark mass of 900 GeV. These results probe a region of the supersymmetric parameter space corresponding to high-tan β and higgsino-like scenarios. Acknowledgments We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
9,007.6
2020-02-01T00:00:00.000
[ "Physics" ]
Haploinsufficiency of Col5a1 causes intrinsic lung and respiratory changes in a mouse model of classical Ehlers‐Danlos syndrome Abstract The Ehlers‐Danlos syndromes (EDS) are inherited connective tissue diseases with primary manifestations that affect the skin and the musculoskeletal system. However, the effects of EDS on the respiratory system are not well understood and are described in the literature as sporadic case reports. We performed histological, histomorphometric, and the first in‐depth characterization of respiratory system function in a mouse model of classical EDS (cEDS) with haploinsufficiency of type V collagen (Col5a1+/−). In young adult male and female mice, lung histology showed reduced alveolar density, reminiscent of emphysematous‐like changes. Respiratory mechanics showed a consistent increase in respiratory system compliance accompanied by increased lung volumes in Col5a1+/− compared to control mice. Flow–volume curves, generated to mimic human spirometry measurements, demonstrated larger volumes throughout the expiratory limb of the flow volume curves in Col5a1+/− compared to controls. Some parameters showed a sexual dimorphism with significant changes in male but not female mice. Our study identified a clear respiratory phenotype in the Col5a1+/− mouse model of EDS and indicated that intrinsic respiratory and lung changes may exist in cEDS patients. Their potential impact on the respiratory function during lung infections, other respiratory disease processes, or insults may be significant and justify further clinical evaluation. | INTRODUCTION The Ehlers-Danlos syndromes (EDS) represent a genetically and clinically heterogeneous group of inherited connective tissue diseases (for a review see Malfait et al., 2020) that share clinical features including joint hypermobility, tissue fragility, and skin changes with a spectrum of severity that goes from subclinical to life-threatening. The 2017 international Ehlers-Danlos syndrome classification describes 13 different types of EDS and a fourteenth rare subtype has recently been identified (Blackburn et al., 2018;Malfait et al., 2017). Variants in 20 distinct genes can cause EDS and negatively affect the extracellular matrix (ECM). Primary defects in fibrillar collagens (types V, III and I), in their processing, or in the synthesis of proteoglycans are associated with the majority of EDS cases (Malfait et al., 2020). Different EDS types have a characteristic profile of clinical features and major criteria for each type have been described (Malfait et al., 2020). These can be helpful in the differential diagnosis, which is often challenging. Although the disease prevalence for rare forms of EDS has not been determined, the incidence of some of the most common types of EDS, such as classical EDS (cEDS) and vascular EDS (vEDS) is estimated at approximately 1 in 20,000 and 1 in 50,000-200,000, respectively Ghali et al., 2019;Symoens et al., 2012). Although EDS is frequently considered a disorder of the integumentary, vascular and musculoskeletal systems, most EDS types are systemic conditions and can affect additional organs and systems of the body, including the gastrointestinal, genitourinary, cardiovascular, respiratory, craniofacial, and ocular systems (Malfait et al., 2020). The effects of EDS on the respiratory system are not well characterized and are mostly published as case reports. 
EDS has been associated with the development of respiratory complications including bullous emphysema (Ruggeri et al., 2015), pulmonary cysts and nodules (Berezowska et al., 2018;Herman & McAlister, 1994;Kawabata et al., 2010), pneumothorax (Boone et al., 2019;Dowton et al., 1996;Kadota et al., 2016;Malfait et al., 2020;Nakagawa et al., 2015;Shalhub et al., 2019), hemoptysis (Dowton et al., 1996;Hatake et al., 2013), tracheobronchomegaly (Girit et al., 2018), asthma symptoms, and other respiratory problems such as obstructive sleep apnea (Gaisl et al., 2017;Garcia Saez et al., 2014;Harris et al., 2013;Morgan et al., 2007;Stoberl et al., 2019). In one case, series of 252 patients with EDS, including 33 with cEDS, clinically significant pulmonary symptoms were described in nearly half of all patients (Sheehan et al., 2017). In the cEDS cohort, clinically significant shortness of breath and chest pain was found in a third of the patients (Sheehan et al., 2017). The respiratory manifestations in EDS have recently been reviewed (Chohan et al., 2021;Parducci, 2021) and were described in patients affected with vEDS, cEDS and hEDS (hypermobile) types. Respiratory abnormalities may be associated with significant morbidity and may be under-recognized in EDS (Chohan et al., 2021;Parducci, 2021). Importantly, although connective tissue laxity is a prominent feature of EDS, its effects at the level of the lung tissue and on the respiratory function of patients with EDS have not been described. As studies in humans are difficult to perform and often underpowered in rare diseases, the availability of a validated mouse model that mimics the condition in humans is an important tool for a more robust and adequately powered research approach. The availability of a validated mouse model of classical EDS allows for correlations between respiratory system function, lung structure, and a specific collagen-related mutation, which is not possible in studies of human subjects. Therefore, we used a well-accepted mouse model for cEDS (Col5a1+/−) (Wenstrup et al., , 2006 to study the effects of Col5a1 haploinsufficiency on the lung parenchyma and perform accurate measurements of respiratory system mechanics and pulmonary function. | Mice Col5a1+/− mice were kindly provided by Dr. David Birk (University of South Florida, Tampa, FL) and their generation was described elsewhere (Wenstrup et al., 2006). The Col5a1 mice were maintained on a C57/BL6 pure background and were genotype by PCR protocol using the following primers: Col5a1_FORWARD 5'-CTGTAGAGGTTTGATCTTAGGGCG-3' and REVERSE-1 5'-CATCATAAACCATCTACTATCGGG-3' and REVERSE-2 5'-CTTCTATCGCCTTCTTGACGAGTT-3'. The product of the reaction is a 500 base pair (bp) for the wild-type (WT) and 700bp for heterozygotes (Col5a1+/−). Mice were housed in a pathogen-free facility, with unlimited access to water and standard rodent chow and an environmental 12hour light/dark cycle. All animal studies were performed under UAMS IACUC approved protocol (AUP # 3845) and in accordance with local, state, and U.S. Federal regulations. | FlexiVent respiratory measurements Both male and female Col5a1+/− mice (n = 10 and n = 11, respectively) and their WT littermate controls (n = 10 and n = 9, respectively) were studied between 3-4 months of age. Mice were anesthetized with a mixture of Ketamine (100 mg/kg I.P.) 
and Xylazine Hydrochloride (10 mg/kg I.P.), underwent surgical tracheostomy (as previously described in Dimori et al., 2020) and were intubated with a beveled metal cannula (18 gauge, 0.5" long). The cannula was tied to the trachea with a silk suture #3 and then connected to a Flexivent small animal ventilator (Scientific Respiratory Equipment (SCIREQ), Montreal, Quebec, Canada) which was configured with an F2 module without nebulizer and operated by FlexiWare software v.5.1. Animals were ventilated with a respiratory rate of 150 breaths/min, a tidal volume 10 mL/kg, and a PEEP 3 cm H 2 O. After assessment of adequate ventilation, the mice were paralyzed with succinylcholine (7.5 mg/kg I.P.) followed by 3 minutes of ventilation to allow the drug to reach its effect and stabilize the subject. The ventilator was programmed to perform the following sequence of perturbations: Deep inflation (6s at 30 cm H 2 O), Snap shot-150 (150 breath/min 2.5 Hz), Quick prime-3 (3 s, 1-20.5 Hz), NPFE (negative pressure-driven forced expiration to emulate spirometry), and Partial pressure-volume (PV) curve. This set of the maneuvers was repeated 3 time with one-minute interval between each set, and at the end, a terminal procedure to record the full pressure-volume curve was performed. All experiments were performed in a closed chest wall configuration. | Deep inflation opening pressure was increased to 30 cm H 2 O over 3 s before being held for three additional seconds. This maneuver is repeated multiple times to allow for standardization of volumes as well as recruitment of alveoli. The deep inflation maneuver was utilized to approximate inspiratory capacity. | Forced oscillation techniques respiratory mechanics were evaluated utilizing 1.2 s, 2.5 Hz single-frequency forced oscillation maneuver (Snapshot 150 perturbation). Applying a single compartment model to output data yields measurements of respiratory system resistance (R rs ), elastance (E rs ), and compliance (C rs ). Next, a broadband low-frequency-forced oscillation maneuver via volume-driven perturbation (Quick prime-3) delivered volumes over input frequencies ranging between 1 and 20.5 Hz. Using a constant phase model, the measured data are derived to calculate Newtonian resistance (R n ), tissue damping (G), elastance (H), and hysteresivity (G/H or Eta). | Pressure-volume curves partial range pressure-volume curves were obtained on each mouse. Beginning at FRC (matched to PEEP), pressure was delivered to the lung at 5 cm H 2 O steps over 1 second intervals to a total pressure of 30 cm H 2 O until deflating the lung in a similar fashion back to starting FRC. Both pressure and volume were measured with each incremental plateau, which were then plotted as pressurevolume (PV) curves. The Salazar-Knowles experimental function (Salazar & Knowles, 1964) was fit to the deflation limb of the PV curves to determine parameters including quasi-static compliance (C st ), an estimate of inspiratory capacity (A), the shape constant showing the curvature of the upper deflation (K), and the area enclosed by the pressure-volume loop (Area), which correlates with airspace atelectasis. | Negative pressure-forced expiration maneuvers A Negative pressure-forced expiration (NPFE) maneuver was utilized to emulate spirometry data. First, the subject's lungs were inflated to 30 cm H 2 O (Total Lung Capacity) before exposure to vacuum output circuit resulting in rapid deflation of the lungs. 
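The forced-oscillation quantities introduced in the preceding subsections are obtained by fitting standard models to the measured pressure, flow, and impedance signals. Because the model equations are not written out in the text, the forms below are the conventional single-compartment and constant-phase expressions (with I the inertance and f the oscillation frequency); they should be read as an assumption about what the FlexiWare software fits, not a quotation.

```latex
P(t) \;=\; R_{\mathrm{rs}}\,\dot{V}(t) \;+\; E_{\mathrm{rs}}\,V(t) \;+\; P_{0},
\qquad C_{\mathrm{rs}} = 1/E_{\mathrm{rs}},
```
```latex
Z_{\mathrm{rs}}(f) \;=\; R_{\mathrm{n}} \;+\; i\,2\pi f\,I \;+\; \frac{G - iH}{(2\pi f)^{\alpha}},
\qquad \alpha = \frac{2}{\pi}\arctan\!\left(\frac{H}{G}\right),
\qquad \eta = G/H .
```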
This perturbation then plotted flow-volume curves and resultant forced expiratory volume at 0.1 seconds (FEV 0.1 ), forced vital capacity (FVC) and the ratio between the two (FEV0.1/FVC). FEV 0.1 is utilized in mouse models as analogous to FEV 1 in human spirometry (Bonnardel et al., 2019). | Full range pressure-volume curves and lung volume measurements To obtain full range pressure-volume curves, TLC, RV and FRC, an automated series of perturbations was performed as described in Robichaud et al (Robichaud et al., 2017). In brief, recruitment maneuvers, followed by a partial pressure-volume loop to a pressure of 35 cm H 2 O were performed. Subjects were then ventilated with 100% oxygen for 5 min to flush nitrogen from the lungs. Following this, ventilation was stopped and the valves of the FlexiVent were closed. At this time, the pure oxygen within the lungs of the subject was absorbed, resulting in complete degassing of the lungs. This is a terminal procedure. The lungs were then inflated with a full-range quasi-static, ramp style, pressure-volume loop; whereby the lungs were inflated to a pressure of 35 cm H 2 O before being deflated to a pressure of −10 cm H 2 O. The TLC minus the IC, measured at 35 cm H 2 O, was used to calculate the FRC. The volume at −10 cm H 2 O, at the end of the full-range PV loop, was used to characterize the RV. | Lung histology and morphometry Lungs were harvested from a subset of mice (6 from each genotype and sex) following the Flexivent measurements. Mice were detached from the ventilator, a cervical dislocation was performed, and the cannula was then attached to a reservoir containing 10% buffered formalin. Lungs were fixed in situ at 25 cm H 2 O for 30 min, excised from the thoracic cavity and then fixed overnight in 10% buffered formalin. The next day, the fixed lung volume was measured (three measurements for each lung were taken and then averaged) utilizing Archimedes' principle of water displacement (Limjunyawong et al., 2015). The lungs were then sequentially dehydrated in a series of increased concentration of ethanol and then embedded in paraffin. 5-micron sections were obtained and stained with H&E for morphological analysis. Images from ten histological field were captured at 20× magnification using a Nikon Microscope (Eclipse E400) per each mouse. The ImageJ software plug-in grid analysis was utilized to overlay a grid over each image of size equal to 745.28 × 558.96 μm with 8 horizontal lines and 10 vertical lines. The ImageJ Software was utilized to count the intersection of each alveolar wall with the grid line, allowing for a linear intercept distance for each image to be recorded. The mean linear intercept was calculated using the following equation Lm = horizontally (N) × (L) + vertically (N) × (L)/m where N is the number of time the transverse was placed on the tissue, L = length of the transverses, and m the sum of all the intercepts from each field. The internal lung surface area was also calculated using the formula ILSA = 4 × (lung fix volume)/MLI. | Statistical analysis All measured parameters are presented as mean ± standard deviation and were analyzed with the Student's t-test using two-tailed distribution and two-sample equal variance as appropriate. P < 0.05 were considered statistically significant and reported as such. 
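The morphometric quantities described above reduce to two short formulas: the mean linear intercept is the total test-line length laid over a field divided by the number of alveolar-wall intercepts counted on those lines, and ISA = 4 × fixed lung volume / MLI. The sketch below assumes that reading of the equation in the text; the grid dimensions come from the Methods, while the intercept count and fixed lung volume are hypothetical.

```python
def mean_linear_intercept(n_h_lines, len_h_um, n_v_lines, len_v_um, n_intercepts):
    """MLI (um): total length of the overlaid grid lines divided by the number of
    alveolar-wall intersections counted on those lines for one field."""
    total_line_length = n_h_lines * len_h_um + n_v_lines * len_v_um
    return total_line_length / n_intercepts

def internal_surface_area_cm2(fixed_lung_volume_ml, mli_um):
    """ISA = 4 * V / MLI, with the volume in mL (= cm^3) and MLI converted to cm."""
    return 4.0 * fixed_lung_volume_ml / (mli_um * 1.0e-4)

# grid from the text: 8 horizontal lines of 745.28 um and 10 vertical lines of 558.96 um
mli = mean_linear_intercept(8, 745.28, 10, 558.96, n_intercepts=250)   # count is hypothetical
isa = internal_surface_area_cm2(fixed_lung_volume_ml=0.8, mli_um=mli)  # volume is hypothetical
print(f"MLI = {mli:.1f} um, ISA = {isa:.0f} cm^2")
```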
| Lung histology, mean linear intercept, and alveolar surface area Classical EDS is most often caused by heterozygous pathogenic variants in COL5A1, leading to haploinsufficiency and a quantitative reduction in α1(V), the alpha 1 chain of type V collagen (Ritelli et al., 2013;Schwarze et al., 2000;Symoens et al., 2012;Wenstrup et al., 2000). A mouse model with a Col5a1 null allele (Col5a1+/−) was generated by others and shown to reproduce multiple features of cEDS (Wenstrup et al., 2006). To determine whether Col5a1+/− mice have intrinsic changes in their lung tissue compared to littermate controls, lung histology and histomorphometry were performed. Male and female mice were analyzed between 3 and 4 months of age (12-16 weeks). Their body weights were measured and no differences between males or females of the two genotypes were noted. Histological lung sections showed significant alterations in the alveolar structure of Col5a1+/− mice compared to WT controls. In males and females (n = 6/sex/genotype), Col5a1+/− lungs showed markedly enlarged alveolar spaces with a reduction in alveolar septal density (Figure 1a). These differences were quantified via measurement of the mean linear intercept (MLI) which was greater in both Col5a1+/− male and female (n = 6, p < 0.0001 and p = 0.001, respectively) compared to control mice (Figure 1b). The lung internal surface area (ISA), calculated from the MLI data, was markedly reduced in Col5a1+/− males (p = 0.0005) and females (p = 0.024), as a result of the overall loss in alveolar density (Figure 1b). | Forced oscillation technique To evaluate if the observed lung morphological changes impact respiratory function, the forced oscillation technique (FOT) was used to detect possible differences in respiratory system mechanics between Col5a1+/− and WT mice. Respiratory system resistance (R rs ), compliance (C rs ), and elastance (E rs ) were measured via single-frequency forced oscillation (Snapshot 150). R rs was significantly reduced in Col5a1+/− males compared to WT mice, but not in females (n = 10, p = 0.014). C rs was significantly higher in Col5a1+/− males and females (p = 0.007 and p = 0.017, respectively) and, as expected, Ers was significantly reduced in Col5a1+/− males (p = 0.006) and females (p = 0.016) (Figure 2). A broadband frequency-forced oscillation approach (Quick Prime-3) was used to measure Newtonian airway resistance (R n ), tissue damping (G), tissue elasticity (H), and tissue hysteresivity (eta; G/H) and there were no statistically significant differences between the Col5a1+/− and WT mice ( Figure S1). | Partial pressure-volume curves Partial pressure-volume (PV) curves showed a clear upward shift in Col5a1+/− mice compared to controls (Figure 3a). At the maximal inspiratory pressure of 30 cm H 2 O, both male and female Col5a1+/− mice reached significantly higher volumes compared to WT controls (p = 0.0008 and p = 0.0002, respectively) ( Figure 3a). Quasi-static compliance (Cst) in Col5a1+/− mice was significantly higher when compared to WT in both males and females (p = 0.001 and p = 0.004, respectively) ( Figure 3b). The curvature parameter (K) of the deflation limb of the PV loop was significantly lower in Col5a1+/− males and females when compared to WT (p = 0.048 and p = 0.009, respectively). The derived estimate of inspiratory capacity (A) was markedly elevated in Col5a1+/− males (p = 0.0006) and females (p = 0.0001) when compared to WT. 
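The quasi-static parameters just discussed (Cst, A, K) come from fitting the deflation limb of the pressure-volume loop. The sketch below assumes the usual exponential form of the Salazar-Knowles relation, V(P) = A − B·e^(−K·P), and evaluates compliance as the local slope of the fitted curve at a mid-range pressure; the exact convention used by FlexiWare for Cst may differ, and the pressure-volume points are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def salazar_knowles(p, a, b, k):
    """Assumed Salazar-Knowles form for the deflation limb: V(P) = A - B*exp(-K*P)."""
    return a - b * np.exp(-k * p)

# hypothetical deflation-limb plateaus: pressure in cm H2O, volume in mL
pressure = np.array([30.0, 25.0, 20.0, 15.0, 10.0, 5.0, 0.0])
volume   = np.array([1.05, 1.00, 0.93, 0.82, 0.65, 0.42, 0.20])

(a_fit, b_fit, k_fit), _ = curve_fit(salazar_knowles, pressure, volume, p0=[1.1, 0.9, 0.1])

# quasi-static compliance taken here as the slope dV/dP of the fit at 5 cm H2O (an assumption)
c_st = b_fit * k_fit * np.exp(-k_fit * 5.0)
print(f"A = {a_fit:.2f} mL, K = {k_fit:.3f} /cmH2O, Cst(5 cmH2O) = {c_st:.3f} mL/cmH2O")
```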
The area enclosed by the inflation and deflation limb of the PV curve was larger in both male and female Col5a1+/− mice compared to WT (p = 0.015 and p = 0.018) (Figure 3b). | Full-range pressure-volume curves and lung volume measurements Full-range PV curves also showed a clear upward shift in male and female Col5a1+/− mice compared to wildtype controls (Figure 4a). In Col5a1+/− mice, the inspiratory volume was higher at the maximum pressure (35 cm H 2 O) in male and female mice (p = 0.003 and p = 0.006, F I G U R E 1 (a) Representative histological sections of WT and Col5a1+/− lung from 3-4 monthold male and female mice. (b) Lung histomorphomeric measurements to quantify the lung parenchyma defect using the mean linear intercept (MLI) method and the calculation of the internal lung surface area (ISA) (n = 6/sex/ genotype). Student's t-test F I G U R E 2 Measurements of the respiratory system resistance (R rs ), compliance (C rs ), and elastance (E rs ) derived from the Snapshot 150 maneuver using the forced oscillation technique. (n = 10 males, n = 9-11 females). Student's t-test F I G U R E 3 (a) Male and female partial pressure-volume (P-V) curves. (b) All parameters derived from the P-V curves, including static compliance (C st ), k, A (an estimate of inspiratory capacity), and area were significantly reduced in Col5a1+/− compared to control mice (n = 10 males, n = 9-11 females). Student's t-test F I G U R E 4 (a) Full-range pressurevolume curves and (b) Inspiratory capacity measurements derived from deep inflation maneuvers at 30 and 35 cm H 2 O of pressure. (n = 10 males, n = 9-11 females). Student's t-test respectively) compared to controls (Figure 4a). Derived values for inspiratory capacity at static pressures were significantly higher at 30 cm H 2 O (p = 0.001 and p = 0.0003 for males and females, respectively) and 35 cm H 2 O (p = 0.001 and p = 0.0002 for males and females, respectively) when compared to WT mice (Figure 4b). Lung volumes and derived variables were calculated from full-range PV curves. The total lung capacity (TLC) of Col5a1+/− was significantly larger compared to WT in males (p = 0.007) and females (p = 0.02) ( Figure 5). Similarly, vital capacity (VC) was significantly elevated in Col5a1+/− males (p = 0.008), as well as in females (p = 0.043), compared to WT controls. The functional residual capacity (FRC) trended to be larger in male Col5a1+/− compared to WT (p = 0.08) but not in female mice (p = 0.801) ( Figure 5). Compliance was significantly higher in Col5a1+/− male mice compared to WT controls (p = 0.026) but not in female Col5a1+/− mice ( Figure 5). Specific compliance (C s ) (compliance normalized to FRC) was not different in Col5a1+/− males (p = 0.141) or females (p = 0.421) when compared to WT. Residual volume (RV), RV/TLC, and airway compliance (C aw ) did not differ in Col5a1+/− compared to WT mice. V10_TLC, an index of the shape of the PV curve, was significantly diminished in Col5a1+/− males (p = 0.0005) with a similar trend in females (p = 0.053) ( Figure 5). | Negative pressure-driven forced expiration (NPFE) measurements NPFE maneuvers showed that the forced expiratory volume at 0.1 s (FEV 0.1) was significantly higher in Col5a1+/− males and females compared to WT (p = 0.005 and p = 0.001, respectively) ( Figure 6a). Forced vital capacity (FVC) was also higher in Col5a1+/− males (p = 0.004) and females (p = 0.001) compared to controls. 
There was no difference in the FEV 0.1/FVC ratio in either male or female Col5a1+/− mice compared to WT, indicating that the higher FEV 0.1 is proportional to the higher FVC in Col5a1+/− mice (Figure 6a). The expiratory limb of the flow volume curves reflected this relationship, with larger volumes throughout the expiratory limb of Col5a1+/− mice (Figure 6b). All of the parameters measured by the NPFE maneuver are presented in Table S1. | DISCUSSION Although a variety of respiratory manifestations occur in patients with EDS and general observations of lung abnormalities have been reported in a few animal models of EDS (Vroman et al., 2021), comprehensive characterization of respiratory system function has not been previously reported in an EDS mouse model. Using the Col5a1+/− mouse model that mimics classical EDS, we found parenchymal lung defects that were accompanied by altered respiratory mechanics compared to wild-type controls. The lung parenchyma showed alveolar simplification and enlarged air spaces reminiscent of emphysematouslike changes, while respiratory function studies showed increased respiratory system compliance and increased lung volumes compared to controls, without evidence of expiratory airflow obstruction. The ECM is important for a healthy lung and alterations in composition, quantity, and post-translational modification of ECM components underlie important adult respiratory diseases such as idiopathic pulmonary fibrosis (IPF) and chronic obstructive pulmonary disease (COPD) (Burgess et al., 2016). The impact of congenital mutations of collagens on lung development and the respiratory function is less well understood. Type I, III and IV collagens are the most abundant collagens in the lung (Burgstaller et al., 2017;Naba et al., 2012). Recent data, including from our group, demonstrated that type I collagen mutations causing osteogenesis imperfecta (OI), in addition to skeletal manifestations, also generate early morphological lung defects resulting in reduction of alveoli and increased air-spaces (Baglole et al., 2018;Dimori et al., 2020;Thiele et al., 2012). Changes in respiratory mechanics have also been described in at least one mouse model of OI (Dimori et al., 2020). Heterozygous pathogenic variants in type III collagen cause vEDS, a severe type of EDS characterized by tissue fragility affecting arteries and hollow organs. In vEDS, respiratory complications have been documented and often required clinical intervention (Chohan et al., 2021;Parducci, 2021). This is likely due to the deleterious effects of type III collagen mutations on blood vessels, airways, and lung parenchyma although investigations on the respiratory consequences of vEDS have not been conducted. Reduced quantity or structural irregularities in type V collagen, resulting from heterozygous pathogenic variants in COL5A1 or COL5A2, cause cEDS, one of the most common EDS types (Wenstrup et al., 2000). Here, we studied the effects of type V collagen haploinsufficiency on the murine lung. Type V collagen, typically expressed as a heterotrimer formed by two α1(V) chains and one α2(V) chain, is considered a minor fibrillar collagen and constitutes only about 2-5% of the total collagen tissue content (Malfait et al., 2020). Importantly though, type V collagen forms heterotypic fibrils with type I collagen, and plays a key role in the assembly of these fibrils and the regulation of their diameter (Birk, 2001;Wenstrup, Florer, Brunskill, et al., 2004). 
As a consequence, haploinsufficiency of COL5A1 results in defective fibril formation with reduction in fibril count as well as density of those fibrils that are formed (Wenstrup, Florer, Cole, et al., 2004). As such, the early role of type V collagen in the process of heterotypic collagen fibrils nucleation is expected to affect and reduce the deposition of more widespread distributed collagens in the lung, such as type I and III, and manifest in lung tissue changes similar to emphysema. Panacinar emphysema was described in at least one EDS patient in conjunction with reduced, irregular, and frayed in appearance dermal collagen fibrils (Cupo et al., 1981). These defects are likely to be early onset and to impair alveolar formation, similar to what was described in Adamts2-/-mice (a mouse model of dermatosparaxis dEDS) (Le Goff et al., 2006) and in mouse models of OI with alterations in type I collagen (Baglole et al., 2018;Baldridge et al., 2010;Dimori et al., 2020;Grafe et al., 2014). Interestingly, however, in cEDS patients respiratory manifestations seem less common than in OI patients. They are not part of the minor clinical criteria for this EDS type (Malfait et al., 2020) nor have been described in a recent cohort of 75 patients with genetically confirmed cEDS (Ritelli et al., 2020). This suggests that intrinsic respiratory and lung changes may exist in cEDS patients but their manifestations are mild or subclinical in otherwise healthy individuals. However, their F I G U R E 5 Parameters derived from the lung volume measurements. Total lung capacity (TLC), vital capacity (VC), functional residual capacity (FRC), residual volume (RV), RV/TLC, compliance (C), specific compliance (Cs), airway compliance (Caw), and volume at 10 cm H 2 O as percentage of TLC (V10_TLC). (n = 8-10). Student's t-test potential impact on respiratory infections or other respiratory disease processes or insults may be significant. In order to study the effects of altered collagen and their possible impact on lung function, we performed a comprehensive characterization of respiratory system function utilizing the force oscillation technique, pressure-volume (PV) measurements, and negative pressure-driven forced exhalation. FOT measurements showed significantly increased Crs and reduced Ers in male and female mice and decreased Rrs in male Col5a1+/− mice, consistent with a respiratory system that is more distensible and compliant likely due to disruption in the normal architecture of lung tissue. In female mice, there was a trend toward decreased Rrs that did not meet statistical significance. The reason for this dissimilarity between males and females is unclear. Multiple studies have described sex-related differences in COPD phenotypes, showing that women are significantly more likely to develop COPD than men (Foreman et al., 2011;Martinez et al., 2007;Pinkerton et al., 2015). Consistent findings, including in non-smokers with COPD, showed a predisposition to an airway phenotype in females and an emphysema phenotype in males (Hardin et al., 2016;Hong et al., 2016). Sex-related differences in pulmonary mechanics have also been reported in mouse models with chronic smoke exposure (Tam et al., 2016). Although respiratory mechanics parameters in cEDS mice were consistent with an emphysema-like pattern in both sexes, some of the measured differences appeared to be of larger magnitude in male vs. female mice, reminiscent of the sex-related differences observed for COPD. 
The significance of this is unclear as so little is known about EDS pulmonary phenotypes but represents an interesting finding that merits further studies in the EDS population. Broadband FOT (Quick Prime-3) was used to assess possible changes in tissue mechanics in WT vs. Col5a1+/− mice. Our results did not reveal statistically significant differences in any of these parameters. Given the emphysema-like changes observed in Col5a1+/− mice histological sections and from histomorphometric measurements, decreased R n and tissue elasticity (H) may be expected, as demonstrated in other mouse models of emphysema (Vanoirbeek et al., 2010). In the present study, tissue elasticity (H) trended lower in Col5a1+/− mice, as anticipated, but did not reach statistical significance. This may reflect that the emphysemalike lung changes in cEDS mice are less severe than in F I G U R E 6 (a) Selected measurements derived from the negative pressure-driven forced expiration (NPFE) maneuver and (b) flowvolume curves (n = 10 males, n = 9-11 females). Student's t-test elastase-induced emphysema, resulting in a milder phenotype. In a model of protease-induced emphysema in BALB-C mice, histological evidence of emphysema was present early in the time course when measures of G, H and airways resistance remained similar to controls (Anciaes et al., 2011). Our finding that R n was not different in WT vs. Col5a1+/− mice is consistent with our forced expiratory measurements (see below) and, again, may reflect that the emphysema-like changes are not sufficiently severe in this EDS genotype to affect expiratory airways resistance. Stepwise partial pressure-volume measurements showed that cEDS mice had greater lung volumes, especially at higher pressures. The relatively higher lung volume on the expiratory limb of the PV curves is greatest between 10 and 30 cm H 2 O, demonstrating the increased respiratory system distensibility of cEDS mice at higher pressures compared to WT. Consistent with this finding, quasi-static compliance was higher in both males and females. The area enclosed by the partial PV loops is significantly higher in cEDS mice compared to WT, suggesting decreased elastic recoil in the cEDS mice during lung deflation, especially at pressures higher than the tidal breathing range. The K parameter reflects the curvature of the expiratory limb of the PV curve independent of lung volume and represents concavity of the curve toward the pressure axis. In the cEDS mice, K was decreased compared to WT, likely due to the elevation of the PV curve at higher pressures, which leads to less concavity of the PV curve toward the pressure axis and further reflects the relatively lower elastic recoil at higher pressures in cEDS mice. The increased respiratory distensibility in the cEDS mice was especially evident in the "A" parameter of the partial PV curves (Figure 3b), which is an estimate of inspiratory capacity. The full range PV curves clearly replicate the relationship demonstrated in the partial loops, with elevated distention, volume, and flattened slope of the expiratory limb consistent with more easily distensible tissue. These findings are also consistent with the interplay of elastin and collagens in the extracellular matrix (ECM) with elastin bearing the majority of stress at low pressures and volumes and collagen fibers serving as a limitstep to prevent maximal distention of small airways and alveoli (Toshima et al., 2004). 
Lung volume measurements were consistent with elevated FVC measurements and PV loop findings. Total lung capacity (TLC) was higher in cEDS mice when compared to WT, consistent with a respiratory system that is more readily distended to larger volumes. The increase in TLC was due to a greater inspiratory capacity (IC) and vital capacity (VC) and not due to air trapping, as residual volume (RV), FRC and RV/TLC were not significantly elevated. FRC trended higher in male cEDS mice (p = 0.08), which likely explains why specific compliance (Cs), compliance normalized to FRC, was not significantly different in cEDS vs. WT mice. V10_TLC, which is the lung volume at 10 cm H 2 O pressure expressed as percent of TLC, was markedly reduced in male and borderline significantly reduced in female cEDS mice compared to WT. This, again, reflects the observation that in Col5a1+/− mice respiratory system distensibility was mainly affected at higher pressures. Thus, the volume at 10 cm H 2 O pressure showed only a minor increased while TLC (at 30 cm H 2 O pressure) showed a disproportionately greater increase ( Figure 4a). Compliance, as measured during the lung volume protocol, was significantly higher in male cEDS mice compared to WT but not in females. In this protocol, compliance (C) is derived from the slope of the linear portion of the expiratory pressure-volume curve between 3 and 7 cm H 2 O. It is evident from Figure 4a that the slope of this portion of the expiratory PV curve in this pressure range is greater in male cEDS mice compared to WT, while the slopes between 3 and 7 cm H 2 O appear to be very similar for female cEDS mice vs. WT. This pattern is consistent with the observation that the largest differences between cEDS mice and WT are most evident at pressures >10 cm H 2 O. Altered compliance in the male cEDS mice could reflect changes in lung, ribcage, or airway compliance. Airway compliance was not significantly different in cEDS mice vs. WT. Although we suspect that compliance differences are due to ECM alterations in the lungs, we cannot rule out a contribution from the ribcage, which would require open-chest measurements that are beyond the scope of this study. Negative pressure-driven forced exhalation is a maneuver designed to replicate spirometry, demonstrating the relationship between flow and volume during expiration. The results show a significant increase in forced expiratory volume at 0.1 s (FEV0.1), which closely mirrors forced expiratory volume (FEV1) in human subjects, in cEDS mice compared to WT. Forced vital capacity (FVC) was also increased in cEDS mice compared to controls with no difference in the ratio between FEV0.1 and FVC. This indicates that the higher FEV0.1 reflects the larger FVC in cEDS mice, as FEV0.1/FVC was not affected. Interestingly, a study of mostly classical EDS found a high prevalence of dyspnea and FVC, VC, and TLC >120% of predicted in a substantial proportion of subjects, consistent with our results (Morgan et al., 2007). However, the same study also found an elevated RV in >50% of mostly classical EDS patients, which does not agree with our findings. It is possible that this obstructive defect evolves over time and could potentially develop in older mice. | LIMITATIONS Although this is the first study to explore respiratory system abnormalities in a mouse model of cEDS, we cannot generalize the findings to other EDS types and it will be important to characterize the lung phenotype of other mouse models mimicking other forms of the disease. 
For instance, the vEDS type due to mutations in type III collagen appears to be the form of EDS that is most susceptible to pulmonary complications requiring clinical intervention and the study of mice with type III collagen mutations may provide important insights into their specific effects onto the respiratory system. Mutations in TNXB cause classical-like EDS and it has been shown that tenascin-XB is highly expressed during secondary alveolar septae formation (Foster et al., 2006). Therefore, the study of the lung phenotype in tenascin-XB mutant mice may provide further insights into the pathology of EDS as it affects the respiratory system. Importantly, potential differences in how collagen mutations impact the respiratory function may exist between rodents and humans and therefore data obtained from mouse models will need to be confirmed in patients. Another limitation of our study is that all the respiratory measurements were performed in closed-chest configuration. As such, the potential effects of Col5a1 haploinsufficiency on the thoracic cage expansion, rib elevation, and diaphragm movement may contribute to the observed pulmonary phenotype. Assessment of their contribution to the observed phenotype would require open-chest measurements which are beyond the scope of the present study. Lastly, we have performed lung histology and histomorphometry in paraffin-embedded sections and this process may lead to tissue shrinkage effects which may be genotype specific. Although we have processed samples of both genotypes identically, we cannot exclude potential effects of this issue onto our morphometric measurements. | CONCLUSION We have performed the first in-depth study of the respiratory function in a mouse model of classical EDS and identified significant morphological and functional changes consistent with a more distensible respiratory system, especially at higher pressures. These observations will need to be confirmed in human studies of cEDS patients and may warrant new clinical recommendations and guidelines for the care of these patients.
8,034.6
2022-04-01T00:00:00.000
[ "Medicine", "Biology" ]
Four-Reference State-Specific Brillouin-Wigner Coupled- Cluster Method: Study of the IBr Molecule We implemented the state-specific Brillouin–Wigner coupled-cluster method for the complete model space spanned by four reference configurations generated by two electrons in two active orbitals. We applied the method (together with the previously suggested a posteriori size-extensivity correction) to the calculation of spectroscopic constants of the IBr molecule, using averaged relativistic effective core potential. Introduction This paper is a continuation of our previous studies [1][2][3][4][5][6][7][8][9] on the development of a multi-reference coupled-cluster (MRCC) method that would be free of the problem of intruder states and that would be amenable to treatment of systems requiring more than two reference configurations.Avoidance of intruder states was achieved by using the Brillouin-Wigner (BW) resolvent, and simplicity of the method and feasibility of calculations were achieved by subjecting the BWCCSD method to a state-specific form [2][3][4].Although the method so developed was general in respect to the number of reference configurations, its implementation [4] in the ACES II program [10] was limited to two-reference cases, where HOMO and LUMO orbitals have different spatial symmetry and only two closed-shell configurations can contribute to the wave function.In order to be able to treat molecules with quasidegenerate HOMO and LUMO orbitals with the same spatial symmetry, we extended our original implementation of the BWCCSD method for open shell reference configurations.For merits of the statespecific BW theory we paid the price that the method was not size extensive any longer.We developed therefore an a posteriori correction [5,6] for eliminating the size-inconsistent terms in the amplitudes.Again, the correction in its original implementation [6] was confined to two-reference cases.In this paper it has been generalized and calculated for four reference configurations. The IBr molecule was selected for testing.Its HOMO and LUMO are of the same symmetry and the two singly excited configurations (determinants) HOMO → LUMO have to be included in the reference space.One of us (J.P.) was a coauthor of a paper [11] dealing with single-reference calculations on IBr.Results of these calculations are used in this paper for judging the performance of BWCCSD.Finally, IBr was selected for that reason that this molecule is also interesting experimentally and the MR BWCCSD calculations could be helpful for the theoretical interpretation. The IBr has been studied for several decades.The early measurements of spectra of IBr in visible [12] and ultraviolet [13] have found bands corresponding to transitions to higher electronic states.Further studies confirmed these findings [14] and gave refined values of rotational and vibrational constants from high resolution far infrared absorption spectra [15].The value of dissociation energy was obtained from isotopically selective velocity mapping measurements [16].The predissociation of IBr has been studied by resonance Raman spectroscopy [17], pump-probe experiments [18] and time dependent wave packet dynamics [19].Recently ion imaging methods were used to investigate the photolysis of IBr [20]. Early theoretical calculations of electronic structure on IBr [21] were performed at Hartree-Fock level.Subsequent ab initio studies with inclusion of relativistic effects provided values of spectroscopic [11,22,23] and electric [22,24,25] properties. 
Theory Since the derivation of the state-specific Brillouin-Wigner coupled-cluster theory has been presented for the general multireference case previously [3,5], and details of the implementation for closed-shell molecules have also been published [4], we give here only a brief review of the method and the relevant working equations. We use the notation that i, j and a, b indices represent occupied and unoccupied spin-orbitals, respectively, while p, q are generic spin-orbital indices.Moreover, we reserve the letters k and c for denoting the internal excitations (thus these indices are fixed for a given pair of reference configurations µ, ν), while i, j, a, b are used as general or summation indices.For many systems, including the IBr molecule, k is the HOMO and c is the LUMO orbital and the four (spin-unrestricted) reference configurations are generated by all possible occupations of these two orbitals by two electrons. The model function for the ground state is assumed in the form where M is the number of reference configurations.For the exact ground-state wave function Ψ 0 and exact energy 0 (2) and 0 0 0 The "effective" Hamiltonian H eff is defined as where P is the projection operator onto the model space spanned by M configurations Assuming the CCSD cluster operator T(µ) = T 1 (µ) + T 2 (µ) with respect to configuration µ as Fermi vacuum, the wave operator Ω 0 in the Hilbert space ansatz [26] is subject to the state-specific Brillouin-Wigner analogue of the Bloch equation where B 0 is the Brillouin-Wigner resolvent 0 0 Here 0 E % denotes the exact energy (lowest eigenvalue of H eff ), while E q are the unperturbed energies corresponding to Φ q similarly as in the Rayleigh-Schrödinger resolvent, and the notation q >M in Eq. (8) means that internal excitations (the ones relating Φ µ and Φ ν , µ,ν ≤ M) are excluded. The diagonal matrix elements of the effective Hamiltonian read where H µµ is Hamiltonian expectation value for reference configuration Φ µ and H N (µ) is the Hamiltonian normally ordered with respect to Fermi vacuum Φ µ .They can be computed by a modified routine for energy in single-reference CCSD [4].Coupling between the reference configurations is furnished by the off-diagonal elements These have to be treated separately for cases when Φ ν and Φ µ are single-or double-excitations with respect to each other.In the former case we can write either and the matrix elements are obtained as r.h.s. of the T 1 amplitude update equations.In the latter and the matrix element is obtained from the T 2 amplitude equations as described in [4].Presently, our implementation does not support reference configurations which would be mutually more than biexcited. The final equations for T 1 amplitudes read { } where the r.h.s. is the same as in the case of single reference CCSD, except that the amplitudes of internal excitations are set to zero.In the T 2 case we have where P ij is antisymmetrization operator acting on the i, j indices.The r.h.s.term in curly brackets can be identified with the r.h.s. of the single reference CCSD equations computed with internal amplitudes set to zero.As described in [6], the amplitudes obtained by this method need to be corrected for retaining sizeextensivity.The idea is to make BWCC a posteriori close to its Rayleigh-Schrödinger (RS) version. Since our BW variant of the Bloch equation ( 7) contains only a denominator of the type 0 ( ) where The first two terms on the r.h.s. of Eq. 
(13) are size-extensive because they may be viewed as the analogue of the usual RS form of the Bloch equation. However, the "additional" third term on the r.h.s. of Eq. (13) gives, upon iterating the BWCCSD equations, size-inextensive terms. Computationally, it is simplest to identify the size-inextensive terms in the amplitude update equations and eliminate them in an additional a posteriori iteration. For the T_1 amplitudes, the only term responsible for size inextensivity is the Brillouin-Wigner shift term on the l.h.s. of Eq. (11). For the T_2 amplitudes it is the (E~_0 - H_mumu) t_ij^ab(mu) term on the l.h.s. of Eq. (12) and the last term on the r.h.s. of Eq. (12). From the corrected amplitudes we construct a new H_eff matrix, and by its diagonalization we obtain the final energy. Note that the size-inextensive terms cannot be eliminated during the iterations, since the only coupling between amplitudes of different reference configurations, provided by E~_0, would be lost.

The BWCCSD calculation proceeds iteratively (a schematic sketch of the loop is given below). After a standard initial guess of amplitudes, the H_eff matrix elements are calculated and H_eff is diagonalized. The lowest eigenvalue is then used as E~_0, and new T_1 and T_2 amplitudes for all reference configurations are computed according to Eqs. (11) and (12). This procedure is repeated, employing DIIS convergence acceleration, until the amplitudes converge. Subsequently, the a posteriori size-extensivity correction is performed in an additional iteration.

Note that the absence of explicit coupling of amplitudes from different reference sets makes the method simple and computationally feasible for larger molecules. The computational demands per iteration of the M-reference BWCCSD method are thus only marginally higher than M times the demands of the single-reference CCSD procedure.
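To make the flow of the algorithm explicit, the iterative procedure just described can be summarized schematically as follows. This is an illustrative skeleton only, not the authors' ACES II implementation: the functions h_diag, cc_rhs, bw_update and offdiag are hypothetical placeholders standing for the modified single-reference CCSD routines, and DIIS and the a posteriori correction step are only indicated by comments.

```python
import numpy as np

def bwccsd_ground_state(n_ref, h_diag, cc_rhs, bw_update, offdiag,
                        max_iter=200, tol=1e-8):
    """Schematic driver for the M-reference state-specific BWCCSD iteration.

    Caller-supplied placeholder functions (all hypothetical):
      h_diag(mu)              -> <Phi_mu|H|Phi_mu>
      cc_rhs(mu, t_mu)        -> (correlation energy, residual) from the modified
                                 single-reference CCSD routine for reference mu,
                                 with internal amplitudes kept at zero
                                 (t_mu = None means a zero amplitude set)
      bw_update(mu, res, e0)  -> new T1/T2 amplitudes from the residual and the
                                 Brillouin-Wigner-shifted denominators
      offdiag(nu, mu, t_mu)   -> effective-Hamiltonian element
                                 <Phi_nu| H exp[T(mu)] |Phi_mu>
    """
    t = [None] * n_ref                                  # one amplitude set per reference
    e0 = min(h_diag(mu) for mu in range(n_ref))         # starting guess for E~0

    for _ in range(max_iter):
        # assemble the (generally non-symmetric) effective Hamiltonian
        heff = np.zeros((n_ref, n_ref))
        for mu in range(n_ref):
            e_corr, _ = cc_rhs(mu, t[mu])
            heff[mu, mu] = h_diag(mu) + e_corr
            for nu in range(n_ref):
                if nu != mu:
                    heff[nu, mu] = offdiag(nu, mu, t[mu])
        e_new = np.linalg.eigvals(heff).real.min()      # lowest eigenvalue -> current E~0

        # uncoupled amplitude updates; E~0 is the only link between the references
        t = [bw_update(mu, cc_rhs(mu, t[mu])[1], e_new) for mu in range(n_ref)]
        # (DIIS extrapolation of the amplitude vectors would be applied here)

        if abs(e_new - e0) < tol:
            # a single additional iteration with the size-inextensive terms removed
            # (a posteriori correction) would follow, after which H_eff is rebuilt
            # and rediagonalized to give the final energy
            return e_new, t
        e0 = e_new

    raise RuntimeError("BWCCSD iteration did not converge")
```

The sketch makes the key structural point visible: the amplitude sets of the individual references are updated independently and communicate only through the current value of E~_0 obtained from the diagonalization of H_eff.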
Computational

In molecules containing heavy atoms such as iodine or bromine, relativistic effects, and spin-orbit (LS) interactions in particular, cannot be neglected, and in some cases even a first-order perturbative treatment is not accurate enough. Moreover, in order to obtain a potential energy (PE) curve that is accurate over the whole range of internuclear distances, a sophisticated inclusion of both static and dynamical electron correlation is absolutely essential. The ground state of the IBr molecule is 1Sigma+. Since it has zero angular and spin momentum, it is not subject to spin-orbit splitting; however, scalar relativistic effects play an important role. Therefore, in our calculations the inner-shell electrons have been treated using recent averaged relativistic effective core potentials (AREP) [27]. We employed two valence basis sets: the basis set supplied together with the AREP, of size (6s6p1d/3s3p1d) for both I and Br, labeled A, and a basis set augmented by diffuse functions, of size (8s8p3d/5s5p3d) for both atoms, labeled B, which is described in [11] and serves mainly for comparison with that paper. For each internuclear geometry, we first performed a CASSCF calculation with two electrons in the HOMO and LUMO active orbitals, i.e. the same active space as in BWCCSD. The canonical CASSCF orbitals were used in the subsequent BWCCSD treatment based on the four spin-unrestricted reference configurations possible in this active space. Since the BWCCSD calculation is performed in a spin-unrestricted form, three eigenvalues of H_eff correspond to singlet states and one to a triplet. It has been checked that no spin contamination occurs in the numerical procedure. CASSCF was chosen for the description of the reference state because it is size extensive and correctly describes the static correlation.

For all coupled-cluster calculations reported in this paper, we used our implementation of the BWCCSD method [4] and of the a posteriori correction [6] within the ACES II program [10]. The MR-CISD results reported in this work were obtained using the GAMESS-UK program [19].

The spectroscopic constants were obtained from a polynomial expansion of the energy by means of the Dunham analysis (a schematic example of such an extraction is sketched below). D_e values were obtained as the difference between the minimum and the dissociation limit of the potential energy curves. Rovibrational levels (cf. Table 2) have been calculated with the program LEVEL [17].

Results and Discussion

The MR-BWCCSD potential energy curves of IBr computed in basis sets A and B are shown in Figs. 1 and 2, respectively. CCSD curves are shown for comparison, and it can be seen that the single-reference method fails to reach the correct dissociation limit, indicated by the dashed line. This clearly shows the need for a multireference description at larger interatomic distances, while near the equilibrium the single-reference description is quite reliable.

The calculated spectroscopic constants are compared with sample results of previous works [11,22,23] in Table 1. We also computed a set of rovibrational levels, given in Table 2. First we tested the size extensivity and found that the size-consistency error was small: only 0.10 kcal/mol with basis set A and 0.11 kcal/mol with basis set B. This is because the CASSCF orbitals are localized at large interatomic distances, which is beneficial for size consistency, thus making BWCCSD nearly size consistent even without the a posteriori correction.
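As noted in the Computational section, the spectroscopic constants were extracted from a polynomial (Dunham-type) expansion of the computed potential energy curve, with the rovibrational levels obtained from the LEVEL program. The following is a minimal illustration of such an extraction for r_e and the harmonic frequency omega_e only; it is not the authors' actual procedure, and it assumes energies in hartree, bond lengths in bohr and standard atomic masses of 127I and 79Br.

```python
import numpy as np

HARTREE_J = 4.359744e-18    # J per hartree
BOHR_M = 5.291772e-11       # m per bohr
AMU_KG = 1.660539e-27       # kg per atomic mass unit
C_CM = 2.99792458e10        # speed of light in cm s^-1

def spectroscopic_constants(r_bohr, e_hartree,
                            mass1_amu=126.90447, mass2_amu=78.91834):
    """Return (r_e in bohr, omega_e in cm^-1) from a computed PE curve.

    Fits a quartic polynomial to the points bracketing the minimum; assumes the
    grid is dense enough around the minimum (roughly nine points or more nearby).
    """
    r_bohr = np.asarray(r_bohr, float)
    e_hartree = np.asarray(e_hartree, float)
    i_min = int(np.argmin(e_hartree))
    sel = slice(max(i_min - 4, 0), i_min + 5)
    poly = np.poly1d(np.polyfit(r_bohr[sel], e_hartree[sel], 4))

    # true minimum of the fitted polynomial
    crit = np.roots(poly.deriv())
    crit = crit[np.isreal(crit)].real
    r_e = crit[np.argmin(poly(crit))]

    # curvature at r_e -> force constant -> harmonic wavenumber
    k_si = poly.deriv(2)(r_e) * HARTREE_J / BOHR_M**2       # J m^-2
    mu = mass1_amu * mass2_amu / (mass1_amu + mass2_amu) * AMU_KG
    omega_e = np.sqrt(k_si / mu) / (2.0 * np.pi * C_CM)     # cm^-1
    return r_e, omega_e
```

Anharmonicity constants such as omega_e x_e follow from higher-order coefficients of the same expansion; they are omitted here for brevity.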
On comparing the calculated spectroscopic constants with those obtained from single-reference calculations [11], using the same basis set B and the same pseudopotential, it can be seen that the dissociation energy D_e has decreased from 47.7 kcal/mol (CCSD) to 38.0 kcal/mol (MR BWCCSD), owing to the multireference description at large internuclear separation. The equilibrium bond length r_e has increased by approximately 0.01 Å, while the vibrational frequency omega_e has decreased slightly and the anharmonicity omega_e x_e has remained almost the same. The values are also very similar to those obtained by the MR-CISD method (cf. Table 1). The lack of improvement of the multi-reference method for r_e, omega_e and omega_e x_e with respect to single-reference CCSD can be explained by the low contribution of the excited references near the equilibrium geometry, and hence the effective reduction of BWCCSD to single-reference CCSD, since the values of r_e, omega_e and omega_e x_e are determined by the shape of the PE curve near equilibrium. Additional inclusion of dynamical correlation by means of connected triples, when implemented, should result in an improvement of the obtained spectroscopic constants. As our experience with MR-BWCCSD shows, our previous results are very close to those of the state-universal MR-CCSD method [4,8,31,32]; we believe that the case of IBr will be similar. Unfortunately, we did not find a state-universal MR-CCSD result for comparison.

The calculated values of r_e, omega_e and omega_e x_e show only a very small difference between the two basis sets used. This means that adding extra diffuse functions to the basis set does not have a large effect on the ground-state wave function in the equilibrium interatomic distance range. However, the D_e value, which is slightly more sensitive, changes by approximately 1.5 kcal mol-1 towards the experimental value.

Conclusions

The BWCCSD method developed previously has been shown in this paper to be computationally feasible for a four-reference problem. The calculations performed for the IBr molecule yielded a smooth potential curve over the whole range of interatomic distances considered (4 to 80 a.u.). The potential curve has the correct dissociation limit, and the spectroscopic constants compare well with the experimental and best available theoretical data. The results obtained for IBr also provide further numerical evidence that the size-extensivity error of the MR BWCCSD method is small.

Figure 1. Potential energy curves of the IBr molecule calculated by the CCSD and MR-BWCCSD methods in basis set A (see text), shown by solid lines. The energy scale is relative to the sum of energies of the isolated atoms, which is indicated by the dashed line. Notice the incorrect dissociation limit of single-reference CCSD.

Table 1. Spectroscopic constants of IBr. Footnotes: b Using a relativistic effective core potential. c The same reference configurations as in MR-BWCCSD; without size-extensivity correction. d The same reference configurations as in MR-BWCCSD; with Davidson correction. e Based on RHF molecular orbitals. f Based on CASSCF molecular orbitals.

Table 2. Rovibrational energy levels of IBr obtained from the potential curve calculated by MR BWCCSD in basis B (cm-1).
3,381
2001-12-19T00:00:00.000
[ "Physics", "Chemistry" ]
Hierarchically Porous Carbon Nanosheets from One-Step Carbonization of Zinc Gluconate for High-Performance Supercapacitors Supercapacitors, with high energy density, rapid charge-discharge capabilities, and long cycling ability, have gained favor among many researchers. However, the universality of high-performance carbon-based electrodes is often constrained by their complex fabrication methods. In this study, the common industrial materials zinc gluconate and ammonium chloride are uniformly mixed and subjected to a one-step carbonization strategy to prepare three-dimensional hierarchical porous carbon materials with high specific surface area and suitable nitrogen doping. The results show that a specific capacitance of 221 F g-1 is achieved at a current density of 1 A g-1. The assembled symmetrical supercapacitor achieves a high energy density of 17 Wh kg-1, and after 50,000 cycles at a current density of 50 A g-1, it retains 82% of its initial capacitance. Moreover, the operating voltage window of the symmetrical device can be easily expanded to 2.5 V when using Et4NBF4 as the electrolyte, resulting in a maximum energy density of up to 153 Wh kg-1, and retaining 85.03% of the initial specific capacitance after 10,000 cycles. This method, using common industrial materials as raw materials, provides ideas for the simple preparation of high-performance carbon materials and also provides a promising route for the large-scale production of highly porous carbons.

Introduction

Driven by environmental pollution and the depletion of fossil resources, people are increasingly inclined to use clean electricity in place of oil-based energy [1,2]. Increasingly stringent demands are being placed on energy storage devices, and battery-based energy storage systems have been fairly well developed [3-7]. However, battery-based energy storage systems cannot satisfy all application needs, owing to their limited power density and poor cycle stability. Supercapacitors are a new type of energy storage device with rapid charge-discharge capabilities, featuring higher power density, a broader operating temperature range, and an exceptionally long lifespan [8,9]. Supercapacitors bridge the application gap between traditional capacitors and batteries, making them favored by many researchers; they have now become one of the hotspots in the development of new energy storage devices. At the current stage of research, the development of high-performance electrode materials and the optimization of electrode fabrication strategies are considered to be among the main directions for the future development of electrochemical energy storage technology.

Electrode materials for supercapacitors can generally be divided into metal oxides, conductive polymers, and porous carbon materials [10,11]. Carbon materials have attracted a great deal of attention owing to their ultra-long cycle stability and simple preparation methods [12]. Gluconates are a common class of industrial products that do not produce environmentally polluting gases or other impurities during pyrolysis. Through simple heat treatment, they can be transformed into porous carbon materials with a high specific surface area, which makes them ideal carbon precursors. Fuertes et al.
[13] prepared two-dimensional carbon nanosheets from sodium gluconate through a one-step heat treatment. These carbon nanosheets possess a high specific surface area of 1390 m2 g-1. Notably, the material exhibits a specific capacitance of 140 F g-1 at a current density of 1 A g-1. Li et al. [14] utilized low-melting-point iron gluconate as a carbon precursor and combined it with KOH activation to synthesize hierarchical porous carbon nanosheets. These carbon nanosheets achieved impressive specific capacitances of 226 F g-1 and 168 F g-1 at current densities of 1 A g-1 and 50 A g-1, respectively. Additionally, precursors such as magnesium gluconate [15] and cobalt gluconate [16] have been used for the preparation of supercapacitor electrodes. However, the materials obtained after high-temperature pyrolysis of the above-mentioned gluconates (such as sodium gluconate, cobalt gluconate, iron gluconate, etc.) contain large amounts of metal oxide. Post-processing therefore requires tedious steps, such as pickling and washing, which greatly increase the preparation cost. In short, a simpler preparation method is urgently needed. It is worth noting that the pyrolysis of zinc gluconate is similar to that of zinc-containing metal-organic frameworks: the organic components are transformed into carbon during pyrolysis, while the non-corrosive zinc evaporates at high temperature, leaving micropores on the carbon surface, a feature the aforementioned gluconates cannot provide.

Although zinc gluconate powder can achieve a higher specific surface area during pyrolysis due to its self-activation effect, its electrochemical performance remains poor, which is attributed to its single-element composition. Nitrogen doping is a common strategy to improve the electrochemical performance of carbon materials by introducing heteroatoms into the carbon framework [17,18]. In this study, an efficient and simple preparation method is adopted to synthesize N-doped porous carbon (ZnPCN-1) through one-step pyrolysis of uniformly mixed zinc gluconate and NH4Cl. ZnPCN-1 possesses a high specific surface area of 1162 m2 g-1 and a suitable nitrogen doping content of 4.57 at%. Consequently, ZnPCN-1 exhibits excellent electrochemical performance. The results show that a specific capacitance of 221 F g-1 is achieved at a current density of 1 A g-1. The assembled symmetrical supercapacitor achieves a high energy density of 17 Wh kg-1, and after 50,000 cycles at a current density of 50 A g-1, it retains 82% of its initial capacitance. Moreover, the operating voltage window of the symmetrical device can be easily expanded to 2.5 V when using Et4NBF4 as the electrolyte, resulting in a maximum energy density of up to 153 Wh kg-1, and it retains 85% of the initial specific capacitance after 5000 cycles.
Results and Discussion

The preparation process of the ZnPCNs is illustrated in Figure 1a. Zinc gluconate and ammonium chloride are mixed uniformly through a simple dissolution method. The dried sample is then placed into a tube furnace for high-temperature pyrolysis. The organic part of the zinc gluconate undergoes thermal decomposition, resulting in the generation of numerous micropores. Additionally, the zinc evaporates directly at high temperature, similarly to zinc-based MOFs; as a result, the final product does not require acid washing. It should be noted that ammonium chloride generates a significant amount of NH3 and HCl gas at high temperature, further activating the carbon material and introducing a certain amount of nitrogen heteroatoms into the carbon framework, which play a crucial role in enhancing the electrochemical performance of the samples. SEM is a useful technique for observing the surface morphology of materials [19-21]. The SEM morphology characterization of ZnPC, ZnPCN-0.5, ZnPCN-1, and ZnPCN-2 is shown in Figure 1b-e. It can be seen that all the samples show a sheet structure of different sizes, similar to other reports. In addition, the addition of ammonium chloride causes the carbon nanosheets to develop a large number of channels of nearly 100 nm, which undoubtedly promotes the transport of electrolyte ions. We speculate that ammonium chloride acts as a foaming agent during the pyrolysis process. With a gradual increase in the NH4Cl proportion, the amount of gas generated during the activation process increases, leading to richer surface wrinkling and pore structure in the carbon, while the original two-dimensional layered structure is retained. Figure 1f demonstrates that the carbon material exhibits a thin two-dimensional structure, consistent with the SEM results. It should be noted that a large number of microporous and mesoporous structures are distributed on the surface of the carbon material, suggesting that the carbon material should possess favorable electrochemical performance (Figure 1g). In general, the thin and porous two-dimensional nanosheet structure can effectively shorten the transport distance of electrons and ions, and the large number of micropores generated during pyrolysis provides abundant active sites for energy storage, providing a structural basis for the electrochemical performance of the material.
Figure 2a displays the XRD patterns of samples ZnPCN-0.5, ZnPCN-1, and ZnPCN-2. It is evident that all samples exhibit distinct broad peaks at around 23-25 degrees and approximately 44 degrees, corresponding to the (002) and (100) crystal planes of carbon, consistent with the typical features of amorphous carbon [22,23]. Two peaks are observed in the Raman spectra of all samples, corresponding to the D band and the G band (Figure 2b) [24,25]. The ID/IG ratios for ZnPCN-0.5, ZnPCN-1, and ZnPCN-2 are 1.58, 1.75, and 1.57, respectively. It should be noted that ZnPCN-1 has the highest ID/IG value, possibly due to the formation of a porous carbon material with an enriched defect structure through NH4Cl activation and nitrogen doping in an appropriate proportion. This is expected to impart improved electrochemical performance to the material.
Figure 2c shows the nitrogen adsorption-desorption isotherms of the samples. It is evident that the curves of all samples exhibit type-I adsorption isotherms [26]. Typically, the isotherms rise sharply at low pressure owing to the presence of numerous micropores in the materials [27,28]. Additionally, a small hysteresis loop is observed at intermediate pressures, indicating a broad mesopore distribution in the structure. Overall, abundant micropores and mesopores exist in the porous carbon after NH4Cl activation. The specific surface areas of ZnPCN-0.5, ZnPCN-1, and ZnPCN-2 are 1135, 1162, and 1146 m2 g-1, respectively; ZnPCN-1 exhibits the largest specific surface area. Excessive NH4Cl may cause slight disruption of the pore structure of the carbon, resulting in a decrease in specific surface area. As shown in Figure 2d, the samples exhibit similar pore size distributions, possibly because the addition of ammonium chloride does not significantly affect the micropore structure. Furthermore, the average pore diameters of ZnPCN-0.5, ZnPCN-1, and ZnPCN-2 are 3.32, 3.33, and 3.38 nm, respectively, and the corresponding pore volumes are 0.47, 0.49, and 0.48 cm3 g-1. Interestingly, the micropores of the carbon material provide adsorption sites for ions, while the mesopores serve as electrolyte transport channels. ZnPCN-1 possesses a larger specific surface area and a suitable distribution of pore sizes. This pore structure provides favorable sites and channels for the storage and transport of electrolyte ions.
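The specific surface areas quoted above follow from BET analysis of these isotherms (see the Characterizations section). As a generic illustration only (the reported values were obtained with the instrument software, and the exact pressure window used there is an assumption), the linearized BET equation can be fitted over relative pressures of roughly 0.05-0.30 to obtain the monolayer capacity and hence the surface area:

```python
import numpy as np

N_A = 6.02214076e23        # molecules per mol
SIGMA_N2 = 0.162e-18       # m^2, cross-sectional area of an adsorbed N2 molecule
V_STP = 22414.0            # cm^3 (STP) per mol of gas

def bet_surface_area(p_rel, v_ads):
    """Estimate the BET specific surface area in m^2 g^-1.

    p_rel : relative pressures p/p0 (only points in ~0.05-0.30 are used)
    v_ads : adsorbed N2 volume in cm^3 (STP) per gram at those pressures
    """
    p_rel = np.asarray(p_rel, float)
    v_ads = np.asarray(v_ads, float)
    mask = (p_rel >= 0.05) & (p_rel <= 0.30)
    x = p_rel[mask]
    y = x / (v_ads[mask] * (1.0 - x))          # linearized BET ordinate
    slope, intercept = np.polyfit(x, y, 1)
    v_m = 1.0 / (slope + intercept)            # monolayer capacity, cm^3 STP g^-1
    return v_m * N_A * SIGMA_N2 / V_STP        # m^2 g^-1
```

The conversion in the final line simply counts the number of monolayer molecules per gram and multiplies by the area each molecule covers.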
X-ray photoelectron spectroscopy (XPS) is a quantitative spectroscopic technique used to determine the elemental composition, empirical formula, and chemical and electronic states of the elements contained in a material [29-31]. Figure 2e presents the XPS spectra of all the samples. It is evident that all samples contain C, O, and N, indicating successful nitrogen incorporation into the carbon framework. The nitrogen contents of ZnPCN-0.5, ZnPCN-1, and ZnPCN-2 are 2.95, 4.57, and 4.40 at%, respectively (Table S1). ZnPCN-1 exhibits the highest nitrogen content, possibly because excessive ammonium chloride (as in ZnPCN-2) hinders effective nitrogen doping. Figure 2f clearly shows the bonding configurations of the different nitrogen-containing functional groups. By mathematical fitting, the N 1s spectrum can be divided into pyridinic nitrogen (N-6), pyrrolic nitrogen (N-5), graphitic nitrogen (N-Q), and oxidized nitrogen, with corresponding binding energies of 398.3, 400, 401.5, and 405.6 eV, respectively (Figure 2f) [32]. N-6 and N-5 can enhance the wettability and specific capacitance of porous carbon, while graphitic nitrogen can improve the conductivity [33]. Furthermore, the C 1s spectrum can be divided into C=C/C-C, C-O/C-N, C=O, and O-C=O components, with corresponding binding energies of 284.6, 285.3, 286.3, and 287.2 eV, respectively (Figure S1a) [34]. The appearance of various carbon-oxygen and carbon-nitrogen bonds means that more defects are formed on the surface of the carbon, which is crucial for high-performance electrochemical energy storage and good wettability. As shown in Figure S1b, the O 1s spectrum can be divided into C=O, O-C-O, and O-H components, with corresponding binding energies of 531.8, 532.7, and 533.7 eV, respectively. It is worth mentioning that oxygen doping means the material is more prone to form defects and exhibits better electrolyte wettability [35]. In summary, ZnPCN-1 simultaneously possesses relatively high nitrogen and oxygen doping, providing a substantial pseudo-capacitance contribution and significantly enhancing its electrochemical performance.
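The N 1s deconvolution mentioned above is typically performed by least-squares fitting of a few peaked components at the quoted binding energies. The snippet below is a rough, generic sketch of such a fit, not the authors' procedure: it assumes a background-subtracted spectrum, uses simple Gaussian line shapes with a shared width, and fixes the component centres at the values given in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# nominal N 1s component centres (eV) quoted in the text: N-6, N-5, N-Q, oxidized N
CENTRES = (398.3, 400.0, 401.5, 405.6)

def n1s_model(be, h1, h2, h3, h4, width):
    """Sum of four Gaussian components with fixed centres and a common width."""
    out = np.zeros_like(be, dtype=float)
    for h, c in zip((h1, h2, h3, h4), CENTRES):
        out += h * np.exp(-0.5 * ((be - c) / width) ** 2)
    return out

def nitrogen_speciation(binding_energy, intensity):
    """Fit a background-subtracted N 1s spectrum and return the relative share of
    each nitrogen species (with a common width, relative areas equal relative heights)."""
    p0 = [float(np.max(intensity))] * 4 + [0.8]          # rough initial guess
    popt, _ = curve_fit(n1s_model, binding_energy, intensity, p0=p0,
                        bounds=(0.0, np.inf))
    heights = np.array(popt[:4])
    return heights / heights.sum()
```

In practice, Voigt or Gaussian-Lorentzian line shapes and a Shirley background are usually preferred; the sketch only illustrates the bookkeeping.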
A three-electrode system was constructed using a 6 M KOH electrolyte to test the electrochemical performance of all samples. Figure 3a displays the cyclic voltammetry (CV) curves of all samples at a scan rate of 100 mV s-1. Clearly, ZnPCN-1 exhibits the largest enclosed area, indicating its superior electrochemical performance. Furthermore, ZnPCN-1 demonstrates the longest charge-discharge time (Figure 3b), suggesting the highest specific capacitance, in agreement with the CV results. The CV curves of ZnPCN-1 at various scan rates are shown in Figure 3c. All CV curves exhibit rectangular shapes, suggesting favorable double-layer behavior. It is worth noting that, even at a high scan rate of 2000 mV s-1, the CV curve remains stable, indicating the remarkable rate capability of ZnPCN-1. Moreover, the galvanostatic charge-discharge (GCD) curves manifest symmetric isosceles triangles, reflecting favorable double-layer behavior and Coulombic efficiency (Figure 3d). The specific capacitances of ZnPCN-0.5, ZnPCN-1, ZnPCN-2, and ZnPC at a current density of 1 A g-1 are 178, 221, 188, and 80 F g-1, respectively. The variations of specific capacitance with current density for the different samples are illustrated in Figure 3e. Undoubtedly, ZnPCN-1 exhibits the best specific capacitance and rate capability. The specific capacitance values of ZnPCN-1 are 221, 198, 191, 188, 185, 179, and 174 F g-1 at current densities of 1, 2, 3, 4, 5, 10, and 20 A g-1, respectively. When compared with other previously reported carbon-derived electrodes (C9-250k-12 (197 F g-1) [36], MA6 (182 F g-1) [37], SAK (129 F g-1) [38], SSP-900 (199 F g-1) [39], Gna-CA (140 F g-1) [13]) (Table S2), the electrodes used in this study possess comparable or much better performance, suggesting potential applications of ZnPCN-1 in practical fields in the future.
As shown in Figure 3f, the curves of all samples in the low-frequency region approach vertical lines, indicating good diffusion ability, favorable electron conductivity, and low internal resistance. A suitable equivalent circuit diagram is shown in Figure 3f, and the component values of the system were calculated according to this equivalent circuit. It should be pointed out that the RCT and RS values of ZnPCN-1 are 1.21 and 0.63 ohms, respectively, which are lower than those of the other samples, indicating that it has the best electron-conduction and ion-transport properties. Also worth mentioning is the remarkable cyclic stability of ZnPCN-1, which retains 96% of its initial capacitance after 50,000 cycles (Figure S2). Overall, the exceptional electrochemical performance of ZnPCN-1 is attributed to the suitable nitrogen doping and the pore-forming effect of NH4Cl on the zinc gluconate-derived carbon, which introduce more active sites and significantly enhance the electrochemical performance of the carbon material.

The reaction kinetics of the charge-storage process can be evaluated from the CV curves. The total charge storage can be divided into capacitive effects and diffusion-controlled charge storage, which can be expressed by Equations (S1) and (S2) [40]. In addition, the b value can be used as an important parameter to assess the kinetics of the electrochemical reaction of the material. In general, a b value close to 0.5 indicates that the energy storage process is mainly controlled by diffusion, whereas a b value close to 1 indicates that the process is mainly controlled by capacitive effects. After calculation and fitting, the b value of the ZnPCN-1 electrode is 0.82, which means that both mechanisms contribute, with diffusion dominating at low scan rates. Moreover, the capacitance contributions from the diffusion-controlled intercalation process and the surface capacitive effects can be quantitatively distinguished by Equations (S3) and (S4) [40]. As shown in Figure S3, the diffusion-controlled reaction contributes 85% of the capacitance at a scan rate of 20 mV s-1, and it still contributes 47% of the capacitance at a scan rate of 500 mV s-1. This suggests that the thin and porous carbon nanosheets form a unique channel network that is conducive to the rapid transport of electrolyte ions, resulting in the good electrochemical performance and rate performance of the obtained carbon materials.
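The kinetic analysis described above follows the usual power-law treatment, i = a v^b, together with the current separation i(V) = k1 v + k2 v^(1/2) at a fixed potential (Equations (S1)-(S4) are in the Supporting Information and are not reproduced here). A compact sketch of both fits is given below; the numbers in the usage comment are made up purely for illustration and are not measured data.

```python
import numpy as np

def b_value(scan_rates, peak_currents):
    """Fit log(i) = log(a) + b*log(v); b near 1.0 suggests capacitive control,
    b near 0.5 suggests diffusion control."""
    b, _ = np.polyfit(np.log10(scan_rates), np.log10(np.abs(peak_currents)), 1)
    return b

def capacitive_share(scan_rates, currents_at_fixed_potential):
    """Split i(V) = k1*v + k2*sqrt(v) at one potential and return the fraction of
    the current attributable to the surface (capacitive) term at each scan rate."""
    v = np.asarray(scan_rates, float)
    i = np.asarray(currents_at_fixed_potential, float)
    design = np.column_stack([v, np.sqrt(v)])
    k1, k2 = np.linalg.lstsq(design, i, rcond=None)[0]
    return k1 * v / (k1 * v + k2 * np.sqrt(v))

# illustrative call with invented values (mV s^-1 and mA):
# b_value([5, 10, 20, 50, 100], [0.9, 1.6, 2.8, 5.9, 10.2])   # roughly 0.8
```

Repeating the second fit at each potential of the CV and integrating the capacitive part over the sweep yields the percentage contributions of the kind plotted in Figure S3.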
ZnPCN-1 was used as the electrode, with regular filter paper as the separator and 6 M KOH as the electrolyte, to assemble a symmetric supercapacitor device (ZnPCN-1//ZnPCN-1). As shown in Figure 4a, the CV curves exhibit a rectangular shape, indicating that the device displays favorable double-layer capacitance behavior. Moreover, even at a scan rate of 2000 mV s-1, the CV curve retains its rectangular shape, suggesting excellent rate capability. The charge-discharge curves at different current densities show symmetric isosceles-triangle shapes (Figure 4b), indicating good double-layer charge-discharge behavior of the prepared carbon electrode, consistent with the CV results. Figure 4c shows the specific capacitance of the device at current densities of 0.5, 1, 2, 5, 10, and 20 A g-1, corresponding to 124, 122, 116, 113, 111, and 100 F g-1, respectively. It is noteworthy that the device exhibits satisfactory rate performance, maintaining approximately 81% of its initial capacitance (100 of 124 F g-1) even at a current density of 20 A g-1. Additionally, the EIS plots confirm the low internal resistance and high ion-transfer rate of this device (Figure S4). The device demonstrates excellent cycling stability at a high current density of 50 A g-1, retaining approximately 82% of its initial specific capacitance after 50,000 cycles (Figure 4d). Furthermore, as shown in Figure 5d, the device achieves an energy density of 17.2 Wh kg-1 at a power density of 499 W kg-1. Even at a power density of 19 kW kg-1, the energy density remains as high as 13.4 Wh kg-1, demonstrating remarkable rate performance.

As is well known, organic electrolytes can significantly broaden the potential window of supercapacitors, thereby enhancing the energy and power density of these devices. The potential window of the ZnPCN-1//ZnPCN-1 device can easily be extended to 0-2.5 V when using the Et4NBF4 electrolyte. Figure 5a displays the CV curves of the device at different scan rates. All of the CV curves show a rectangular shape, indicating that the device still exhibits good double-layer behavior in the Et4NBF4 electrolyte. Figure 5b shows the GCD curves, which all manifest symmetric
isosceles triangle shapes, demonstrating favorable double-layer behavior. Interestingly, the Coulombic efficiency of the device is as high as 99.8%, even at a low current density of 0.5 A g-1, which indicates good prospects for application. The specific capacitances of the device at current densities of 0.5, 1, 2, 5, 10, and 20 A g-1 are 179, 178, 176, 175, 102, and 92 F g-1, respectively (Figure S5a). The capacitance retention of the device is 51% at a current density of 20 A g-1. As shown in Figure 5c, even after 5000 cycles at a current density of 20 A g-1, the device still retains 85.03% of its initial specific capacitance, demonstrating good electrochemical cycling stability. As depicted in Figure 5d, the device achieves a high energy density of 153.4 Wh kg-1 at a power density of 1242 W kg-1. Furthermore, the device maintains an energy density of 70.93 Wh kg-1 even at an ultrahigh power density of 47 kW kg-1. As shown in Table S3, the assembled devices exhibit excellent energy storage performance in both aqueous and organic electrolytes and compare favorably with published supercapacitor work, with performance far exceeding devices such as WC-E-100-48//WC-E-100-48 (11.0 Wh kg-1, 26.3 W kg-1) [41], WBMs-800//WBMs-800 (9.4 Wh kg-1, 227 W kg-1) [42], Co(OH)2@CW//CW (6.5 Wh kg-1, 236 W kg-1) [43], NiCo-P//CW (12.1 Wh kg-1, 395 W kg-1) [44], N-M-O//Carbon (20.1 Wh kg-1, 226 W kg-1) [45], and MnOx/PANI//Carbon (30.7 Wh kg-1, 800 W kg-1) [46]. This means that the device has good application prospects.

In summary, the device assembled with ZnPCN-1 exhibits excellent electrochemical performance. Firstly, the introduction of nitrogen atoms gives the porous carbon more active sites, enhancing its electrochemical performance. Secondly, the activation effect of NH4Cl optimizes the pore structure of the porous carbon, enabling it to store more electrolyte and providing more rapid pathways for ion transport.

Preparation of porous carbon materials: Zinc gluconate and NH4Cl were dissolved separately in deionized water and combined at mass ratios of 2:1, 1:1, and 1:2. The resulting solutions were then transferred to a 70 °C oven and dried for 24 h. The obtained mixtures were placed in a tube furnace and heated at a rate of 5 °C min-1 under a nitrogen atmosphere to 950 °C, where they were held for 2 h. The resulting carbon materials were labeled ZnPC, ZnPCN-0.5, ZnPCN-1, and ZnPCN-2 according to the mixing ratio (with ZnPC prepared from zinc gluconate alone).

Preparation of electrodes: The porous carbon, acetylene black, and PVDF were thoroughly mixed in a mass ratio of 8:1:1 and then coated onto nickel foam of about 1 cm x 1 cm, with the mass loading of each electrode kept at about 2 mg. Subsequently, the nickel foam was placed in a vacuum drying oven at 80 °C for 12 h. Finally, the dried nickel foam was pressed at 10 MPa in a tablet press to obtain the electrode.
Characterizations: The microstructure of the samples was characterized by X-ray diffraction (XRD, Rigaku D/Max 2500, Tokyo, Japan), field-emission scanning electron microscopy (FESEM, Hitachi S-3400, Tokyo, Japan), and transmission electron microscopy (TEM, JEM-2010EX, Tokyo, Japan). The pore structure of the obtained samples was examined via N2 adsorption/desorption experiments at 77 K using a micromeritics apparatus (BeiShiDe Instrument-S&T 3H-2000PS2, Beijing, China). The specific surface area was calculated by the Brunauer-Emmett-Teller (BET) method, and the pore size distribution and pore volume were calculated from the BJH model. All samples were degassed under vacuum at 200 °C for 6 h before testing. Raman spectra were collected on a Raman spectrometer (Jobin Yvon HR800, Paris, France).

Electrochemical measurements: Cyclic voltammetry (CV), galvanostatic charge-discharge (GCD), electrochemical impedance spectroscopy (EIS), and amperometry measurements were performed using a CHI760D electrochemical workstation (Chenhua, Shanghai, China). In addition, all EIS data were verified by Kramers-Kronig residual analysis to ensure the reliability of the obtained data. Briefly, the sample served as the working electrode, with Pt foil and a Hg/HgO electrode as the counter and reference electrodes, respectively; the electrodes were immersed in a 6 M KOH electrolyte solution to form a three-electrode cell operated at room temperature (Figure S6). In addition, two electrodes of similar mass were selected as the positive and negative electrodes, with filter paper as the separator, and 6 M KOH or 1 M Et4NBF4 was used as the electrolyte to assemble and package CR2032 coin cells. The specific capacitance of the three-electrode system (Cm, F g-1), the specific capacitance of the two-electrode system (Cs, F g-1), the energy density (Wh kg-1), and the power density (W kg-1) were estimated from the GCD curves using Equations (1)-(4), where I (mA) is the discharge current, Δt (s) is the discharge time, ΔV (V) is the voltage drop, and m (mg) is the mass of active material in the working electrode.

Conclusions

This study uses an efficient and simple preparation method in which zinc gluconate and NH4Cl are uniformly mixed, followed by one-step pyrolysis, to obtain nitrogen-doped porous carbon (ZnPCN-1). ZnPCN-1 exhibits a high specific surface area of 1162 m2 g-1 and a suitable nitrogen doping content of 4.57 at%. The addition of ammonium chloride not only optimizes the pore structure of the carbon but also introduces nitrogen heteroatoms that enhance the electrochemical performance of the carbon. As a result, ZnPCN-1 demonstrates outstanding electrochemical properties. The results indicate a specific capacitance of 221 F g-1 at a current density of 1 A g-1. The assembled symmetric supercapacitor achieves a high energy density of 17 Wh kg-1, and even after 50,000 cycles at a current density of 50 A g-1, it retains 82% of its initial capacitance. Furthermore, when using Et4NBF4 as the electrolyte, the operating voltage window of the symmetric device can easily be extended to 2.5 V, resulting in an energy density of up to 153 Wh kg-1, while 85% of the initial specific capacitance is maintained after 5000 cycles. This method employs inexpensive industrial materials and involves only dissolution and a one-step pyrolysis process, providing a new concept for the simple and green preparation of high-performance carbon materials.
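Equations (1)-(4) referred to in the Electrochemical measurements section are not reproduced in this text. The quantities they define are, however, conventionally obtained from the GCD curves with the standard expressions sketched below; this is a hedged illustration using common conventions for a symmetric two-electrode cell and may differ in detail from the authors' exact formulae.

```python
def specific_capacitance_3e(i_ma, dt_s, dv_v, m_mg):
    """Three-electrode specific capacitance C_m (F g^-1): C = I*dt / (m*dV)."""
    return (i_ma / 1000.0) * dt_s / ((m_mg / 1000.0) * dv_v)

def cell_capacitance_2e(i_ma, dt_s, dv_v, m_total_mg):
    """Two-electrode (cell) specific capacitance C_s (F g^-1), with m_total the
    combined active mass of both electrodes."""
    return (i_ma / 1000.0) * dt_s / ((m_total_mg / 1000.0) * dv_v)

def energy_density_wh_kg(c_s, dv_v):
    """Energy density E = C_s * dV^2 / (2 * 3.6) in Wh kg^-1
    (the factor 3.6 converts J g^-1 to Wh kg^-1)."""
    return c_s * dv_v ** 2 / (2.0 * 3.6)

def power_density_w_kg(e_wh_kg, dt_s):
    """Average power density P = 3600 * E / dt in W kg^-1."""
    return 3600.0 * e_wh_kg / dt_s
```

For orientation, these expressions reproduce the reported values: a cell capacitance of 124 F g-1 over a 1 V window gives 124/7.2, about 17.2 Wh kg-1, matching the KOH-device energy density quoted above.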
Figure 3. Electrochemical performance tested in a three-electrode system with 6 M KOH as the electrolyte: CV curves (a) of the electrodes at a scanning rate of 100 mV s-1 and GCD curves (b) measured at a current density of 1 A g-1 of samples; CV curves (c) at different scanning rates and GCD curves (d) at different current densities; rate capability curves (e) and EIS plots (f) of samples.

Figure 4. Electrochemical performance of ZnPCN-1//ZnPCN-1 in 6 M KOH: CV curves at different scanning rates (a), and GCD curves at various current densities (b); rate performance (c), and cycle performance curve for 50,000 cycles at the current density of 50 A g-1 (d) of the SSC device.

Figure 5. Electrochemical performance of ZnPCN-1//ZnPCN-1 in Et4NBF4 electrolyte: CV curves at various scanning rates (a), GCD curves at different current densities (b), cyclic performance for 50,000 cycles at the current density of 20 A g-1 (c), and Ragone diagram (d) of the SSC device.
7,884.8
2023-09-01T00:00:00.000
[ "Materials Science" ]
Rapid screening of in cellulo grown protein crystals via a small-angle X-ray scattering/X-ray powder diffraction synergistic approach A rapid and sensitive detection approach utilizing high-brilliance and low-background small-angle X-ray scattering and X-ray powder diffraction to detect protein microcrystals grown within living insect cells is described.

Introduction

Nowadays, it is well established that living cells from all kingdoms of life possess an intrinsic ability to form intracellular protein crystals, denoted 'in vivo grown crystals' or 'in cellulo crystals' (Schönherr et al., 2018). The assembly of intracellular proteins into native crystalline states could provide specific advantages for the organism, mainly in terms of storage and protection. However, this phenomenon also applies to recombinant proteins produced by heterologous gene expression, as highlighted by the growing number of examples predominantly observed in mammalian and baculovirus-infected insect cells. During recent years, novel developments in serial crystallography data collection strategies at X-ray free-electron lasers (XFELs) and synchrotron sources (Standfuss & Spence, 2017; Yamamoto et al., 2017; Yabashi & Tanaka, 2017) have paved the way to using in cellulo crystals with dimensions in the low micrometre or even the nanometre size range as suitable targets for X-ray crystallography (Gati et al., 2014; Schönherr et al., 2018). High-resolution structural information on several recombinant proteins has already been obtained from diffraction of in cellulo crystals, e.g. for the coral Dipsastraea favus derived fluorescent protein Xpa (Tsutsui et al., 2015), the metazoan-specific human kinase PAK4 in complex with Inka1 (Baskaran et al., 2015) and the BinAB larvicide from Lysinibacillus sphaericus (Colletier et al., 2016), as well as of cathepsin B (CatB; Redecke et al., 2013) and IMP dehydrogenase (IMPDH; Nass et al., 2020) from the parasite Trypanosoma brucei. These results question the earlier opinion that the crowded environment in living cells might impact the order of the crystalline structure (Doye & Poon, 2006). Moreover, they indicate that in cellulo protein crystallization is able to offer exciting possibilities complementary to conventional crystallization techniques (Chayen & Saridakis, 2008). The approach is particularly important for proteins that were/are not accessible to crystallization using established in vitro screening strategies, as shown for T. brucei IMPDH (Nass et al., 2020) and fully glycosylated T. brucei CatB (Redecke et al., 2013). In cellulo crystallization provides an alternative to the time-consuming optimization of protein purification and extensive crystal screening steps. Additionally, the quasi-native conditions in host cells prevent crystal distortion that could arise from non-physiological conditions imposed by recrystallization, and provide the opportunity to identify native co-factors present in the highly versatile natural reservoir of compounds within living cells (Nass et al., 2020). However, exploiting the tremendous potential of in cellulo protein crystallization requires a more detailed understanding of the cellular processes involved in crystal formation. Insights into the mechanisms that control the size and shape of crystals, as well as the identification of biological parameters suitable for screening approaches, could further widen the applications of in cellulo crystallization.
On the basis of a detailed comparison of reported intracellular protein crystallization events, specific requirements have been proposed that favour in cellulo crystal growth in fruitful interplay (Koopmann et al., 2012; Schönherr et al., 2015, 2018; Duszenko et al., 2015). These include the intrinsic crystallization tendency of the target protein under the specific environmental conditions provided by the individual cellular compartments. Moreover, high local protein concentrations seem to be required, which might result from a preceding protein phase separation event (Hasegawa, 2019). In insect cells, crystals occurred in the endoplasmic reticulum (CatB; Koopmann et al., 2012) and in peroxisomes (IMPDH, luciferase; Nass et al., 2020; Schönherr et al., 2015), depending on the native translocation signals harboured in the sequence of the recombinant proteins. Furthermore, a cytosolic localization of crystals was observed [calcineurin, avian reovirus nonstructural protein fused to green fluorescent protein (GFP-NS), IMPDH; Fan et al., 1996; Schönherr et al., 2015; Nass et al., 2020]. Thus, different cellular environments may represent the basis for developing a more systematic in cellulo crystallization screening approach that would exploit living cells as crystallization factories for a large number of recombinant proteins. An initial strategy to test the crystallization capability of living insect cells has already been proposed and applied to recombinant CPV1 polyhedrin crystals (Boudes et al., 2016, 2017).

The successful detection of protein crystals inside living cells represents a crucial, and somewhat challenging, task in the development of a versatile screening strategy for in cellulo crystallization. During recent years, a variety of methods have been optimized to identify even nanometre-sized protein crystals in conventional crystallization setups and to locate these crystals after mounting at the beamline (Becker et al., 2017). Unfortunately, the environmental challenges imposed by living cells largely prevent the direct and efficient detection of in cellulo crystals. Most frequently, bright-field microscopy methods including contrast-enhancement techniques, e.g. differential interference contrast (DIC) or integrated modulation contrast, are applied to visualize the intracellular crystals (Schönherr et al., 2015). The main advantages of these non-invasive methods include frequently accessible equipment, the lack of elaborate sample preparation steps and the good visualization of sufficiently sized crystals. However, the limited resolution of visible-light-based approaches, combined with marginal differences in refractive indices, makes it difficult to reliably differentiate ordered crystalline structures in the nanometre size range from the chaotic cellular background. For nanocrystals, transmission electron microscopy (TEM) (Stevenson et al., 2014) has been developed into a tool that enables the study and optimization of crystal formation processes in vitro (Stevenson et al., 2016) and can be used to characterize in cellulo crystals directly within the cellular environment. A resolution in the low nanometre size range allows the visualization of the crystal structure, which can also be applied to identify in cellulo crystals (Schönherr et al., 2018).
However, since TEM requires ultrathin sectioning (usually <90 nm), a crystal-containing cell has to be selected by chance from the entire population and the crystal must be intersected by the ultrathin cut. If intracellular crystal growth is restricted to a few cells in the entire culture, or only very few nanocrystals per cell are produced, this represents a significant limitation, which, together with the time-consuming sample preparation, hampers the simple and rapid detection of crystals in a cell culture. Second harmonic generation (SHG) microscopy used in combination with UV two-photon excited fluorescence, frequently referred to as second-order nonlinear optical imaging of chiral crystals (SONICC; Kissick et al., 2011; Haupert et al., 2012), represents another emerging technique to rapidly verify successful crystal formation in conventional screening setups with high sensitivity, selectivity and potential for automation (Becker et al., 2017; Tang et al., 2020). However, UV fluorescence is less helpful for intracellular crystals owing to the high protein concentration surrounding the crystal in the cellular environment, and high crystal symmetry may reduce the crystal-specific SHG signal in practice by about two orders of magnitude (Haupert et al., 2012). Together with the possibility of SHG signal generation by filaments within the cells (Campagnola & Loew, 2003), this could prevent reliable in cellulo crystal detection.

A direct proof of the presence of crystallites is given by the detection of specific Bragg diffraction of electrons or X-rays from a sample. The technique of micro-electron diffraction has the potential to unravel structures of proteins and other biological molecules at 1-3 Å resolution from a few crystals in the nanometre size range, because of the strong interaction between electrons and the crystal. However, ultrathin samples are required, which are frequently obtained by milling (Shi et al., 2013; Jones et al., 2018). X-ray powder diffraction (XRPD) provides a fingerprint of every crystalline phase, each exhibiting a unique diffraction pattern, and differences between the various crystalline forms can be observed by examining the peak positions and intensities in XRPD patterns (Katrincic et al., 2009). Even small changes in the form of new peaks, additional shoulders or shifts in peak positions often imply the presence of a second polymorph (Davidovich et al., 2004). Thus, information about the crystalline sample composition is obtained, revealing whether the sample consists of one or more phases. During the past decade, XRPD has moved beyond fingerprinting of microcrystalline samples through the extraction of accurate lattice parameters, elucidating new structural information on biological macromolecules at low and medium resolution (Von Dreele, 2019; Karavassili & Margiolaki, 2016; Karavassili et al., 2017; Spiliopoulou et al., 2020; Margiolaki, 2019). Densely packed, randomly oriented crystals produce Debye-Scherrer rings on the detector that allow the evaluation of the diffraction capabilities of the sample (Von Dreele et al., 2000; Margiolaki et al., 2007). Even if a relatively small number (<50) of low-angle peaks is considered to be sufficient to precisely refine the unit-cell parameters (Von Dreele, 2019), the volume of cellular soft matter that surrounds intracellular crystals significantly restricts the crystal density.
Thus, the powder diffraction intensity of intracellular crystals at synchrotron crystallography beamlines is often restricted, especially when the crystal-to-cell number ratio in the sample is low (Margiolaki & Wright, 2008). Small-angle X-ray scattering (SAXS) is performed in solution to structurally characterize biological macromolecules under dilute conditions. SAXS instruments are optimized to minimize the scattering background in order to detect weak scattering signals that are often orders of magnitude smaller in intensity than diffraction peaks. SAXS profiles provide information on size, shape and oligomerization state, but also about interactions between particles in solution. SAXS is extremely sensitive to the formation of crystallites, and this technique has previously been used to analyse protein nucleation (Kovalchuk et al., 2016) and crystallization kinetics (Poplewska et al., 2019). Furthermore, the micro- and nano-GISAXS methods could even significantly exceed the sensitivity of the SAXS technique for studying protein nucleation (Pechkova & Nicolini, 2017).

In this study, we exploited SAXS and XRPD for a rapid and sensitive detection of protein microcrystals grown within insect cells. We employed the high-brilliance and low-background P12 bioSAXS beamline of the EMBL at the PETRA III storage ring (DESY, Hamburg). Four test proteins were measured: Photinus pyralis luciferase, T. brucei IMPDH and CatB, and Neurospora crassa HEX-1. Mock-virus-infected and uninfected cells were used as controls. Combining the high sensitivity of SAXS with XRPD analysis methods, we demonstrate that it is possible to assess within seconds whether a cell culture contains microcrystalline material, based on the presence of Bragg peaks in the recorded scattering profiles, even for target proteins that form crystals in only a small percentage of cells. This screening approach has the potential to overcome the methodological bottleneck of crystal detection within living cells and opens up opportunities to investigate and understand the influence of growth conditions, stress, temperature, starvation, cellular compartmentalization and the choice of cell line on the size and formation of in cellulo crystals.

Cloning

Cloning procedures for T. brucei IMPDH (GenBank accession number M97794) and T. brucei CatB (GenBank accession number AY508515) have been described previously (Nass et al., 2020; Koopmann et al., 2012). The genes coding for P. pyralis luciferase (Luc, GenBank accession number AB644228) and N. crassa HEX-1 (GenBank accession number XM_958614) were amplified by PCR using the primers 5'-GAAGACGCCAAAAACATAAAGAA-3' (sense) and 5'-CAATTTGGACTTTCCGCCCTTC-3' (antisense), and 5'-TACTACGACGACGACGCTCACG-3' (sense) and 5'-GAGGCGGGAACCGTGGACG-3' (antisense), respectively. ALLin HiFi DNA polymerase (highQu) was used according to the manufacturer's instructions. The amplicons were ligated into a modified pFastBac1 vector (Thermo Scientific) containing the sequence 5'-ATGGGCGCCTAA-3' between the BamHI and HindIII restriction sites to accommodate an EheI restriction site. The vector was linearized using FastDigest EheI (Thermo Scientific) and blunt-end ligation was achieved using T4 DNA ligase (Thermo Scientific) according to the manufacturer's protocol. Plasmids were transformed into competent Escherichia coli DH5 cells (Stratagene) and purified (GeneJET plasmid miniprep kit, Thermo Scientific). The integrity of the cloned sequences was verified by Sanger sequencing.
All generated pFastBac1 plasmids were transformed into competent E. coli DH10EmBacY cells (Geneva Biotech) according to the manufacturer's instructions. Recombinant bacmid DNA was purified using the GeneJET plasmid miniprep kit (Thermo Scientific) and subsequently used for PCR analysis of the transposed sequence, employing standard pUC M13 forward and reverse primers. For mock-virus generation, bacmid DNA was directly isolated from E. coli DH10EmBacY cells without prior transposition of a recombinant gene of interest. Insect cell culture Sf9 and High Five insect cells were held in suspension culture in serum-free ESF921 insect cell culture medium (Expression Systems) at 300 K on an orbital shaker at 100 r min⁻¹. Suspension culture cells were seeded at 0.5-1 × 10⁶ cells ml⁻¹, in a total volume of 25 ml in an upright-standing 75 cm² disposable T-flask. Cell density was counted daily and cultures were split when the density reached 4 × 10⁶ cells ml⁻¹ for High Five or 6 × 10⁶ cells ml⁻¹ for Sf9 cells. Recombinant virus generation Recombinant bacmid DNA was used for lipofection of Sf9 insect cells grown in ESF921 serum-free medium at 300 K using Escort IV reagent (Sigma-Aldrich) according to the manufacturer's instructions. In brief, 0.45 × 10⁶ Sf9 cells per well in a 12-well plate were transfected with 1 µg of bacmid DNA and 3 µl of Escort IV reagent for 18 h. After 4 days of incubation at 300 K the first supernatant (P1) was harvested by centrifugation at 21 000 relative centrifugal force (r.c.f.) for 30 s. For high-titre stock production (third passage, P3), 0.9 × 10⁶ Sf9 cells per well in a six-well plate were infected with 100 µl of P1 or 20 µl of P2 viral stock and incubated for 4 days. Viral P2 and P3 stocks were harvested as described above. Viral titre determination A serial dilution assay was used to calculate the titre of the viral P3 stocks. In a 96-well plate, a suspension of 3 × 10⁴ High Five cells in 180 µl of antibiotic-free ESF921 insect cell culture medium was added to each well and incubated for 30 min to let the cells attach to the bottom. Then, a 1:10 dilution of the virus solution with medium was prepared and 20 µl portions of this solution were added to each of six wells of the first row. For each serial dilution step the medium containing the virus was mixed in the well using a multichannel pipette and 20 µl of the supernatant was transferred into the next row. Pipette tips were discarded after each row; eight rows were prepared per titration. After 4 days at 300 K, enhanced yellow fluorescent protein (EYFP) fluorescence indicating a successful infection was evaluated, and wells with at least two fluorescent cells were counted as positive. The virus titre was calculated using the TCID50 method (tissue culture infectious dose 50; Reed & Muench, 1938). Sample preparation for X-ray measurements In one well of a six-well cell culture plate, 8 × 10⁵ Sf9 or High Five cells were plated in 2 ml of ESF921 insect cell culture medium and subsequently infected with P3 stock of the recombinant baculovirus (rBV) using a multiplicity of infection (MOI) of 1. Cells were incubated as a semi-adherent culture at 300 K for 40-96 h until needed for the diffraction experiments. The cells were then gently flushed from the well bottom with a 1000 µl pipette and centrifuged for 30 s at 270 r.c.f., and the cell pellet was resuspended in 25 µl of Tris-buffered saline (TBS; 20 mM Tris, 150 mM NaCl, pH 7.0).
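The titration just described lends itself to a short script. Below is a minimal sketch of the Reed & Muench calculation cited in the text; the well counts are hypothetical, six wells per dilution row are assumed as in the protocol above, and the series is assumed to bracket the 50% endpoint.

```python
import numpy as np

def tcid50_reed_muench(log10_dilutions, n_positive, n_total):
    """log10 of the dilution giving 50% infected wells (Reed & Muench method).

    Assumes the dilution series actually brackets the 50% endpoint; wells are
    scored positive as described in the text (EYFP fluorescence).
    """
    n_positive = np.asarray(n_positive, dtype=float)
    n_negative = np.asarray(n_total, dtype=float) - n_positive
    cum_pos = np.cumsum(n_positive[::-1])[::-1]   # accumulated from the most dilute row
    cum_neg = np.cumsum(n_negative)               # accumulated from the least dilute row
    pct = 100.0 * cum_pos / (cum_pos + cum_neg)
    above = np.where(pct >= 50.0)[0][-1]          # last row with >= 50% infected wells
    below = above + 1
    prop_dist = (pct[above] - 50.0) / (pct[above] - pct[below])
    step = log10_dilutions[below] - log10_dilutions[above]
    return log10_dilutions[above] + prop_dist * step

# Hypothetical scoring of eight ten-fold dilution rows, six wells each (illustration only).
log_endpoint = tcid50_reed_muench(
    log10_dilutions=[-1, -2, -3, -4, -5, -6, -7, -8],
    n_positive=[6, 6, 6, 5, 3, 1, 0, 0],
    n_total=[6] * 8,
)
print(f"titre = 10^{-log_endpoint:.2f} TCID50 per inoculated volume")
```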
40-45 µl of this suspension was transferred into the sample tubes and immediately used for the X-ray scattering experiments. For dilution series of crystal-carrying cells, High Five insect cells expressing the target gene were mixed with mock-rBV-infected cells in a 1:2 ratio. Up to seven serial dilution steps were carried out directly prior to the X-ray scattering experiments, with samples prepared in TBS as previously mentioned. Light microscopy For cell and crystal counting, cell cultures were imaged with a Leica DM IL LED microscope equipped with a 20× objective and a Leica MC170 HD microscope camera prior to the diffraction experiment. The crystal-containing cells and those without crystals were manually counted, and their ratio was calculated. The images of the cell cultures were generated using a Zeiss Observer.Z1 inverted microscope with a 20× objective and an AxioCam MRm microscope camera. Propidium iodide staining of infected cells To visualize the effects of the sample preparation procedure on cell viability, High Five insect cells were infected as described above for diffraction experiments. Four days after infection, cells were imaged within the wells on a Zeiss Observer.Z1 microscope using differential interference contrast mode and wide-field fluorescence. The cells were then gently flushed from the well bottom with a 1000 µl pipette and centrifuged for 30 s at 270 r.c.f., and the cell pellet was resuspended in 25 µl of TBS containing 500 ng ml⁻¹ of propidium iodide. Cells were incubated for 10 min at room temperature and then spread on a glass coverslip and imaged again as described above. All samples were prepared in triplicate, imaged and manually counted. X-ray data collection Data were collected at the EMBL P12 beamline (PETRA III, DESY, Hamburg, Germany). A photon energy of 10 keV (1.24 Å) was used throughout the experiments, with a photon flux of about 10¹³ photons s⁻¹ at the sample position. Data [I(s) versus s, where s = 4π sin(θ)/λ, 2θ is the scattering angle and λ is the X-ray wavelength] were recorded at a sample-detector distance of 3.00 m using a Pilatus 6M detector (setup 1) or a Pilatus 2M detector (setup 2), both from DECTRIS, Switzerland. 40-45 µl of the insect cell suspension prepared as described above was loaded bubble-free into the reaction vessels of the SAXS setup, of which 30 µl was transferred into a temperature-controlled 1.8 mm quartz capillary using the automatic bioSAXS sample changer (Arinax) (Round et al., 2015). The high cell density prevented cell settling in the sample tube during the automated loading of up to eight consecutive samples by the sample changer robot. Using a focal spot of 0.2 × 0.12 mm (FWHM) in a fixed-flow measurement at 293 K, 40 detector frames were recorded per sample, separated by 40 buffer frames, all with a single-frame exposure time of 0.045 s and a readout time of 0.005 s, resulting in a total exposure time of 4 s per data set. For each cell sample, a single data set was collected with the corresponding buffer (TBS), enabling buffer subtraction during data analysis. Data processing For each sample and corresponding buffer measurement, the 40 individual 2D-detector data frames collected during the course of exposure were summed to produce a final 2D image that was subsequently radially averaged using im2dat (Franke et al., 2017) to generate 1D scattering profiles (data deposited with the Small Angle Scattering Biological Data Bank, SASBDB; http://www.sasbdb.org).
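The relation just quoted makes it straightforward to move between the SAXS and powder-diffraction conventions used in the rest of the analysis. The snippet below is a small utility sketch (not code from the study) that converts a momentum-transfer value in Å⁻¹ to the scattering angle 2θ and to the real-space spacing d = 2π/s used later in the text; the 1.24 Å wavelength corresponds to the 10 keV photon energy.

```python
import numpy as np

WAVELENGTH = 1.24  # Angstrom (10 keV), as used at the P12 beamline

def s_to_two_theta(s, wavelength=WAVELENGTH):
    """Convert momentum transfer s = 4*pi*sin(theta)/lambda (A^-1) to 2theta in degrees."""
    return np.degrees(2.0 * np.arcsin(np.asarray(s) * wavelength / (4.0 * np.pi)))

def s_to_d(s):
    """Real-space spacing d = 2*pi/s in Angstrom."""
    return 2.0 * np.pi / np.asarray(s)

# Example: a Bragg peak observed at s = 0.10 A^-1 in a buffer-subtracted profile.
s_peak = 0.10
print(f"2theta = {s_to_two_theta(s_peak):.3f} deg, d = {s_to_d(s_peak):.1f} A")
```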
The data measured from the TBS control were then subtracted, applying the ATSAS program suite (Petoukhov et al., 2012). 1D profile plots were created with PRIMUS (Konarev et al., 2003). The data were converted from I(s) to I(2θ) to facilitate indexing and profile refinement with software packages designed for the analysis of XRPD data, as described in the following sections. Data clustering and Pawley analysis Since indexing of the acquired data was not feasible owing to the paucity of diffraction peaks, information about data similarities was evaluated via principal component analysis (PCA) on the I(2θ) data of all data sets over the 0.4-2.0° 2θ range, using HighScore Plus (Degen et al., 2014). This program was also used to extract accurate unit-cell parameters by applying the Pawley approach (Pawley, 1981) for whole powder pattern fitting (WPPF). In the absence of indexing solutions, reasonable starting values for the unit-cell parameters were retrieved from relevant PDB entries (Supplementary Table S1). Peak profiles were simulated using a pseudo-Voigt function with the standard description for FWHM and peak asymmetry variation over the 2θ range (Von Dreele, 2019). The background was initially estimated and later modelled with a shifted Chebyshev polynomial with a varying number of terms (~10-14), depending on the data set, which were refined during Pawley analysis. Parameters were included for refinement of the polynomial background, as well as for the instrumental angular offset (zero shift). In the case of highly overlapping reflections, the intensity was equipartitioned between the constituent peaks and gradually refined. Results and discussion Four test proteins were measured to evaluate the capability of the low-background SAXS beamline P12 for reliable intracellular crystal detection in living insect cells. Of these proteins, three are known to crystallize in living insect cells infected by rBV, but they differ in crystallization efficiency, as well as in crystal volume and morphology. T. brucei IMPDH and CatB have previously been reported to form micrometre-sized needle-shaped crystals in most cells of a population; these crystals diffract XFEL pulses and synchrotron radiation to high resolution, enabling the elucidation of the corresponding protein structures (Koopmann et al., 2012; Redecke et al., 2013; Gati et al., 2014; Nass et al., 2020). Needle-shaped in cellulo crystals were also observed for firefly (P. pyralis) luciferase, growing up to a remarkable length of more than 180 µm, but their spontaneous disintegration after cell membrane disruption has prevented the validation of X-ray diffraction so far (Schönherr et al., 2015). Additionally, HEX-1, a natively self-assembling protein that forms the solid, crystalline core of Woronin bodies in the fungus N. crassa (Tenney et al., 2000), assembles into regular spindle-shaped crystals with a hexagonal cross section in almost all insect cells of the culture, which has not been reported previously. Detection of in cellulo crystals using SAXS and XRPD Prior to the diffraction experiment, the previously observed intracellular crystallization tendency of the test proteins in rBV-infected High Five insect cells was verified by light microscopy at day 4 post infection (p.i.). No ordered structures were detected in the uninfected or in the mock-rBV-infected cells, which served as controls for the subsequent diffraction experiments.
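As a rough illustration of the clustering step, the sketch below runs a two-component PCA on a stack of 1D I(2θ) profiles and prints the scores, in which samples sharing a crystalline phase should group together. It is a generic scikit-learn sketch under stated assumptions, not the HighScore Plus workflow used in the study: the profiles are random placeholders, and real data would be the background-subtracted curves restricted to the 0.4-2.0° 2θ window.

```python
import numpy as np
from sklearn.decomposition import PCA

# One background-subtracted I(2theta) curve per sample on a common 2theta grid
# (n_samples x n_points). Random placeholders stand in for measured curves here.
rng = np.random.default_rng(0)
profiles = rng.random((12, 400))

# Normalise each curve so clustering reflects peak positions rather than overall scale.
profiles = (profiles - profiles.mean(axis=1, keepdims=True)) / profiles.std(axis=1, keepdims=True)

pca = PCA(n_components=2)
scores = pca.fit_transform(profiles)

# Samples containing the same crystalline phase should fall close together in PC space;
# cluster membership can be read off the score plot (or assigned with k-means on `scores`).
for i, (pc1, pc2) in enumerate(scores):
    print(f"sample {i:2d}: PC1 = {pc1:+.2f}, PC2 = {pc2:+.2f}")
print("explained variance ratio:", pca.explained_variance_ratio_)
```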
The percentage of crystal-containing cells within the entire culture, subsequently denoted the 'crystallization efficiency', was estimated to be around 70-80% for Luc, 40-60% for IMPDH, 50-90% for CatB and more than 90% for HEX-1, varying slightly depending on the individual culture (Fig. 1). Immediately before the X-ray experiments at P12, dense cell suspensions were prepared in TBS. At this stage, 60-80% of the rBV-infected cells were still vital in all samples, as confirmed by propidium iodide staining (Fig. 2). Thus, neither virus infection and intracellular crystal growth nor the sample preparation procedures affected the integrity of the predominant fraction of the High Five cells. Moreover, the percentage of crystal-containing cells remained almost constant during sample preparation. Only for luciferase-producing cells was the proportion of crystal-containing cells significantly reduced, from approximately 50 to 30% (Fig. 2). The samples were automatically loaded with a robotic sample changer into the quartz capillary for X-ray diffraction [Fig. 3(a)]. The short exposure time of 0.045 s and readout time of 0.005 s per frame in the steady-state mode resulted in a measurement time per sample of 4 s, since 40 detector frames were individually recorded for each sample, followed by 40 frames of buffer irradiation. Additionally considering the time required for automated sample loading and removal as well as capillary cleaning, eight consecutive diffraction data sets were collected within 24 min without opening the hutch, representing the best compromise between efficient data collection and the settling and survival of the insect cells in TBS. Whenever samples previously scored as containing crystal-bearing cells were irradiated, summation of the detector frames consistently revealed the presence of Debye-Scherrer rings [Fig. 3(b)], resulting from the orientational average of the Bragg reflections from the many small crystals randomly oriented in the cell suspension, a typical observation in XRPD measurements. No rings were observed for the control samples. Subtraction of the buffer signal and radial averaging resulted in 1D plots representing the intensity versus momentum transfer s (Fig. 4). The corresponding real-space distances are given by d = 2π/s. The scattering curves of the crystal-containing cell samples exhibit clear peaks at defined s values, representing the Debye-Scherrer rings [Fig. 4, curves (a)-(d)]. Depending on its unit cell, each crystal type produced a distinct XRPD profile that can act as a fingerprint of the crystallite. The intensity of the peaks depends on the overall scattering capability of the irradiated part of the sample. Using an X-ray beam of 0.20 × 0.12 mm and a 1.8 mm quartz capillary, a volume of 0.043 mm³ is irradiated, which could incorporate several thousand cells, assuming a diameter of approximately 0.030 mm per cell. Comparable magnitudes of the scattering intensity can be recorded when just a few relatively large crystals are present within the irradiated volume, or when a large number of small crystals are illuminated; what matters is that the total number of crystallographic unit cells is above the detection limit defined by the photon flux of the X-ray beam. Thus, the comparatively low crystallization efficiency of IMPDH (40-60%) and Luc (70-80%) is compensated by the significantly increased scattering volume of these long needle-shaped crystals (Fig.
1), resulting in a comparable intensity of the dominant scattering peaks observed for the more abundant but smaller crystals of HEX-1 (>90% efficiency) and CatB (up to 90% efficiency). Consequently, the presence of specific peaks in the scattering curve reliably indicates the presence of crystalline structures with the scattering volume suitable for detection at the given experimental conditions, but the peak intensity on its own does not repre-sent a suitable measure to compare the number and/or size of different crystallites in the living cells (Fig. 3). Extraction of refined unit-cell parameters The low-angle region of XRPD data usually allows for a precise refinement of the unit-cell parameters of the diffracting crystals, if pure and highly dense microcrystalline suspensions are used in conventional powder diffraction experiments (Margiolaki, 2019). In our samples, a significant volume is occupied by the solvent and the soft matter of the cells, limiting the accessible crystal density and thus the intensity of the Bragg scattering patterns. Only a few significant peaks at low s values can be detected in the scattering curves of intracellular Luc, IMPDH, CatB and HEX-1 crystals (Fig. 4), preventing ab initio indexing. It has been demonstrated in earlier studies (Norrman et al., 2006;Fili et al., 2015;Valmas et al., 2015) that information about data similarities can be evaluated via PCA. PCA reduces the dimensionality of data sets by projecting them to distinct principal component (PC) axes, which are planes in the multidimensional space (Hotelling, 1933). By definition, the first PC is the plane where data exhibit the largest variance when projected along it. Subsequent PCs must be orthogonal to the first one. Once the required number of PCs is identified (typically two or three), data are projected into a new coordinate system defined by these PCs. The position of each observation in the PC coordinate system and its distance to other observations is indicative of the similarities between the observations. Analysis performed on the I(2) data over the 0.4-2.0 2 range produced four distinct clusters for the samples under study, each containing one of the four different phases observed in our experiments ( Supplementary Fig. S1). Clustering not only allowed us to detect the existence of four well separated crystalline phases in our data (marked A-D in Supplementary Fig. S1), even before their identification, but also enhanced the rapidity of the analysis. Even when only a few peaks are present, accurate unit-cell parameters can be extracted from XRPD data sets using WPPF procedures (Karavassilia & Margiolaki, 2016;Margiolaki, 2019). On the basis of the starting lattice parameters, Pawley analysis (Pawley, 1981) theoretically simulates the experimental profiles in terms of peak shapes and background and, most importantly, allows for their refinement. Here, a structural model is not required, since peak intensities are considered as refinable parameters, contrary to Rietveld refinement (Rietveld, 1969). Using the reported unit-cell dimensions and space groups determined by X-ray crystallographic structure elucidation of T. brucei IMPDH and CatB (using in cellulo grown crystals), as well as of P. pyralis Luc and N. crassa HEX-1 (using crystals grown by microbatch and vapour diffusion techniques in vitro), as reasonable starting values (Supplementary Table S1), accurate lattice parameters were extracted for each data set (Fig. 5). 
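Since Pawley fitting hinges on an analytical peak-shape model, a minimal pseudo-Voigt profile is sketched below: a linear mix of a Lorentzian and a Gaussian sharing one FWHM. This is the textbook form only; it is not the exact parameterization refined in HighScore Plus, and the FWHM variation with 2θ and the peak-asymmetry correction mentioned above are deliberately omitted. The peak positions, widths and areas in the toy pattern are placeholders.

```python
import numpy as np

def pseudo_voigt(two_theta, center, fwhm, eta, area=1.0):
    """Pseudo-Voigt peak: eta * Lorentzian + (1 - eta) * Gaussian, sharing one FWHM.

    Textbook form only; the FWHM variation with 2theta and the peak asymmetry
    used in the actual whole powder pattern fitting are not modelled here.
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((two_theta - center) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    gamma = fwhm / 2.0
    lorentz = gamma / (np.pi * ((two_theta - center) ** 2 + gamma ** 2))
    return area * (eta * lorentz + (1.0 - eta) * gauss)

# Toy two-peak pattern on a flat background over the 0.4-2.0 degree 2theta window.
x = np.linspace(0.4, 2.0, 800)
pattern = 0.05 + pseudo_voigt(x, 0.72, 0.02, 0.4) + pseudo_voigt(x, 1.15, 0.03, 0.4, area=0.6)
print(f"maximum intensity: {pattern.max():.1f}")
```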
A complete list of the refined reflections and their position in 2, d spacing and momentum transfer is presented in Supplementary Tables S2-S5. For the Luc data set, the refined lattice parameters revealed a significant increase in the length of the unit-cell axes a and b by approximately 10 Å , compared with the expected values extracted from PDB entry 1lci (Conti et al., 1996) (Supplementary Table S1). The unit cell of Luc in cellulo crystals has not been determined so far, but these differences indicate that the intracellular crystal growth affects the unit-cell geometry of Luc crystals. On the other hand, the detection of specific Bragg reflections from the intracellular Luc structures represents the first proof of the crystalline character. This result confirms our hypothesis that the intact cell protects the crystals from deterioration induced by environmental changes, e.g. during cell lysis and crystal isolation (Schö nherr et al., 2015). For the other three data sets, Pawley analysis resulted in reasonable agreement of the refined and the expected unit-cell parameters (Supplementary Table S1). At least for IMPDH and CatB, this was expected, since the starting parameters have been obtained from the corresponding X-ray structures elucidated using these in cellulo grown crystals [IMPDH, PDB code 6rfu (Nass et al., 2020); CatB, PDB 4hwy/4n4z (Redecke et al., 2013;Gati et al., 2014)]. However, the intracellular environment obviously did not change the unit-cell geometry of the HEX-1 crystals, as shown by the agreement with the parameters of crystals grown by applying the sitting drop vapour diffusion method (PDB code 1khi; Yuan et al., 2003). Sensitivity of in cellulo crystal detection One of the major obstacles in intracellular protein crystallization is the observation that the proportion of crystalcontaining cells within the entire culture can be very low. By applying light microscopy, one sometimes detects well ordered structures of a recombinant target protein in only 1% (or even less) of cells, rendering the proof of successful in cellulo crystallization a laborious and time-consuming effort. We have therefore further assessed the sensitivity of the scatteringbased detection method by irradiation of a dilution series of High Five insect cells containing intracellular crystals of CatB and HEX-1. Starting from 100% infected cells, infected cells were diluted in a 1:2 ratio with mock-virus-infected cells. The intensity of the distinct diffraction peaks in the scattering curves consistently drops with each dilution step owing to the reduced number of crystals in the irradiated sample volume (Fig. 6). However, the overall course of the scattering curve was not affected. At a dilution of 16-fold, corresponding to 0.34 and 5.68% of cells in the sample that contain in cellulo CatB and HEX-1 crystals, respectively, even the originally most intense peaks can barely be distinguished from the background scattering from the cell suspensions. Progressive dilution yields scattering curves superimposable to that of the mock-virus-infected cells, defining the detection limit of the crystalline material in the irradiated volume at the specific conditions defined by this experimental setup. Considering the uncertainties in the determination of the detection limit, e.g. 
a slight volume increase of the insect cells after baculovirus infection (Schopf et al., 1990) and individually varying cell sizes, this scattering approach enables the rapid detection of intracellular crystals of CatB and HEX-1 if present in at least 0.3-6% of all cells in the culture, depending on the individual protein. A comparable detection limit was determined for IMPDH in cellulo crystals in High Five cells ( Supplementary Fig. S2). Impact of the insect cell line It was previously reported that the crystallization efficiency of recombinant target proteins in living insect cells varies depending on the individual cell line (Fan et al., 1996). In High Five cell cultures, a larger proportion of cells produced intracellular crystals of the heterodimeric calcineurin complex, compared with Sf9 cell cultures. Our study clearly confirms this correlation. A significant drop in crystallization efficiency, ranging between 45 and 84%, was observed after infection of Sf9 cells with the same MOI of recombinant rBV stocks encoding CatB and HEX-1 (Fig. 7). Expectedly, the reduced crystalline scattering volume of the Sf9 cell samples leads to a decreased intensity of the distinct Bragg peaks in the scattering curves [Figs. 7(c) and 7( f )]. The peak positions, however, which directly depend on the symmetry and the unitcell parameters of the irradiated crystals, did not change. Next to the important proof of the presence of crystalline material, the peak fingerprint obtained from the scattering data represents a precise and highly sensitive marker for the crystal architecture, which is not affected by the insect cell line, at least for the IMPDH, CatB and HEX-1 ( Supplementary Fig. S2) proteins analysed in this study. This marker is much more reliable than the visual inspection of the crystals by light microscopy, which basically confirmed the needle-shaped tetragonal morphology of the IMPDH and CatB crystals and the elongated spindle-shaped hexagonal morphology of HEX-1 crystals, if grown in Sf9 cells (Fig. 7). Timeline of intracellular crystal growth The timing of the X-ray measurements represents another parameter that essentially affects a reliable scoring of an in cellulo crystallization experiment. In the applied baculovirus expression vector system (BEVS; Smith et al., 1983) recombinant target gene expression is controlled by the Autographa californica multiple nucleopolyhedrovirus (AcMNPV) polyhedrin promotor. Owing to its activation late in the infection cycle (Chambers et al., 2018), target protein production starts approximately 24 h after rBV infection of the insect cells. First indications of intracellular crystal formation can be detected by light microscopy at least 72 h (3 days) p.i., as previously shown by real-time investigation of the spontaneous crystallization processes of P. pyralis Luc and GFP-NS from avian reovirus (Schö nherr et al., 2015), as well as of T. brucei IMPDH (Nass et al., 2020). Crystal growth usually continued up to day 5 p.i., when the majority of cells started to gradually lyse, triggered by the ongoing viral proliferation process. The associated environmental change can significantly affect the integrity and thus the X-ray diffraction capacity of in cellulo crystals (Schö nherr et al., 2015), defining the optimal time slot for intracellular crystal detection as between 24 and 120 h p.i. 
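The arithmetic behind the dilution series is simple enough to sketch. Assuming each 1:2 step is a straight two-fold dilution of crystal-containing cells with mock-infected cells, the fraction of crystal-containing cells after n steps is just the starting efficiency divided by 2^n. The starting efficiencies below are placeholders chosen only so that the 16-fold values land near the 0.34% and 5.68% quoted earlier; they are not measured numbers.

```python
def crystal_cell_fraction(initial_efficiency, n_twofold_steps):
    """Fraction of cells still containing crystals after n successive 1:2 dilutions
    of the infected culture with mock-rBV-infected cells (two-fold per step)."""
    return initial_efficiency / (2 ** n_twofold_steps)

# Placeholder starting efficiencies, chosen to reproduce the quoted 16-fold values.
for name, efficiency in [("CatB-like sample", 0.054), ("HEX-1-like sample", 0.91)]:
    frac = crystal_cell_fraction(efficiency, n_twofold_steps=4)   # 2**4 = 16-fold
    print(f"{name}: {100 * frac:.2f}% crystal-containing cells at 16-fold dilution")
```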
[Figure caption] 1D radially averaged X-ray scattering data of High Five insect cells containing intracellular crystals of the target proteins CatB (a) and HEX-1 (b) (SASBDB IDs SASDH76 and SASDH66). The percentage of crystal-containing cells within the entire culture of each sample, as determined by light microscopy, is presented next to the scattering curves. The detection limit for in cellulo crystals using X-ray scattering at the P12 beamline setup 1 was determined to range between 0.3 and 6% of crystal-containing cells, depending on the respective protein. The insets show the scattering curve of the 16-fold diluted sample compared with that of mock-infected cells.
However, intracellular Luc crystals showed an unexpected dynamic degradation and reassembly within the same living cell over the entire growth period (Schönherr et al., 2015), which turns the definition of the optimal time for detection into a more complicated task. Samples with different offsets between the insect cell infection and the X-ray diffraction experiment were prepared to monitor the time-dependent powder diffraction of High Five cells infected with rBVs encoding all four test proteins used in this study. On the basis of the results mentioned above, offsets ranging between 40 and 94.5 h were tested. In the scattering curves of cells producing Luc, CatB and HEX-1, Bragg diffraction peaks clearly distinguishable from the background scattering of the cells consistently appeared at approximately 51 h p.i. (Table 1 and Supplementary Fig. S4). Subsequently, the peak intensities increased up to approximately 81 h p.i. and then remained constant. The Bragg peak intensities of IMPDH-producing cells exhibited a comparable trend, but the onset of crystal detection was delayed by 10 h, starting at approximately 61 h p.i. However, after 64 h p.i. the combination of quantity, volume and intrinsic order of the crystalline material formed by all test proteins in the infected insect cells was consistently sufficient for detectable Bragg scattering, even if the individual parameters depended strongly on the individual protein crystallization process. Our data indicate that the high brilliance and low background afforded by the SAXS instrument setup (e.g. the all-in-vacuum beam path) enable reliable scoring of in cellulo crystallization trials. Obtaining insights into the kinetics of intracellular protein crystallization represents another reason to monitor the time dependence of in cellulo crystal growth. The associated molecular mechanisms are difficult to determine in the cellular context, preventing a comprehensive understanding so far. Initial insights have been obtained by live-cell imaging techniques, but only after the size of the tracked crystal exceeded the detection limit of DIC light or fluorescence microscopy (Schönherr et al., 2015), which is far beyond the nucleation event and the initial growth phase.
[Table 1 caption] Timeline of intracellular crystal growth in High Five cells. X-ray diffraction experiments using setup 2 were performed at the indicated time points after insect cell infection with rBVs encoding Luc, IMPDH, CatB and HEX-1. The earliest time point at which a Bragg diffraction peak was detected is indicated with a plus sign (+). Arrows illustrate changes in the intensity of the Bragg peaks compared with the previous time point. The consistent decrease of the signal intensity at the 64 h time point is most likely attributable to a cell culture problem rather than to an effect of the intracellular crystallization process.
Microscopy-based techniques are particularly problematic for proteins that exhibit a low in cellulo crystallization efficiency. These techniques focus on a few individual cells in the culture that have been selected by chance without guarantee that crystals will form inside these cells. Kinetic analysis by SONICC strongly depends on the orientation and symmetry of the growing crystals, which affects the signal intensity (Haupert et al., 2012) and thus also the crystal detection. However, the cell selection problem is overcome by monitoring a large and representative fraction of all cells in the culture at the same time, as performed in X-ray powder-diffraction-based approaches. On the other hand, probing all cells at the same time without spatial resolution prevents the elucidation of the growth kinetics of a single crystal, since the total crystalline volume hit by the comparatively large X-ray beam contributes to the diffraction signal. Consequently, the low-background diffraction approach using high-brilliance X-ray beams will not provide more detailed insights into the crystallization process of individual crystals. However, it is able to provide information on the timing of the formation of detectable crystalline structures and a good estimate for overall crystal production inside the living cells, which is important to choose the optimal time point of further diffraction data collection at a synchrotron or XFEL for elucidation of the respective protein structure. Conclusion Detection of intracellular crystals in cell cultures can be a time-consuming and challenging task, particularly if the target protein forms crystalline structures of unknown morphology only in a small fraction of cells. Furthermore, lightmicroscopy-based detection of well ordered structures yields a promising indication, but not a proof, of crystallinity. The presented SAXS-XRPD screening approach has the potential to overcome these major limitations of in cellulo crystallization. Owing to the automated robot-assisted sample handling, the flow-through setup, the short irradiation time and an exceptionally low background scattering of the SAXS beamline setup, this approach allows one within seconds to prove if diffracting crystalline structures of any order and morphology are present in at least a low percentage of cells within a culture. Such information cannot be obtained by other established detection methods in this time frame. Applying light microscopy, a comparable result would usually require several hours of tedious screening. Since the intensity of the X-rays determines the minimum diffractive volume that is required for reliable detection, a further increase in peak brilliance will allow the detection of smaller crystals or even a smaller percentage of crystal-containing cells, e.g. using fourth-generation synchrotrons or XFELs in the future. Highthroughput SAXS-XRPD screening of potentially crystalcontaining samples can be directly linked to subsequent serial diffraction data collection at a macromolecular crystallography beamline to streamline the structure determination. Moreover, since the Bragg peak positions in the 1D scattering curves depend on the unit-cell composition of the protein crystals, this approach also provides the possibility to investigate the impact of environmental conditions, e.g. the cellular compartment, cellular stress or the cell line itself, on the size and the composition of the intracellular protein crystals. 
This information could contribute to more detailed insights into the understanding of the in cellulo crystallization process.
Do anti-amyloid beta protein antibody cross reactivities confound Alzheimer disease research? Background Alzheimer disease (AD) research has focussed mainly on the amyloid beta protein (Aβ). However, many Aβ-and P3-type peptides derived from the amyloid precursor protein (APP) and peptides thought to derive from Aβ catabolism share sequence homology. Additionally, conformations can change dependent on aggregation state and solubility leading to significant uncertainty relating to interpretations of immunoreactivity with antibodies raised against Aβ. We review evidence relating to the reactivities of commonly used antibodies including 6F3D, 6E10 and 4G8 and evaluate their reactivity profiles with respect to AD diagnosis and research. Results Antibody cross-reactivities between Aβ-type, P3-type and Aβ-catabolic peptides confound interpretations of immunoreactivity. More than one antibody is required to adequately characterise Aβ. The relationships between anti-Aβ immunoreactivity, neuropathology and proposed APP cleavages are unclear. Conclusions We find that the concept of Aβ lacks clarity as a specific entity. Anti-Aβ antibody cross-reactivities lead to significant uncertainty in our understanding of the APP proteolytic system and its role in AD with profound implications for current research and therapeutic strategies. Introduction Research into the causes and progression pathways of Alzheimer disease (AD) has focussed primarily on the roles of the amyloid beta protein (Aβ) derived from the amyloid precursor protein (APP) via sequential proteolytic cleavages [1,2]. In summary, there are two main APP cleavage pathways, Fig. 1. The α-pathway involves an initial α-cleavage to release the large extracellular soluble sAPPα leaving the 83 amino acid (aa) residue carboxy terminal fragment (CTF) in the membrane. This is further processed by γ-secretase containing Presenilin (PS) to release the variable length P3 peptide and the APP intracellular domain (AICD). This pathway is thought to be constitutive and α-cleavage precludes processing by the β-secretase BACE1 as it cuts within the Aβ sequence. In competition with α-cleavage and with APP expression as rate limiting [3], β-cleavage releases the large extracellular soluble sAPPβ leaving a 99 aa residue CTF in the membrane that is further processed by the shared sequential γ-secretase to release the variable length Aβ and the AICD. The main fragments expressed are the large sAPPα and sAPPβ domains, the smaller variable length Aβ and P3 fragments and the AICD, all sharing sequence homology to varying degrees with each other and with full length APP. Additional APP cleavages include β'-cleavage by BACE2 [4], δand η-cleavage [5,6] and cleavage by caspase [7]. BACE2 may also be involved in catabolism of Aβ [8]. Evidence relating to Aβ from autosomal dominant genetic mutations in the amyloid precursor protein (APP) and presenilins (PS) in familial AD (FAD) [9,10], coupled with the neuropathological diagnostic value associated with the presence of deposits of Aβ in the brain in both FAD and sporadic AD (SAD) [11,12], has been interpreted in the amyloid cascade hypothesis as showing a causal role for Aβ in disease progression [13,14] and has been updated to reflect the ratios of Aβ (1-42)/ Aβ [14,15] or oligomers [16,17]. However, this interpretation of the evidence relating to Aβ has not been fully accepted and alternative interpretations including the presenilin hypothesis [18,19] and the APP matrix approach [20,21] have been put forward. 
In addition to Aβ40 and Aβ42, the peptides at the main focus of research, there are many soluble [22] and insoluble Aβ-type peptides, including N-terminal extended peptides [23], that have yet to be fully described and accounted for in theoretical and experimental disease models. In addition to different sequences, Aβtype peptides can exist in a variety of aggregation states including monomers, dimers, oligomers and fibrils. Evidence that behaviour profiles differ between the various Aβ-type sequences and aggregation states suggests that some Aβ species, such as Aβ42 or oligomers, may be more important in disease progression than others. Evidence from population studies [24][25][26] suggests that correspondence between clinical dementia status and neuropathological diagnosis blind to clinical dementia status in the older population where most dementia occurs, do not correspond well. The relationships between Aβ, neuropathology and clinical dementia status are not clear. In order to investigate these relationships an understanding of the different presentations of Aβ across the different sequence lengths, aggregation states and neuropathological associations is required. AD research has depended greatly on the use of antibodies. Concerns regarding the interpretation and reliability of antibodies relating to reproducibility of science in general have been previously highlighted [27]. Antibodies have been raised against various Aβ epitopes and these recognise slightly different pathological profiles [28][29][30][31]. Because Aβ-type peptides share sequence homology and conformations to varying degrees, cross reactivity can potentially confound interpretations of immunoreactivity. Here we look at evidence relating to the reactivities of the commonly used antibodies 6F3D, 6E10 and 4G8 immunoreactive with Aβ and ask how the reactivity profiles of commonly used antibodies relate to AD diagnosis and research. Antibody reactivities with peptides from α-, βand γcleavages The epitopes recognised by 6F3D, 6E10 and 4G8 to various forms of Aβ, Fig. 2a and Table 1, are usually interpreted to be sequence specific and relate to proteolytic fragments released following sequential βand γ-cleavages. 6E10 recognises an epitope in the N-terminal region of both Aβ40 and Aβ42. The 6E10 N-terminal epitope is also recognised in Aβ (1-16/17), a fragment that could reflect catabolism of Aβ [3,32], or additional processing of the C99 carboxy terminal membrane bound fragment (CTF) following β-cleavage [33]. The fragment Aβ (1-11/12) detected in soluble fractions [34] and generated following catabolism of full length Aβ by BACE2 [8] is predicted to react with 6E10, Fig. 2b, but this has not been investigated. Fig. 1 APP cleavage pathways. Green: sequential αand γcleavages of the αpathway, red: sequential βand γcleavages of the βpathway, grey: alternative fragments from β' cleavage or shared full length APP and AICD. Other cleavage pathways such as δ and η are not shown Antibody 6F3D recognises an N-terminal epitope present in full length Aβ42, Aβ40 and is predicted to react with Aβ (1-16/17) but unlike 6E10, not Aβ (1-11/ 12). Neither 6F3D nor 6E10 are predicted to react with P3-type peptides, equivalent to Aβ (16/17-40/42) derived from sequential αand γcleavages of APP [1] that lack the amino acid sequence of the epitope. As such 6E10 and 6F3D represent initial β-cleavage but do not inform on C-terminal variability due to carboxypeptidase activities of γ-cleavage [35]. 
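The sequence-level reasoning in this section — an antibody can only be expected to bind a fragment if its epitope survives intact in that fragment — can be written down as a small check. The sketch below encodes peptides and epitopes as residue ranges in Aβ numbering; the epitope boundaries are illustrative placeholders rather than the characterized epitopes of 6E10, 6F3D or 4G8, and the check deliberately ignores the conformation and aggregation-state effects discussed later.

```python
# Peptides expressed as (first, last) residue ranges in Abeta numbering.
peptides = {
    "Abeta(1-42)": (1, 42),
    "Abeta(1-40)": (1, 40),
    "Abeta(1-16)": (1, 16),
    "P3 / Abeta(17-42)": (17, 42),
    "P3 / Abeta(17-40)": (17, 40),
}

# Epitope ranges below are illustrative placeholders, NOT the characterized epitopes;
# real boundaries should come from the antibody datasheets (see Table 1) and can be
# conformation dependent, which this sequence-only check ignores.
antibody_epitopes = {
    "N-terminal antibody (6E10-like)": (3, 8),
    "N-terminal antibody (6F3D-like)": (8, 16),
    "Mid-region antibody (4G8-like)": (18, 23),
}

def predicted_reactive(epitope, peptide):
    """Sequence-only prediction: react only if the epitope lies entirely within the peptide."""
    return peptide[0] <= epitope[0] and epitope[1] <= peptide[1]

for antibody, epitope in antibody_epitopes.items():
    hits = [name for name, rng in peptides.items() if predicted_reactive(epitope, rng)]
    print(f"{antibody}: predicted to react with {', '.join(hits)}")
```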
This may be generally applicable to other antibodies recognising N-terminal epitopes, such as 6C6, which also recognise N-terminal epitopes [34]. Interpretation could be further complicated by reactivities with shorter N-terminal peptides derived from full length Aβ by catabolism or additional processing of membrane bound CTF and shorter C-terminal endings, seen in conditioned media from cell culture [22]. Antibodies specific for C-terminals ending at aa Aβ40 or Aβ42 are traditionally interpreted as representing Aβ. However, antibodies specific for Aβ40 MBC40 and Aβ42 MBC40 were noted to react with shorter N-truncated Aβ peptides including P3 (40) and P3 (42) respectively [29]. Antibodies reactive with either Aβ40 or Aβ42 are also predicted to react with shorter peptides from Aβ catabolism, Fig. 2, though this will depend on whether the exact epitope recognised is still present in the shorter sequence. Characterisation of epitopes recognised by antibodies in general is a widely held concern. While studies using antibodies raised against Aβ40 or Aβ42 will monitor the specificity of C-terminal epitopes or cross reactivity with full length APP [36], very few account for N-terminal variation. Therefore we cannot be certain that any antibody thought to represent Aβ40 or Aβ42 derived from sequential βand γcleavages actually represents full length rather than peptides from other cleavages lacking the N-terminal epitopes. Immunoreactivity with the antibodies recognising fragments ending in either aa40 or aa42 of Aβ should be Fig. 2 Epitopes recognised by commonly used antibodies in various Aβ -type peptides. a fragments associated with the main αand βcleavage pathways, b fragments associated with BACE2 catabolism. Note: Aβ can exist as monomers, dimers, oligomers and fibrils; epitopes may be lost due to conformational change due to aggregation/solubility etc.; antibodies do not react with specific Aβ sequences in all conditions; amino acids of epitopes for MBC40 (Aβ40) and MBC42 (Aβ42) not well described interpreted as representing fragments from γ-cleavage regardless of initial αor βcleavages unless cross reactivity checks prove otherwise. While these epitopes are assumed to be sequence specific, this cannot be guaranteed. Aβ exists in many aggregation states from monomers, dimers, oligomers and fibrils. These conformation changes can potentially lead to changes in the presentation of epitopes and neither 6E10 nor 4G8 were found to react with all samples of Aβ when aggregated under various conditions [39]. Reactivity was found to change depending aggregation, suggesting that epitopes can be revealed or hidden by different conformations and at least two different aggregated conformations may be present depending on specific conditions. This study shows that antibodies reactive with Aβ-type peptides are both sequence and conformation dependent [39]. 4G8 specifically was found not to react with Aβ40 at higher molecular weight oligomers and reacted with high molecular weight Aβ42 only when aggregated under conditions with agitation. Therefore 4G8 immunoreactivity cannot be assumed to visualise "total" Aβ. The use of antibodies in diagnosis and research AD diagnosis and research in the human brain depends on the use of antibodies reactive with Aβ. 
It is essential Raised against Aβ (13-28); epitope not well described Solanezumab [53] that results from studies across the world are comparable and attempts to standardise inter-laboratory comparisons of amyloid pathology [30,40] have found that immunohistochemical approaches are more reliable than silver stain based techniques. Further, because 4G8 shows more Aβ immunoreactive pathology than either 6E10 or 6F3D, it has been recommended as the antibody of choice for diagnostic work to visualise deposits of Aβ [30], perhaps implying that increased immunoreactivity represents increased sensitivity for Aβ. However, if we consider the reactivity of 4G8 with fragments from the wider APP proteolytic system, not all reactivity necessarily represents Aβ. Thal et al. [29] stained sequential sections from a case with extensive Aβ pathology with 6E10, 6F3D, 4G8, MBC40 and MBC42 Table 1. They found strong reactivity of plaques with MBC42 and 4G8 but little reactivity with MBC40 and 6F3D suggesting that the majority of staining was due to N-truncated peptides equivalent to P3 (Aβ17-42) [29]. However, this interpretation of the staining patterns is not straightforward as loss of staining with 6F3D could also reflect a change in aggregation state that hides the 6F3D epitope especially for Aβ42 which may be more prone to aggregation and insolubility or may be lost due to membrane binding [41]. Therefore antibodies to N-terminal epitopes of Aβ, such as 6F3D and 6E10 may not be revealing all Aβ (1-4x). Over 40 soluble Aβ-type peptides are biologically present [22]. Interestingly, the peptide P3 (42), representing the peptide thought to be associated with neuropathology of diffuse senile plaques [29] is not listed in Table 2 in Wang et al 1996 [22] even though the use of 4G8 as a capture antibody is predicted to react with it. This suggests that the more insoluble P3 (42) aggregated in plaques could adopt a significantly different conformation to P3 (40) found in the soluble compartment and this requires clarification. The results obtained by Thal et al. [29,31] are compatible with those using different monoclonal antibodies, BS85 reacting with multiple forms of Aβ, BC05reacting with Aβ42/3 and BA27-reacting with Aβ40 in a different study that did not account for N-terminal variation [28]. This study found similar reactivity profiles for BS85 and BC05, marking multiple cored and fleecy amyloid senile plaques whereas BA27, reactive for Cterminal Aβ40, detected cored senile plaques only. In a follow-up study, Iwatsubo et al. [42] used an antibody raised against the N-terminal of P3 to measure the deposition of P3 and found little reactivity, suggesting that P3 is not involved in neuropathological deposition. However as with Aβ42, this could reflect the different solubilities and aggregation states of P3-type peptides where P3 (40) is seen in the soluble pool of fragments whereas P3 (42) is not [22]. It is important to note that synthetic peptides of soluble P3 (40) were used to select the Nterminal antibody AβN17 (Leu) in this study and although reactivity was noted with P3 (42) in western blot analysis, the aggregation of the synthetic P3 (42) peptide was not considered. As with the reactivity of 6F3D with Aβ40 discussed above, the contributions of aggregation state of P3 (42) and consequent loss of epitope cannot be dismissed. 
Indeed these staining patterns may indicate that the epitopes contained in the N-terminal of P3 are solubility dependent and are lost in aggregated P3 (42), associated with diffuse amyloid deposition and therefore no reactivity in diffuse plaques with antibodies recognising P3 (40) or non-aggregated P3 (42) would be expected. This interpretation is compatible with the scant reactivity of MBC40 and BA27, showing no reactivity for peptides ending with aa40 [29,42]. Consideration must be given to solubility and aggregation state when interpreting antibody reactivities. Study designs using a capture antibody reactive with epitopes within the first 16 amino acids of Aβ to select only Aβ peptides resulting from β-cleavage as an initial step e.g. Moore et al. [3,46] leave any P3 type peptides unrecorded and not accounted for. Where studies use a capture antibody reactive with the N-terminal of Aβ, further characterisation with antibodies detecting C-terminal aa 40 or 42 can be interpreted as representing Aβ40 or Aβ42 from β-cleavage. However, a study using this approach to quantify Aβ40/42 in wet tissues then investigated location using only antibodies reactive with Aβ42 in formalin fixed, paraffin embedded tissues [34] where results have been interpreted as showing both quantity and location of Aβ42. However, this study design is potentially confounded by cross reactivity with P3 (42) in formalin fixed, paraffin embedded tissue that has not been checked. Keeping experimental approaches consistent both within and between studies is a challenge but one that requires urgent attention. Some commercial ELISA kits use BA27 to detect Aβ40 and BC05 to detect Aβ42, however, since both these antibodies are known to also recognise P3 (40) and P3 (42) respectively, we cannot be certain that any results obtained are not confounded by P3. Antibody reactivity profiles with potentially similar peptides from the APP proteolytic system should always be checked. How do we best interpret the available evidence derived from antibody reactivities? Interpretation of the reactivities of antibodies immunoreactive with Aβ-type peptides is not straightforward and is compounded by the lack of systematic definitions of Aβ-type peptides. On the one hand Aβ is often discussed as a homogenous whole, where the different sequence lengths and aggregation states are collapsed under "Aβ" as an umbrella concept. Yet, because the different fragments sequences and aggregation states show discrete behaviours, this umbrella concept may not be useful for more detailed research questions investigating the role (s) of Aβ in disease pathways. Should each possible fragment derived from α-, β-, and γcleavages and Aβ catabolism be experimentally controlled for in a systematic approach? The different behaviours of the Aβ-type and P3-type fragments depending on aggregation state suggest that this may be an important issue that has yet to be fully incorporated in experimental design. This is not straightforward as the contributions of each possible sequence can potentially vary with solubility and aggregation state, certainly increasing experimental costs as each fragment is controlled for. Because 4G8 is increasingly recommended for diagnostic work [30], and because reactivity is interpreted as Aβ (umbrella concept) it is probable that the contributions of P3 (42) to neuropathological classifications have been hidden in current experimental designs and therefore neglected. 
However, not all laboratories use 4G8 and instead use 6E10 or 6F3D, specific for N-terminal epitopes of Aβ that do not detect P3-type fragments. These antibodies visualise qualitatively different aspects of Aβ deposition, i.e. lacking contributions from P3-type fragments [31] potentially confounding comparisons between studies from different laboratories using different antibodies [29,30]. If all laboratories were to use 4G8, this would potentially confound how we understand the deposition of specific Aβ-species as it detects a wide range of fragments, not all necessarily Aβ. This confounding would also be relevant to the use of antibodies reactive with The C-terminal residues from Aβ40 and Aβ42, as N-terminal variation is not detected. The only option to systematically detect specific peptides is to use multiple antibodies reacting with the different epitopes or use a capture antibody relevant to the experimental design, such as 4G8 as in [22] or 6E10 as in [3,22] and then analyse any fragments further with for example, mass spectroscopy. However, it must be born in mind that a single antibody will not capture all possible Aβor P3 type fragments in all aggregations states [39] and this must be explicitly accounted for in any experimental design. The different antigen retrieval methods [47], different profiles of Aβ-type fragments in soluble [22] and insoluble fractions and potential loss of epitopes due to aggregation state [39] add further difficulties in systematically characterising Aβ. A "panel" of antibodies to consistently and reliably characterise Aβ-type, P3-type and catabolic fragments in all aggregation states (monomer, dimer, oligomer and fibril) is not currently possible. Antibody reactivities and their relevance to APP proteolytic pathways and disease Contrary to our current understanding, the immunoreactivity profiles of commonly used antibodies do not correspond directly to APP cleavage pathways [1,2,8], summarised in Fig. 1. Immunoreactivity of antibodies predicted to have a wide reactivity profile such as 4G8 and potentially those that react with C-terminal epitopes representing aa40 or aa42 cannot be interpreted as giving evidence for initial αor βcleavages. Given that antibodies are central to AD research and biomarker development, it is not clear whether the antibodies currently being used to identify C-terminals do indeed reflect Aβ40/42 or whether signals are confounded by P3 (40/42). Additionally, very little account is taken of the different soluble and insoluble compartments, therefore P3 (42), present in neuropathological deposits, may not be present in soluble fractions and can be easily missed if only soluble fractions are investigated. How then do we best approach the search for reliable biomarkers for AD? Various morphologies of Aβ deposits have been noted and these differ in their immunoreactivity profiles [48] how immunoreactivity differences associate with the different pathological morphologies have not been systematically investigated with respect to clinical dementia status. The insoluble fragment P3 (42) may be a major constituent of diffuse amyloid deposition and may be relevant to disease pathways however, current approaches have almost completely neglected any contributions it might have. Cross reactivity of commonly used antibodies between the Aβ, P3 type and catabolic peptides confounds our current understanding and may in part explain why clinical and neuropathological diagnoses of AD do not correspond well in the older population. 
To what extent a lack of understanding of the APP proteolytic system as a whole derives from a misunderstanding of antibody cross reactivities requires careful consideration. Neuropathological characterisation of human brain donations, essential to our understanding of AD, requires re-evaluation. Current immunotherapeutic approaches to target Aβ have used passive humanised antibodies to enhance removal of Aβ from the brain with the aim of slowing or halting the progression of AD [49]. To date these have had little success [49][50][51][52]. Bapineuzumab, Table 1, is based on the monoclonal antibody 3D6 directed towards an N-terminal epitope and Solanezumab, directed at an epitope from the Aβ (13-28) central region [53]. Given the uncertainty surrounding which fragments are responsible for disease progression we highlight here, we have to ask whether antibodies directed only at N-terminal epitopes of Aβ, such as Bapineuzumab, would be expected to change disease course. Following the failure of both Bapineuzumab [51,52] and Solanezumab [50] in phase III clinical trials, refinements to the therapeutic approach call for earlier, perhaps preventative, use of the antibodies during the prodromal phase of AD, i.e. where a high amyloid signal is seen on MRI but before any cognitive change has occurred. However, the failure of these trials suggests that a return to basic science and a reevaluation of our current understanding of the role of Aβ in AD is also warranted. How prodromal AD relates to those in the oldest old who have extensive pathology after death but with intact cognitive function in life remains unclear. Clarification of the physiological roles [54][55][56] of the APP proteolytic system and all its fragments [20,21] in both in disease and normal ageing in the human population is urgently required. The implications arising from the cross reactivity of commonly used antibodies to Aβ are profound. Cross reactivity may be hiding more complex relationships between AD and fragments from sequential α-, βand γcleavages that the current favoured model, the amyloid cascade hypothesis, cannot account for. If P3 is indeed involved in disease progression then a more flexible approach to understanding the relationships between all APP proteolytic fragments may be required and both the presenilin hypothesis [18,19] and the APP matrix approach [20,21] may be better guides to systematically investigate this complex proteolytic system. Conclusions The concept of Aβ lacks clarity in terms of what we mean by Aβ as a specific biological form and this is further confounded by antibody cross-reactivities. The different solubilities and aggregation states of proteolytic fragments from γ-cleavage and their catabolism add further complexity. These cross reactivities, often overlooked, require urgent attention by the AD research community. More than one antibody is required to adequately characterise Aβ. We do not currently have reliable evidence to identify any specific APP proteolytic fragment as causal in AD progression. The correspondence between Aβ immunoreactivity from any specific antibody, neuropathology and proposed APP cleavages is not clear and may in part explain the lack of correspondence between clinical and neuropathological diagnoses of dementia. These cross reactivities question current therapeutic approaches to reduce Aβ via directed immunotherapies, call for a detailed re-analysis of biomarker results and call into question approaches aimed solely at reducing β-cleavage. 
A detailed consideration of anti-Aβ antibody cross reactivities reveals significant uncertainty in our current understanding of the APP proteolytic system and how this relates to disease with profound implications for current research and therapeutic strategies.
Voltage Hysteresis Model for Silicon Electrodes for Lithium Ion Batteries, Including Multi-Step Phase Transformations, Crystallization and Amorphization Silicon has been an attractive alternative to graphite as an anode material in lithium ion batteries (LIBs). The development of better silicon electrodes and optimization of their operating conditions for longer cycle life require a quantitative understanding of the lithiation/delithiation mechanisms of silicon and how they are linked to the electrode behaviors. Herein we present a zero- dimensional mechanistic model of silicon anodes in LIBs. The model, for the fi rst time, considers the multi-step phase transformations, crystallization and amorphization of different lithium-silicon phases during cycling while being able to capture the electrode behaviors under different lithiation depths. Based on the model, a linkage between the underlying reaction processes and electrochemical performance is established. In particular, the two sloping voltage plateaus at low lithiation depth are correlated with two electrochemical phase transformations and the emergence of the single broad plateau at high lithiation depth is correlated with the amorphization of c-Li 15 Si 4 . The model is then used to study the effects of crystallization rate and surface energy barriers, which clari fi es the role of surface energy and particle size in determining the performance behaviors of silicon. The model is a necessary tool for future design and development of high-energy-density, longer-life silicon-based LIBs. of Attribution including the sloping voltage curve with voltage hysteresis at small lithiation depths and the shift to a single distinct voltage plateau on discharge from the initial sloping curve upon deep lithiation. Comparisons show a good agreement between the model and experimental results. The processes of phase transformations, crystallization and amorphization underlying the electrode behaviors are resolved in the model. The model correlates the electrochemical behaviors of silicon with the underlying reaction processes in a quantitative manner. We show that the voltage hysteresis is path-dependent and the asymmetric hysteresis originates from asym- metric reaction pathways. The model is then used to study the effects of crystallization rate and surface energy barriers. The crystallization rate constant k cryst can affect the shape of the crystalline growth curve, and a lower k cryst will delay the appearance of the crystalline phase. The extra potential increase E * induced by surface energy barriers between crystalline and amorphous phases is shown to be the underlying cause of the elevated voltage plateau for silicon electrodes. Even though there are two electrochemical reactions, the differential analysis can only detect one visible voltage peak when E * is large enough. The surface energy barrier also explains qualitatively why smaller silicon particles present a sloping voltage curve even charged to 0 V. The model is a necessary tool for future design and development of high-energy-density, longer-life silicon-based LIBs. 
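Since the abstract refers to differential analysis of the voltage curves, a minimal dQ/dV sketch is given below. The data are a synthetic, purely illustrative delithiation-like curve with a single plateau near 0.4 V; the binning width and the curve shape are assumptions, not values from the study.

```python
import numpy as np

def differential_capacity(voltage, capacity, n_bins=200):
    """Estimate dQ/dV from a measured voltage-capacity curve.

    The curve is first averaged on a uniform voltage grid to suppress noise;
    peaks in |dQ/dV| then mark plateau-like (phase-transformation) reactions.
    """
    edges = np.linspace(voltage.min(), voltage.max(), n_bins + 1)
    idx = np.clip(np.digitize(voltage, edges) - 1, 0, n_bins - 1)
    v_grid, q_grid = [], []
    for i in range(n_bins):
        mask = idx == i
        if mask.any():
            v_grid.append(voltage[mask].mean())
            q_grid.append(capacity[mask].mean())
    v_grid, q_grid = np.array(v_grid), np.array(q_grid)
    return v_grid, np.gradient(q_grid, v_grid)

# Synthetic, purely illustrative delithiation-like curve with one plateau near 0.4 V.
v = np.linspace(0.05, 0.9, 2000)
q = 3000.0 / (1.0 + np.exp((v - 0.4) / 0.02))          # mAh g^-1, placeholder shape
v_grid, dq_dv = differential_capacity(v, q)
print(f"dQ/dV peak located near {v_grid[np.argmax(np.abs(dq_dv))]:.2f} V")
```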
List of symbols
C_z — Molar fraction of amorphous phase z
C_cryst — Molar fraction of c-Li15Si4
E(j) — Equilibrium potential of electrochemical reaction j (V)
E(j),0 — Standard equilibrium potential of reaction j (V)
E* — Extra potential increase induced by surface energy barrier (V)
F — Faraday constant (C mol−1)
ΔG* — Surface energy barrier of amorphization (eV)
i(j) — Current density of reaction j (A m−2)
i(j),0 — Reference current density of reaction j (A m−2)
I — Total current density (A m−2)
J — Nucleation rate (mol s−1)
k_0 — Nucleation rate constant (m s−1)
k_cryst — Crystallization rate constant (s−1)
m_t,Si — Total specific mass of silicon (g m−2)
M_Si — Molar mass of silicon (g mol−1)
n_s* — Concentration of critical nuclei (mol m−3)
Q(j) — Total capacity obtained through reaction j (C m−2)
R — Universal gas constant (J mol−1 K−1)
S — Surface area of nuclei (m2)
T — Temperature (K)
V — Electrode potential (V)
w(j) — Interaction coefficient between neighboring ions in a host lattice
W* — Impingement rate on the nuclei (m3 s−1)
x(j) — Fraction of occupied sites through reaction j
x_cryst — Molar fraction of c-Li15Si4
Δx(j) — Fraction of the maximum inserted lithium ions through reaction j
Greek symbols
δ — Stoichiometry of super-alloyed lithium
η(j) — Overpotential of reaction j (V)
Subscripts
0 — Standard state
z — Amorphous phase index: 0 for a-Si, 1 for a-LixSi, 2 for a-Li15Si4, 3 for a-Li15+δSi4
(j) — Reaction index
Silicon has been an attractive alternative to graphite as an anode material in lithium ion batteries (LIBs) because of its high theoretical specific capacity, abundance in the Earth's crust and environmental benignity. 1-5 Due to its alloying nature, the reaction of lithium with silicon gives a theoretical capacity of 3579 mA h g−1 of silicon, 1 about 10 times higher than that of graphite. However, silicon anodes suffer a large volume change (up to 300%) along with high internal stress during lithiation/delithiation cycles, 1 which further leads to fracture and pulverization and accelerates battery degradation. The development of better silicon electrodes and optimization of their operating conditions for longer cycle life require a quantitative understanding of the lithiation/delithiation mechanisms of silicon and how they are linked to the electrode behaviors. Previous studies have shown that LIBs using silicon anodes exhibit unique electrochemical behaviors. Pure crystalline silicon electrodes were found to have two distinctive voltage profiles between the first and subsequent cycles. During the initial lithiation, a broad flat voltage plateau at ∼0.1 V (vs Li/Li+) was observed. 6,7 In the subsequent cycles, however, the voltage plateau disappeared and the charge (lithiation) curves became sloping. 6 The electrochemical behaviors of silicon electrodes also depend on lithiation depth. 8 The charge and discharge voltage curves were found to be round-shaped, without any distinct voltage plateau, when the lower cut-off voltage was higher than 0.05 V. However, further alloying silicon electrodes to below ∼0.05 V leads to a single wide voltage plateau at about 0.4 V in the de-lithiation process, resulting in an asymmetric voltage hysteresis. The size of silicon particles was also found to affect the electrode behaviors. Using Si/C composite electrodes made of silicon powders, Saint et al.
9 found that electrodes with micron-sized Si (1-10 μm) exhibited flat delithiation voltage curves with a voltage plateau at ∼0.4 V, while those with nanosized Si (10-100 nm) showed sloping curves even when fully lithiated to 0 V. 9 Interestingly, when the Si particle size was reduced to less than 20 nm, even the flat voltage plateau in the first lithiation process became a sloping shape. 10,11 Several efforts have been made in recent years to understand the mechanisms underlying these unique electrode behaviors. Mechanistic studies of the first lithiation of crystalline silicon were performed independently by Limthongkul et al., 12 Chon et al. 13 and Liu et al., 14 respectively using X-ray diffraction (XRD), scanning electron microscopy (SEM) and high-resolution transmission electron microscopy (HRTEM). These independent, complementary studies revealed that the amorphization of crystalline silicon (c-Si) proceeded layer by layer following a "peeling off" mechanism, indicating a two-phase reaction. This can explain the broad flat voltage plateau at ∼0.1 V. In the subsequent cycles, the amorphous intermediate phase a-LixSi was found not to recrystallize to c-Si, 6 which can explain the sloping shape of the voltage curves. In situ XRD measurements were carried out by Li et al. 8 under different lithiation depths. They showed that when the voltage fell to nearly 0 V, the highly lithiated amorphous LixSi phase crystallized rapidly into metastable Li15Si4 at the end of the lithiation process. Therefore, the single wide plateau at ∼0.4 V in the subsequent delithiation curves can be attributed to the two-phase reaction from crystalline Li15Si4 (c-Li15Si4) to amorphous lithium-silicon phases. When the lower cut-off voltage was higher than 0.05 V, the formation of Li15Si4 was avoided and only a solid-solution reaction occurred, 8 thereby giving a sloping voltage curve. Despite many advances in understanding the electrochemical performance and lithiation/delithiation mechanisms of silicon electrodes, existing efforts are scattered and disjointed. There has been no systematic, quantitative research to date that has quantified the phase transformations during cycling and correlated them with the cycling behaviors of silicon electrodes. Macroscopic models have been developed for simulating the charge/discharge curves of silicon-based LIBs, 3,4,15-17 however, none of them has considered the multi-step phase transformations and crystallization/amorphization involved in silicon electrodes. A few studies 18,19 described the lithiation-induced amorphization of crystalline silicon based on first-principles and molecular dynamics simulations, but they can hardly be used for battery cell design and optimization due to their small computational domains. Zhang 11 reviewed the lithiation/delithiation mechanisms of alloy electrode materials for LIBs and suggested that the surface (or interface) energy could play a crucial role in determining the unique behaviors of alloy materials like silicon, but provided no quantitative evidence. In this study, we present a zero-dimensional mechanistic model of silicon anodes in LIBs. The model, for the first time, considers the multi-step phase transformations, crystallization and amorphization of different lithium-silicon phases during cycling while being able to capture the electrode behaviors under different lithiation depths. Based on the model, a linkage between the underlying reaction processes and electrochemical performance is established.
The effects of crystallization rate and surface energy barriers are analyzed, which clarifies the role of surface energy and particle size in determining the performance behaviors of silicon electrodes. The model presented can be easily extended to full-size LIB cells with silicon additives, to account for the mechanical degradation associated with the heterogeneous phase changes in the silicon electrode.
Model Development
Reaction pathways and physical mechanisms.—Reaction pathways when the lower cut-off voltage is above 0.05 V.—When silicon is lithiated above 0.05 V vs Li/Li+, two distinguishable characteristic voltage peaks occur in incremental capacity analyses, 6,20,21 indicating that there exist two major structure transformations during cycling within this voltage range. The details of the two structure transformations have been confirmed in the in situ TEM experiment by Wang et al. 22 The first step is a heterogeneous two-phase transformation from amorphous silicon (a-Si) to amorphous LixSi (a-LixSi), through which a distinct phase boundary was observed in the study of Wang et al. 6 So far, no agreement has been reached on the exact value of x, which is generally considered to be less than 2. 1,8 The second step of lithiation was found to proceed without a visible interface, forming the final amorphous product a-Li15Si4. Based on the above evidence from the literature, as shown in Fig. 1, a two-step reaction mechanism consisting of two reversible electrochemical steps 1 and 2 is proposed for cycling of silicon in the voltage range above 0.05 V. Reaction pathways when the lower cut-off voltage is below 0.05 V.—When the voltage of silicon falls below 0.05 V vs Li/Li+, another, visibly short, voltage plateau appears at the end of charge, 23 accompanied by an abrupt formation of crystalline c-Li15Si4. 1 The rapid appearance of a crystalline phase suggests a homogeneous crystallization from the amorphous composition of Li15Si4. This homogeneous crystallization is analogous to the freezing of super-cooled water, where liquid water can stay completely free of ice for a long period before being instantaneously crystallized to ice. However, the process is too fast to have been captured in any in situ studies. A homogeneous crystallization typically involves a preceding nucleation from the bulk composition and a subsequent grain growth based on the existing nuclei sites. 24 Hence, it is reasonable to describe the process using a two-step mechanism, i.e., an electrochemical step f3 that forms critical nuclei of Li15+δSi4 (δ represents a very small increment of lithium) from the bulk solid solution, followed by a chemical step f4 that grows the nuclei into the metastable crystalline phase c-Li15Si4. Using the super-cooled water analogy, a "super-alloyed" phase a-Li15+δSi4 is assumed to be the product of reaction step f3, 25 which is assumed to have the locally highest energy and an unstable structure. The electrochemical step f3 is driven by overpotentials which push the a-Li15Si4 phase to overcome the surface energy barrier of forming a nucleus. 26 During this process, the original bulk phase of Li15Si4 is expected to grow to its critical size by accommodating more lithium ions, which explains the slight capacity increase during crystallization of Li15Si4. 23 The extra lithium atoms (represented by δ), despite their small amount, are believed to act as a "catalyst" which enables the crystallization process by activating the amorphous Li15Si4.
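The lithiation steps described above can be summarized as a reaction scheme. The stoichiometries written below are a schematic reading of the verbal description only (with the composition x < 2 left open and δ a small increment); they are illustrative, not the balanced equations of the original paper, and the amorphization step b2 is the delithiation step introduced in the following section.

```latex
% Schematic reaction steps (illustrative stoichiometries; x < 2 is not fixed by the text)
\begin{align*}
\text{step 1 (two-phase lithiation):}\quad
  &\mathrm{a\text{-}Si} + x\,\mathrm{Li}^{+} + x\,e^{-} \rightleftharpoons \mathrm{a\text{-}Li}_{x}\mathrm{Si}\\
\text{step 2 (lithiation without a visible interface):}\quad
  &4\,\mathrm{a\text{-}Li}_{x}\mathrm{Si} + (15-4x)\,\mathrm{Li}^{+} + (15-4x)\,e^{-} \rightleftharpoons \mathrm{a\text{-}Li_{15}Si_{4}}\\
\text{step f3 (electrochemical nucleation):}\quad
  &\mathrm{a\text{-}Li_{15}Si_{4}} + \delta\,\mathrm{Li}^{+} + \delta\,e^{-} \longrightarrow \mathrm{a\text{-}Li_{15+\delta}Si_{4}}\\
\text{step f4 (chemical crystallization):}\quad
  &\mathrm{a\text{-}Li_{15+\delta}Si_{4}} \longrightarrow \mathrm{c\text{-}Li_{15}Si_{4}}\;(+\,\delta\,\mathrm{Li})\\
\text{step b2 (amorphization on delithiation):}\quad
  &\mathrm{c\text{-}Li_{15}Si_{4}} \longrightarrow 4\,\mathrm{a\text{-}Li}_{x}\mathrm{Si} + (15-4x)\,\mathrm{Li}^{+} + (15-4x)\,e^{-}
\end{align*}
```

Steps 1, 2, f3 and b2 are the electrochemical reactions referred to by the index j in the equations that follow; f4 is the purely chemical growth step.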
In contrast to the sloping voltage curve in the charge process, the discharge process exhibits a single distinct voltage plateau at ∼0.4 V. 1,7 It was found by Li et al. 1 using in situ XRD that the c-Li15Si4 formed at the end of lithiation transforms back to amorphous lithium-silicon phases through a two-phase reaction during delithiation, giving rise to this plateau. Pathway diagram.—The proposed pathways of electrochemical lithiation/delithiation of silicon at room temperature are summarised in Fig. 1. The lithiation/delithiation process follows a two-step mechanism consisting of two reversible electrochemical steps 1 and 2 when the electrode potential is maintained above 0.05 V, and undergoes the homogeneous crystallization steps f3 and f4 during lithiation and the heterogeneous amorphization step b2 during delithiation when the electrode potential goes below 0.05 V. Electrochemical reactions.—Thermodynamics.—The equilibrium potentials of electrochemical reactions 1, f3 and b2 are calculated from Eq. 1, 27,28 in which E(j) and E(j),0 are respectively the equilibrium potential (V) and standard equilibrium potential (V) of reaction j (j = 1, 2, f3, b2), F is the Faraday constant (F = 96,485 C mol−1), R is the universal gas constant (R = 8.314 J mol−1 K−1), T is the temperature (K) and w(j) is an adjustable parameter describing the interactions between the transferred electric charge and its surroundings in reaction j. It is noted that Eq. 1 is a general description of the equilibrium potential, which accounts for the short-range interactions between neighboring ions (via w(j)) and has been widely used in the literature. A detailed interpretation of the parameter w(j) can be found in the literature. 27,28 The term (Δx(j) − x(j)) represents the remaining vacant sites for lithium through reaction j, where Δx(j) is the fraction of total host sites for lithium ions through reaction j and x(j) is the fraction of the already occupied sites in that reaction. For each electrochemical reaction, the fraction of total host sites for lithium ions, Δx(j), is calculated as the ratio of the maximum capacity through that reaction (Q(j)) to the total capacity when silicon is fully lithiated (Q_tot), and can be further expressed in terms of the compositions of the reactants and products; here M_Si is the molar mass of silicon (M_Si = 28 g mol−1). It is noted that the sum of Δx(j) (j = 1, 2, f3) should be equal to 1. Electrochemical amorphization.—Reaction b2, the heterogeneous amorphization of c-Li15Si4, occurs only on the crystalline surface, where the atoms have higher free energy than the interior ones, thereby following a "peeling off" pattern. Compared to the conversion of a-Li15Si4 to a-LixSi in reaction 2, the amorphization of c-Li15Si4 to a-LixSi must overcome an extra energy barrier ΔG* (eV), where ΔG(b2),0 and ΔG(2),0 respectively denote the standard Gibbs energy changes of reactions b2 and 2. This change in Gibbs free energy leads to a higher equilibrium potential for reaction b2 than for reaction 2, elevated by the extra voltage E* induced by the surface energy barrier ΔG* (the conversion between them involves the elementary charge, e = 1.6 × 10−19 A s). In Eq. 8, the Gibbs free energy changes are all positive since the delithiation process is non-spontaneous. Charge transfer kinetics.—The reaction rate of each electrochemical step 1, 2, f3 and b2 is assumed to follow a simplified Butler-Volmer equation, in which i(j) is the current density (A m−2) of reaction j and i(j),0 is the reference current density (A m−2) of reaction j. i(j) is defined to be positive for charge and negative for discharge.
η(j) is the overpotential (V) of reaction j, expressed as η(j) = V − E(j), where V is the electrode potential (V). The total current density of the silicon electrode, I, is the sum of the partial current densities. Time evolution of species.—Ignoring spatial heterogeneity within the silicon electrode, the rate of change of the molar fraction of inserted lithium for each electrochemical reaction is obtained directly from the corresponding partial current density. Computational implementation and initial conditions.—The model consists of differential equations Eqs. 1, 9-11, 14 and 15 for (4j + 2) unknowns: x(j), x_cryst, i(j), V, η(j) and E(j). The fractions C_z and C_cryst are further determined using Eqs. 16-21 with the values of x(j) and x_cryst solved from the differential equation system. The equation system was solved by the Runge-Kutta method using MATLAB ode23t. The initial conditions for all variables were calculated self-consistently from chosen values of V and x_cryst for charge, and of V for discharge. The partial current densities were initialized as i(1) = I and i(j) = 0 (j = 2, f3) for charge, and as i(2) or i(b2) = I and i(1) = 0 for discharge. The values of the model parameters used for the base-case simulations are summarized in Table I. The lithium/silicon ratio in a-LixSi is fitted to be 1.6 from the experimental data 8 used in this study. The potential increase upon amorphization, E*, is set to 0.15 V, corresponding to a surface energy gap of 0.3225 eV/atom. This has the same order of magnitude as the energy gap between amorphous and crystalline bulk silicon. 29,30 The values of the reference current densities vary dramatically in the literature, 17,31-33 ranging from 1 to 10^9 A m−2. Here, due to the lack of precise experimental data, we assume the reference current densities to be 0.002, 0.008, 0.008 and 0.004 A m−2 for reactions 1, 2, f3 and b2, respectively. Although internal stress can also contribute to the voltage gap between the lithiation and de-lithiation curves of silicon electrodes, 5,31 we only consider the kinetic contribution to the hysteresis in this study. The adjustable parameters w(j) have been reported in the range 0.7-6.0, 20 and we take values of 1.5 and 1.0 for the different reaction steps. All simulations are performed at 298 K.
Results and Discussion
Model-experiment comparisons.—Figure 2 compares the model results with the experimental data reported in the literature. 6 The simulated charge-discharge curves agree well with the measured ones. As can be seen in the figure, the model successfully reproduces the sloping voltage curves of both the charge and discharge processes when the amorphous silicon electrode is cycled above 0.05 V (vs Li/Li+). The voltage curves of the discharge and charge processes appear to be approximately parallel to each other, with a large voltage hysteresis in between caused by sluggish kinetics (small i0). The model is further validated against experimental data for deep lithiation of silicon. 8 It can be seen in Fig. 3a that the model results are consistent with the experimental results, showing the same asymmetric feature, where the charge curve exhibits a sloping shape and the discharge curve has a single voltage plateau. For both scenarios of >0.05 V and <0.05 V, it is noted that the biggest difference between our model and the experimental results occurs at the end of discharge (EOD) and the end of charge (EOC). This can be caused by asymmetric internal stress, which makes the voltage hysteresis larger at the EOD than at the EOC. 5
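Because the displayed equations did not survive extraction here, the following is a minimal numerical sketch of a zero-dimensional model of this kind, restricted for simplicity to the two reversible steps 1 and 2 above 0.05 V. The functional forms (a Frumkin-type equilibrium potential using w(j), a symmetric Butler-Volmer rate law, and state evolution dx(j)/dt = i(j)/Q_tot) and every parameter value and function name below are assumptions made for illustration; they are not the paper's Table I values or its exact Eqs. 1-21.

```python
# Minimal zero-dimensional sketch of galvanostatic lithiation with two
# parallel electrochemical reactions (steps 1 and 2 above 0.05 V).
# All functional forms and parameter values are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

F = 96485.0          # Faraday constant (C/mol)
R = 8.314            # gas constant (J/(mol K))
T = 298.0            # temperature (K)
f = F / (R * T)

# Assumed parameters (NOT the paper's Table I values)
E0   = {1: 0.25, 2: 0.10}      # standard equilibrium potentials (V)
w    = {1: 1.5,  2: 1.0}       # interaction parameters (dimensionless)
dx   = {1: 0.45, 2: 0.55}      # fraction of host sites per reaction, sums to 1
i0   = {1: 0.002, 2: 0.008}    # reference current densities (A/m^2)
Qtot = 3.0e4                   # total areal capacity (C/m^2), illustrative
I_app = Qtot / (100 * 3600.0)  # roughly a C/100 charge rate (A/m^2)

def E_eq(j, x):
    """Assumed Frumkin-type equilibrium potential of reaction j (vs Li/Li+)."""
    x = np.clip(x, 1e-9, dx[j] - 1e-9)
    theta = x / dx[j]                      # local filling fraction of reaction j
    return E0[j] - (1.0 / f) * np.log(theta / (1.0 - theta)) - w[j] * theta / f

def i_bv(j, x, V):
    """Symmetric Butler-Volmer rate law; positive current = lithiation."""
    eta = V - E_eq(j, x)
    return -2.0 * i0[j] * np.sinh(0.5 * f * eta)   # lithiation when V < E_eq

def solve_V(x):
    """Electrode potential V at which the partial currents sum to I_app."""
    g = lambda V: i_bv(1, x[1], V) + i_bv(2, x[2], V) - I_app
    return brentq(g, -0.5, 1.5)

# Explicit time stepping of dx_j/dt = i_j / Qtot during constant-current charge
x = {1: 1e-4, 2: 1e-4}
dt, t = 20.0, 0.0
history = []
while x[1] + x[2] < 0.98 and t < 110 * 3600:
    V = solve_V(x)
    for j in (1, 2):
        x[j] = min(x[j] + dt * i_bv(j, x[j], V) / Qtot, dx[j] - 1e-6)
    t += dt
    history.append((t / 3600.0, x[1] + x[2], V))

for h in history[:: len(history) // 10 or 1]:
    print("t = %5.1f h   x_total = %.3f   V = %.3f V" % h)
```

The scalar current-balance solve at each step plays the role that the coupled system handled by ode23t plays in the original implementation; the sketch is meant only to make the structure of the model concrete, not to reproduce its results.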
The neglect of the stress effect in this study may therefore lead to an underestimation of the voltage hysteresis at the EOD and an overestimation of the voltage hysteresis at the EOC. The predicted composition change during deep cycling is compared with the XRD results in Figs. 3b and 3c. As shown in Fig. 3b, a sudden formation of c-Li15Si4 at a charge voltage of 0.05 V appears in both the model predictions and the experimental measurements. Figure 3c shows that the linear decrease of c-Li15Si4 observed by in situ XRD is also well captured by our model. It is worth mentioning that the asymmetric voltage hysteresis as well as the phase changes of silicon electrodes have never been well described by previous models, which have taken into account various types of overpotential (i.e., charge-transfer, diffusion and ohmic overpotentials) and internal mechanical stresses. 5,34 Charge/discharge behavior.—The model is used to study the charge/discharge behaviors of silicon in detail. Figures 4a and 4b respectively show the charge/discharge curves and the corresponding differential voltage spectroscopy when silicon is lithiated above 0.05 V. The sigmoidal-shaped voltage curve with two sloping plateaus in Fig. 4a, as well as its associated characteristic peaks in Fig. 4b, have been widely reported in experimental studies on silicon electrodes. 35,36 This sigmoidal shape is a typical feature of electrochemical phase changes: the voltage tends to hold during each phase transformation process, thus exhibiting a voltage plateau during charge/discharge. In Fig. 4a, both the charge and discharge curves have two voltage plateaus, which imply two transformation reactions, i.e., reactions 1 and 2. The voltage characteristic peaks in Fig. 4b suggest that the two discharge reactions happen at 0.25 and 0.52 V vs Li/Li+ respectively, while the two charge reactions occur at 0.11 and 0.22 V vs Li/Li+. The variations of the different silicon phases during cycling are further examined in Figs. 4c and 4d. When silicon is charged from D1 to D2 (Fig. 4a), it is seen in Fig. 4c that a-Si declines almost linearly and a-LixSi increases at the same time, and reaction 1 dominates over reaction 2. It is also found that a-Li15Si4 grows more slowly than a-LixSi in the D1-D2 regime, confirming that reaction 2 is the slower step in this regime. When silicon is further charged from D2 to D3, a-LixSi stops growing and transforms to a-Li15Si4 via reaction 2. These two electrochemical reactions proceed in the reverse direction during discharge. It is shown in Fig. 4d that a-Li15Si4 first transforms to a-LixSi, followed by the phase transformation from a-LixSi to a-Si. The charge/discharge behaviors and the corresponding phase transformations for cycling silicon below 0.05 V are studied in Fig. 5. In contrast to Fig. 4a, the lower voltage plateau is elevated and even merges with the higher plateau during the discharge process in Fig. 5a, which displays a wide voltage plateau at ∼0.4 V. This phenomenon is confirmed in Fig. 5b, where P_c2 moves to a higher voltage level and P_c1 becomes unnoticeable.
Figures 5a and 5b confirm that the electrochemical behaviors of silicon electrodes are path-dependent: (a) when the voltage remains higher than 0.05 V, silicon only follows the lithiation steps 1 and 2, leading to two sloping plateaus during both charge and discharge; (b) when the voltage falls below 0.05 V, silicon undergoes an additional crystallization process, thereby showing a distinct flat voltage plateau during de-lithiation. Figure 5c shows the variation of the different silicon phases during charge. It is found that the initial two electrochemical steps are similar to those in Fig. 4c, where a-Si is first lithiated to form a-LixSi before transforming to a-Li15Si4. The nucleation step f3 starts at a normalized capacity of ∼0.5, slowing down the formation of a-Li15Si4 in the second half of the capacity range. At the same time, the critical nuclei phase a-Li15+δSi4 starts to grow until it reaches a fraction of 0.3, after which the fraction of c-Li15Si4 starts to increase exponentially. For the reverse process, Fig. 5d shows that c-Li15Si4 is first amorphized to a-LixSi, which is further delithiated to form a-Si. Compared to the case in Fig. 4d, the fraction of a-Si in the reverse process grows from the very beginning, indicating that the reaction rate of step 1 is comparable to that of step b2 throughout the discharge process. In addition, the maximum fraction of a-LixSi is found to be less than 0.4, which is less than half of that in Fig. 4d. Surprisingly, the depletion of Li15Si4 is delayed from a normalized capacity of 0.4 to 0.15. During the whole discharge process, the silicon electrode is composed of a mixture of a-LixSi and a-Si during amorphization, which implies that there may be no pure intermediate component. Figure 6 shows the electrochemical behaviors of silicon during micro-cycling operation between different voltage limits. The transition from the lower lithiation voltage branch to the upper lithiation voltage branch of silicon electrodes was well identified and explained by Baker et al. 37 in a slow voltage scan. In Fig. 6, the silicon electrode is cycled at C/100, and the current is reversed immediately after the lower voltage limit is reached. In Fig. 6b, during the first cycle, when the voltage falls to 0 V, the silicon electrode undergoes two phase transformation stages I and II, corresponding respectively to reaction steps 1 and 2. After the current is reversed, the voltage curve presents a distinct plateau V, which corresponds to the amorphization step b2. In the subsequent cycles, at lower voltage limits of 0.05 V, 0.15 V and 0.25 V, the lower voltage branch (lithiation branch) follows the same trace as that in the first cycle, while the higher voltage branch becomes sigmoidal. The last three cycles do not involve crystallization and thus exhibit two sloping plateaus in the de-lithiation voltage curves, as shown in Figs. 6b and 6c, where stages III and IV correspond respectively to reaction steps 1 and 2. It is worth mentioning that when the lower voltage limit increases, the first voltage plateau III in the higher voltage trace becomes shorter and can even vanish. This is because reaction step 2 dominates at lower voltages, and the voltage plateau of step 2 becomes less observable as the lower voltage limit increases. It is noted that in Fig. 6c the voltage increases abruptly at the step change of the current.
This implies that the silicon electrode has not yet reached equilibrium when the voltage limit is reached; the abrupt voltage increase is due to the kinetic loss. Effect of crystallization rate.—Figures 7a and 7b respectively show the effect of the crystallization rate constant k_cryst on the growth of c-Li15Si4 and a-Li15+δSi4. It is seen in Fig. 7a that the slower the crystallization, the more abruptly the crystalline phase appears. If k_cryst is very small (<0.0002 s−1), as shown in Fig. 7b, there will be excessive nuclei due to the slow step f4, which leads to an approximately exponential growth of the crystalline silicon fraction. As k_cryst increases to 0.00036 s−1, the reaction rate of step f4 becomes comparable to that of step f3. The growth curve then presents a characteristic s-shaped, or sigmoidal, profile in which the transformation rate is low at the beginning and the end of the process, but fast in between. This s-shaped growth curve is a typical characteristic of homogeneous crystallization. 38 If k_cryst is larger than 0.0007 s−1, c-Li15Si4 grows fast initially until most nuclei are consumed; in this case, the crystallization rate is limited by the nucleation step f3. The envelope line plotted in Fig. 7b is determined by x(3)/Δx(3), which is equal to the total fraction of a-Li15+δSi4 and c-Li15Si4. The intersection points A1-A6 in Fig. 7b correspond to the starting points of the appearance of c-Li15Si4 in Fig. 7a, which tend to occur earlier with increasing k_cryst. Amorphization with different surface energy barriers.—The effects of surface energy barriers are studied in Fig. 8. As shown in Fig. 8a, with increasing E*, the voltage curve during the discharge process changes from a sloping shape to a flat shape. As E* is proportional to the extra surface energy barrier to be overcome for amorphization, a larger E* means a higher surface energy barrier. When the particle size is smaller, more free surfaces of the silicon phases are exposed and the silicon phases are more active on average. This elevated activity means a smaller surface energy barrier per atom to be overcome, corresponding to a smaller E*. Hence, a sloping voltage curve is expected during the discharge process even if the silicon particle is crystallized. This explains the effect of particle size and is consistent with the experimental observations in the literature. Figure 8b indicates that a higher surface energy barrier will shift the voltage peak to a higher level and increase its width. Furthermore, when E* is large enough, the higher voltage peak will vanish. Hence, only one visible voltage characteristic peak can be detected in the differential analysis even though there are two phase transformation reactions. This may lead to a failure to detect the phase transformation step b2. To unveil the underlying reactions, it is necessary to use other, more reliable experimental techniques to complement the results from differential analyses. Figure 8c shows that the decrease of the crystalline phase is decelerated with increasing E* because of the slowing down of amorphization. Correspondingly, as shown in Fig. 8d, the growth of a-LixSi during amorphization also slows down, and the amorphization process can thus last longer.
Conclusions
A zero-dimensional mechanistic voltage model is developed for silicon anodes in LIBs.
The model is able to capture key electrochemical phenomena during cycling of silicon electrodes for the first time, including the sloping voltage curve with voltage hysteresis at small lithiation depths and the shift to a single distinct voltage plateau on discharge from the initial sloping curve upon deep lithiation. Comparisons show good agreement between the model and experimental results. The processes of phase transformations, crystallization and amorphization underlying the electrode behaviors are resolved in the model. The model correlates the electrochemical behaviors of silicon with the underlying reaction processes in a quantitative manner. We show that the voltage hysteresis is path-dependent and that the asymmetric hysteresis originates from asymmetric reaction pathways. The model is then used to study the effects of crystallization rate and surface energy barriers. The crystallization rate constant k_cryst can affect the shape of the crystalline growth curve, and a lower k_cryst will delay the appearance of the crystalline phase. The extra potential increase E* induced by the surface energy barrier between crystalline and amorphous phases is shown to be the underlying cause of the elevated voltage plateau of silicon electrodes. Even though there are two electrochemical reactions, the differential analysis can only detect one visible voltage peak when E* is large enough. The surface energy barrier also explains qualitatively why smaller silicon particles present a sloping voltage curve even when charged to 0 V. The model is a necessary tool for the future design and development of high-energy-density, longer-life silicon-based LIBs.
7,453.6
2020-10-05T00:00:00.000
[ "Materials Science", "Engineering" ]
The status of the French language in British North America: from the Conquest to Confederation The Act of Union established English as the only official language of the legislature and of legislative documents. The Act of Union comprised 62 sections and over 10,000 words, yet the words "English" and "translated" are each mentioned only once. Although the Act seriously undermines the linguistic rights of French Canadians, the word "French" is nowhere to be found. This article examines the linguistic situation of French Canadians after the Conquest of Canada by British forces in 1760. First, a short history of the linguistic situation in Canada will be provided, followed by an analysis of the status of the French language in the various constitutional acts which affected the governance of the territory known as Quebec and the use of the French language there. The focus will be specifically on the period surrounding the Act of Union, since the linguistic situation of French Canadians changed drastically thereafter. Second, the linguistic policies put forward by the British Crown regarding the use of French in Lower Canada will also be examined. What was the status of the French language as a non-official language of the United Province of Canada? What were the linguistic barriers encountered by monolingual French speakers in Lower Canada? One of the important aspects of this article is therefore to examine the historical access (or lack of access) of French Canadians to language facilities when they needed to interact with anglophone government institutions.
Introduction
And be it enacted, That [all documents of] Legislative Council and Legislative Assembly, and of each of them respectively, and all written or printed Proceedings and Reports of Committees of the said Legislative Council and Legislative Assembly respectively, shall be in the English Language only: Provided always, that this Enactment shall not be construed to prevent translated Copies of any such Documents being made, but no such Copy shall be kept among the Records of the Legislative Council or Legislative Assembly, or be deemed in any Case to have the Force of an original Record. (Act of Union, 1840, Section 41. Emphasis added.)
This article examines the linguistic situation of French Canadians after the British Conquest of 1760 and the subsequent Treaty of Paris in 1763. From the theoretical and methodological standpoint of descriptive studies in translation history, I will first provide a short history of the linguistic situation in Canada, followed by a presentation and analysis of the status of the French language in all the constitutional acts which affected the governance of the territory known as Quebec and the use of the French language there. I will focus more specifically on the period surrounding the Act of Union, since the linguistic situation of French Canadians changed drastically thereafter.
The main purpose of the Act of Union, enacted in 1840, was to reverse the effect of the 1791 Constitutional Act, which had split the colonial Province of Quebec into the colonies of Upper and Lower Canada (so named for their relative positions along the St Lawrence River, upstream or downstream of its junction with the Ottawa River). The Act formed the new Province of Canada, united under a single assembly and administratively divided into Canada West (pre-union Upper Canada, post-confederation Ontario) and Canada East (pre-union Lower Canada, post-confederation Quebec). Thus, in this article, the historical names Lower Canada and Canada East will both be used for this francophone area according to the period (i.e. pre- or post-Union) under discussion. Nevertheless, if necessary, and to avoid historical confusion, the term "French Canada" will also be used. More importantly for my analysis, however, this was the first time that the use of French was alluded to, implicitly, in a constitutional text. As stated in the above excerpt, the Act of Union established English as the only official language of the legislature and of legislative documents. It comprised 62 sections and over 10,000 words, yet the words "English" and "translated" are each mentioned only once. Although the Act of Union seriously undermined the linguistic rights of French Canadians, the word "French" is nowhere to be found.
The status of French after the British Conquest
Let us begin by providing a short history of the linguistic situation in Canada. Historians consider that the establishment of Samuel de Champlain's permanent colony in Quebec City, in the early seventeenth century, marks the beginning of the history of French in North America. Around the same time, anglophone religious groups, the Puritans, settled farther south during the 1620s. Thus, French and English colonies sprang up almost simultaneously in North America. The Conquest of New France by British forces seriously threatened the survival of French in the colonies. Furthermore, following the American Revolutionary War (1775-1783), Loyalists (Americans loyal to the British Crown) settled in Canada, thereby increasing its English-speaking population. After the Conquest, the anglicization of Canada appeared inevitable. This, however, proved not to be the case at all, as newspapers and the legal system, in particular, would be bilingual. Moreover, French Canadians demanded the right to use the French language.1 From 1760 to 1764, New France was governed by the British Army. As Horguelin notes, the articles of Capitulation of Montreal and Quebec make no mention of language; Vaudreuil and Lévis do not address the protection of the French language. Nevertheless, French almost had official status under the military government; after all, one must govern a nation in a language its people actually understand. This was simply how things were, since no directives had been included in the Capitulations or ordered by London (Horguelin, 1977, pp. 15-16). Even though French Canadians now faced a dimmer future, a new career was available to them: that of translator. The governors of Montreal, Quebec and Trois-Rivières appointed "secretary-translators", whose job was to translate orders and proclamations into French (Delisle, 2011, p. 363). In this context, bilingualism and translation progressively took root in the fields of official proclamations, justice and commerce (Horguelin, 1977, p. 16).
In 1764, military rule was replaced by civil government. At the same time, the parallel drafting of orders was replaced by translation in the stricter sense of the term. For Horguelin, this resulted in a drop in the quality of the French texts produced (1977, pp. 19-20). Those translations were included in The Quebec Gazette/La Gazette de Québec, a bilingual journal founded in 1764 (Delisle, 2011, pp. 363-364). The Gazette's translators had no experience and sadly produced French texts of questionable quality (Horguelin, 1977, p. 20). In 1767, Guy Carleton succeeded James Murray as governor of the Province of Quebec. Carleton was much more open than his predecessor to the linguistic needs of the francophone majority. Indeed, the following year he appointed François Joseph Cugnet as official translator (for this role, Cugnet received five shillings sterling per day). In 1789, he was succeeded by his son Jacques-François Cugnet. Later, Xavier de Lanaudière, Philippe Aubert de Gaspé and Edward Bowen occupied the position of official translator (Delisle, 2011, p. 364). Moreover, from 1777 to 1786, an official interpreter was assigned to the provincial courts (Delisle, 1987, p. 8). Historically, under British rule, the inhabitants of the former New France saw their religion and language protected under the law by virtue of the 1774 Quebec Act. The Quebec Act explicitly gave French Canadians freedom of religion, permitting them to practise their Catholic faith: That his Majesty's Subjects, professing the Religion of the Church of Rome of and in the said Province of Quebec, may have, hold, and enjoy, the free Exercise of the Religion of the Church of Rome […] and that the Clergy of the said Church may hold, receive, and enjoy, their accustomed Dues and Rights, with respect to such Persons only as shall profess the said Religion. (Quebec Act, 1774, section 5.) This provision implicitly protected the use of the French language because Catholicism was the religion of the French-speaking population. In this way, the provision unofficially recognized francophone religious life, as members of the clergy spoke French and also taught in that language. Moreover, the Quebec Act restored French civil law, which allowed for the use of French in the courts for civil matters (ibid., section 8). The restoration of French civil law would greatly contribute to the development of the petite bourgeoisie, and professionals such as law clerks, lawyers, notaries and judges flourished. The use of the French language was recognized, albeit implicitly, for the first time in the Constitutional Act of 1791. As noted above, one of the Act's major provisions was the splitting in two of the Province of Quebec, which led to the creation of Lower and Upper Canada, separated by the Ottawa River. Sections 24 and 29 stated that voters or members of the Legislative Council or Assembly of Lower Canada would be allowed to take an oath in either English or French (Constitutional Act, 1791). We may therefore conclude that the Constitutional Act implicitly recognized that the French language was already being used in the Legislative Assembly and Legislative Council of Lower Canada. Sections 24 and 35 reiterated that the people of Lower Canada could practise the Catholic faith and be subject to civil law, tacitly guaranteeing the use of French in church and in the civil courts.
In 1793, the Legislative Assembly of Lower Canada voted on a resolution in support of the French translation of laws. Though passed, it was never actually implemented, even though a translator was appointed (Delisle, 1987, p. 9). Nevertheless, the situation progressed positively until, with the assignment of two translators to the Legislative Assembly in 1809, the laws of the colony were eventually translated into French (Delisle, 2011, p. 364). As Jacques Gouin explains, translation in Canada from 1791 to 1812 was mainly entrusted to the former seigneurial elites.2 However, he notes that from 1789 up until 1850 the quality of translation steadily diminished, in his view because translations were increasingly assigned to British nationals (Gouin, 1977, p. 29). As we have seen, after the surrender of New France to British forces in 1760, no British law officially recognized French as the language of the inhabitants of the Province of Quebec (Lower Canada). While the Quebec Act of 1774 guaranteed freedom of religion and upheld the civil code, thereby allowing the use of French in churches and in the courts, French remained a language without official status. Against this backdrop, French Canadians and English Canadians inhabited separate worlds. The tensions between the two groups ran deep, and there would be little contact between them before the nineteenth century. As a result of the Conquest, the economy fell into the hands of the English, and the industrialization of Quebec was carried out by large British companies. English thus became the language of the economy and of trade (Corbeil, 1974, pp. 5-7). The two societies evolved separately: "From this point forward, contacts and alliances between the two groups are not only rare but also tense; it is the creation of the 'two solitudes'." (Plourde, 2000, p. 56, our translation).
The Patriotes Rebellions and the 92 Resolutions
The rise of British power in Quebec, through immigration and industrialization, served as a catalyst for nationalist discourse and the emergence of organizations such as the Société Saint-Jean-Baptiste and the Parti Patriote. After the War of 1812, the elected Assembly of Lower Canada was dominated by representatives of the French-Canadian middle class. The emergence of this new professional elite led to the development of a national consciousness within the francophone population. Elected to the Assembly in 1815, Louis-Joseph Papineau became the leader of the Parti Canadien, which would become the Parti Patriote in 1826. The Parti Canadien sought greater independence from the Church, especially with regard to education, and from the British Government. Papineau demanded the right to spend the revenue raised in Lower Canada and challenged the authority of an appointed Legislative Council. In short, the Patriotes were seeking the sovereignty of the Legislative Assembly. They also fought to safeguard the French language (Dumont, 1993, pp. 188-9). During the 1820s, the demands of the Assembly of Lower Canada were met with resistance from the Governor General, the Earl of Dalhousie. The situation continued to deteriorate until the Rebellions (Buckner, 2015, par. 3-5), despite the appointment of conciliatory governors.
The French-Canadian ethnic majority began to be undermined in the 1830s by the demographic increase of the English-speaking population of British origin.A wave of immigration brought epidemics, such as cholera, to Lower Canada, creating fear and xenophobia within the francophone population.Sketched by Papineau, drafted by Augustin-Nobert Morin and presented by Elzéar Bédard in February 1834, the 92 Resolutions of the Parti Patriote embodied a point of no return in the political destiny of the British colony of Lower Canada.In essence, the resolutions constituted a long list of demands for political reform, the main one being that for Responsible Government (Lamonde, 2000, pp. 122-3).Resolutions 51-55 concerned the defence of the rights and language of the French Canadian people, and denounced the lack of language planning.Resolution 52 clearly stated that because of their use of the French language, French Canadians had been not only marginalized, but also ridiculed and rendered politically inferior.Papineau claimed that they were proud of their French origin, which formed the basis of the civil and ecclesiastical laws of Canada: Resolved, That since a circumstance, which did not depend upon choice of the majority of the people, their French origin and their use of the French language, has been made by the colonial authorities a pretext for abuse, for exclusion, for political inferiority, for a separation of the rights and interests; this House now appeals to the justice of His Majesty's Government and of Parliament, and to the honour of the people of England; that the majority of the inhabitants of this country are in nowise disposed to repudiate any one of the advantages they derive from their origin and from their descent from the French nation, which, with regard to the progress of which it has been the cause in civilization, in the sciences, in letters, and the arts, has never been behind the British nation, and is now the worthy rival of the latter in the advancement of the cause of liberty and of the science of Government; from which this country derives the greater portion of its civil and ecclesiastical law, and of its scholastic and charitable institutions, and of the religion, language, habits, manners and customs of the great majority of its inhabitants.(Papineau 1834, résolution 52;English Translation: Kennedy, 1930, pp. 280-281) A few points are key in this resolution.Papineau and his followers stated that French Canadians did not choose to be conquered by British forces.He also stated that his people were subject to discrimination because of a condition beyond their control: their French language and culture.He called upon the British Crown and Parliament to rectify the situation.French Canadians, he insisted, would under no circumstance turn their backs on their French culture and the French language.The French language should be recognized, he urged, along with a certain francophone way of life, including schools, charitable institutions, religion, habits and beliefs of the French-Canadian population.However, as Yvan Lamonde explains, Papineau and other public figures of Lower Canada did not cling to their relationship with France.They wished to protect their language and their traditions against a perceived English political antagonist (Lamonde, 1997, p. 
10).Papineau had a strong attachment to the French language and personally fought for improvement of the justice system and for the use of French in courts of law.He believed that it was not necessary to stop speaking French in order to know and love the Constitution (ibid., p. 30).In other words, for Papineau the French language was an integral part of the Lower Canadian identity. In March 1834, the resolutions were sent to London, where they were ignored for three years.During this time, the political situation in Lower Canada became increasingly tense as the Legislative Assembly (led by the Parti Patriote) paralyzed the colonial government.The consent of the Assembly was required for the use of public funds.Furthermore, clashes between extremists within both the Parti Patriote and the British Party exposed the profound ethnic divisions.In March 1837, the British Parliament published its official response to the Patriotes' 92 Resolutions, in the form of 10 Resolutions drafted by British Colonial Secretary Lord John Russell. In his Resolutions, Lord Russell clearly rejected the transformation of the Legislative Council into an elected body (one of Parti Patriote's key demands).There was no mention whatsoever of the linguistic rights of the French Canadians.Once again, this issue was simply overlooked (Russell, 1837).In the spring of 1837, as the Legislative Assembly was no longer in session, the Parti Patriote organized public assemblies and protest rallies across Lower Canada.The public protests were banned by Governor Gosford and the political climate deteriorated rapidly throughout the fall.French-Canadian patriotic groups, such as the Fils de la Liberté, clashed with British troops, mainly in the countryside. On November 16, several Patriotes leaders were arrested by British troops.. Several hundred rebels were killed or wounded during the fighting, and many more were captured by British forces, while Papineau and other important figures of the Patriotes were forced to flee to the United States.The 1837 rebellions of Lower Canada would become ingrained in the collective imagination of French Canada (and later Quebec). The Durham Report Following the failure of the 1837 Patriotes Rebellions, the linguistic situation of French Canadians deteriorated considerably (Dumont, 1993, p. 205).The French-Canadian political elite was not only defeated, but had also lost all credibility.English increasingly gained ground: British immigration grew in Lower Canada, and Montreal became an essentially anglophone city.The Constitution of 1791 was suspended on March 27, 1838, and a Special Council put in place.In the Montreal district, habeas corpus was suspended from April to August 1838.The British statesman, John George Lambton, first Earl of Durham, arrived in Quebec City in April, staying until November.His amnesty measures toward prisoners angered London and ultimately led to his resignation. In 1839, Lord Durham, who considered French Canadians to be a people with neither a history nor a literature, tabled the so-called 'Durham Report', in which he advocated their assimilation (Biron et al., 2007, p. 57).To his mind, the English population was decidedly superior to the French Canadians whom he depicted as: "An utterly uneducated and singularly inert population, implicitly obeying leaders who ruled them by the influence of a blind confidence and narrow national prejudices […]" (Durham, 1839, p. 
11).His view of the colony's British population, on the other hand, was completely different and much more positive overall: I have found the main body of the English population, consisting of hardy farmers and humble mechanics, composing a very independent, not very manageable, and sometimes, a rather turbulent democracy.Although constantly professing a somewhat extravagant loyalty and high prerogative doctrines, I found them very determined on maintaining, in their own persons, a great respect for popular rights, and singularly ready to enforce their wishes by the strongest means of constitutional pressure on the government.(ibid., p.11) Durham went much further, however, qualifying the French-Canadian farmers or the habitants, as ignorant and illiterate: "no means of instruction have ever been provided for them, and they are almost universally destitute of the qualifications even of reading and writing" (ibid., p. 13).He believed that the French Canadian majority's lack of education had made them ungovernable and was responsible for the political unrest which climaxed with the Patriotes Rebellions. Durham seemed very conscious of the social differences between the French Canadians and the English in Lower Canada.In his view, English and French speakers were not only educated separately, but their respective languages led to different ways of thinking which inhibited any effort towards mutual comprehension.The differences were so profound that they could easily be perceived in the press, insofar as articles were written with the goal of being incomprehensible to the other group.More importantly, he stated that there was very little contact between the two peoples at school, in business and in the social sphere.He highlighted the fact that the two linguistic groups evolved along parallel paths, their only meeting ground being the jury box, and even then never by choice (ibid., pp. 18-19). A few pages later, Durham criticized the course of action taken by the British Government.The creation of two provinces, Upper and Lower Canada, was a mistake.In other words, allowing Lower Canada to be a French community in which the French Canadians kept their language and institutions was unwise, especially when one considered that London also encouraged English emigration to the province.Moreover, French civil law and the "legal provision for the Catholic clergy" were limited to the French portions of Lower Canada (ibid., p. 30).In other words, Lower Canada was not the French-Canadian entity it was initially designed to be.Likewise, according to Durham, constant contact with the English population which settled in the Townships was the cause of conflict, as French Canadians experienced jealousy and animosity towards a people who were clearly superior to them.Durham believed that the English population would soon be greater than the French-Canadian population, even in Lower Canada, and that the former was already superior in "knowledge, energy, enterprise and wealth."He felt, therefore, that it would be a mistake to try to preserve a French-Canadian identity (ibid., p. 31). 
In the final analysis, Lord Durham had a very demeaning view of French Canadians, their language and their institutions.His attitude was decidedly colonialist and, as stated above, he believed that at the root of the problem lay the fact that French Canadians had up to this point preserved their uniqueand distinctly Frenchidentity: There can hardly be conceived a nationality more destitute of all that can invigorate and elevate a people than that which is exhibited by the descendant of the French in Lower Canada, owing to their retaining their peculiar language and manners.They are a people with no history, and no literature.(ibid., pp.126-7, our italics) Durham's solution was as simple as it was shocking: the assimilation of the French Canadians.Although he was aware that assimilation would not happen overnight, he was convinced that everything had to be put in place for it to happen; "that in any plan which may be adopted for the future management of Lower Canada, the first object ought to be that of making it an English Province" (ibid., p. 127).In order to do so, he placed all governing power in the hands of the English-speaking population: "Lower Canada must be governed now, as it must be hereafter, by an English population."Durham's solution was what he called a "federal union" or the Union of Upper and Lower Canada, because, once combined, the English-speaking population of both Canadas would outnumber the French Canadians and enable governance by an English majority: "I believe that tranquility can only be restored, by subjecting the Province to the vigorous rule of an English majority: and that the only efficacious Government would be that formed by a legislative union" (ibid., p. 131). As we have seen, Lord Durham was a British colonialist who strongly believed in the superiority of the English population over the French Canadians.In the end, since French Canadians were "a people with no history, and no literature," their best option was to adopt the English language and way of life. The union of Upper and Lower Canada: English as the sole language The Act of Union was passed by the British Government in July 1840 and proclaimed in February of the following year.As mentioned above, the Act of Union unified Upper and Lower Canada under one government, thereby creating the Province of Canada.Under the Act of Union, London did not grant responsible government (i.e.representation by population, the number of representatives being proportional to the population), as each former province had 42 representatives in the new entity's unified Legislative Assembly.This was unfair to Canada East, whose population was larger than that of Canada West.In concrete terms, it meant that the representatives of Canada West could form alliances with the Anglophone representatives of Canada East and therefore undermine the francophone population. 
The desire to assimilate the French-speaking population was evident. Significantly, this was the first time the use of French was officially banned in a constitutional document. The Act of Union established English as the only official language of the legislature and of legislative documents. It also made it impossible for francophones to protect their language and their institutions in the Assembly, given that the English held the majority of the votes. Not surprisingly, opposition to the Union was universal within the French-Canadian population. As Étienne Parent wrote in Le Canadien on January 27, 1840: "[The union's] goal is nothing short of stripping us of what is dearest to us: our language, our customs, our rights, in other words, our nationality." (Our translation.) The francophone political class was very conscious of the fact that the Union could mean the end of French-Canadian identity and lead to the disappearance of the French language. Étienne Parent reacted swiftly and tabled a bill in 1841: An Act to provide for the translation into the French language of the Laws of this Province, and for other purposes connected therewith. This bill demanded the French translation of all the laws of the new Canadian Parliament as well as all the laws concerning Canada emanating from the British Parliament. The bill was adopted on September 18, 1841. The Catholic clergy was also fiercely opposed to the Union, fearing that it would not only "anglicize" French Canadians but, more importantly, "decatholicize" them and erode the Church's power (Lamonde, 2000, p. 285). The clergy's opposition to the Rebellions had resulted in London giving legal status to the Catholic Church in 1839, which allowed it to invest in and possess goods without danger of confiscation. The clergy took advantage of the situation to tighten its grip on education, the press and public welfare (Lemire and Saint-Jacques, 1999, p. XV). This was the birth of ultramontanism. While one may well argue for the merits of secular education, in Canada East ultramontanism had one undeniably positive effect: education was to be provided in French. In the end, the control the Catholic Church managed to exercise over the French-Canadian population probably saved their language. Schools, hospitals and charitable institutions were all to be run by the clergy. The Church also had its own newspapers and dictated what could and could not be read, going so far as to produce a recommended reading list. From the 1840s onwards, libraries were threatened by ecclesiastical censorship, which, among other things, tried to prevent a project to build a public library in Montreal (Robert, 1989, p. 107). The French language was saved, but French-Canadian identity was radically altered as a result of the clergy's strong hold on society. The other factor that would save the French language in Canada was the friendship between Louis-Hippolyte Lafontaine and Robert Baldwin. Lafontaine and Baldwin formed a government in 1842 and again in 1848. Their leadership had profound effects on public administration and the legal system, and they are remembered as the architects of responsible government (Colombo, 2011, p.
52). In September 1842, Lafontaine accepted the nomination for Attorney General. He addressed the members of the Assembly in French, even though until then all debates had taken place in English only: I refuse to submit myself to speaking the English language […] Even if my knowledge of the English language was as familiar as my knowledge of the French language, I would still make this address in the language of my French Canadian compatriots, if only to solemnly protest against this cruel injustice of this part of the Act of Union which forbids the mother tongue of half of the population of Canada. I owe it to my compatriots, I owe it to myself… If we must succumb, we will succumb, but we will command respect. (Lafontaine, cited in Groulx, 1960, p. 190. Our translation)

Lafontaine therefore put his foot down: he would speak French in the Assembly even if it might displease certain members. To summarize, he firmly believed that it was his duty as a French Canadian to use his mother tongue. It was a matter of respect: self-respect, but also demanding the respect of his English neighbours. In this context, French was once again used in the Assembly, even though it had no official status.

In December 1844, Papineau, who had just returned from exile, announced his intention to demand the revocation of section 41 of the Act of Union, the infamous provision that had made English the only official language of Canada. Although Papineau was a controversial political figure, his actions were met with support from the other members of Parliament. The final text was adopted on February 21, 1845 by the unified Legislative Assembly, approved by the Legislative Council, and sent to London in March. On January 18, 1848, Lord Elgin (Governor of Canada) delivered the Speech from the Throne in English and in French, in which he announced that the British Parliament had passed a law revoking Section 41 of the Act of Union (Plourde, 2000, p. 70). It is important to stress, however, that section 41 was not modified in order to recognize French as an official language of Canada. The section was simply revoked, meaning that it was erased from the law. Once again, the French language was left in a constitutional vacuum. This led to the reinstatement of parliamentary bilingualism, a return to the non-legislative status of both the French and the English languages that had been in place from the Conquest (1763) until the Act of Union (1840).

Starting in the 1850s, translation saw a revival. In 1854, Antoine Gérin-Lajoie suggested a reorganization of translation within the Legislative Assembly. Translation was divided into three sections: 1) laws, 2) documents, 3) votes and proceedings (Delisle, 1987, pp. 10-11, and 2011, p. 364). The Legislative Assembly's translation bureau was made up of seven people. Although the team itself was admittedly quite small, Gérin-Lajoie worked towards the recognition of translation generally. François-Xavier Garneau, best known for his History of Canada (1852), was a translator at the Legislative Assembly (Gouin, 1977, pp. 30-31).
The 1867 British North America Act: French appears in a legal text

In 1867, the British government passed the British North America Act, subsequently known as the Constitution Act 1867, which created the Canadian Confederation by bringing together the British colonies of Nova Scotia, New Brunswick and the Province of Canada (the latter comprised of Canada West and Canada East, thenceforward known respectively as Ontario and Quebec). It was the first time French was recognized as an official language of Canada and Quebec. In concrete terms, this meant that members of the Legislature would have the right to use either French or English in the Parliament of Canada and in the Legislative Assembly of Quebec. Moreover, both languages could be used in cases brought before the federal courts of Canada and all the courts of Quebec. The entire linguistic debate that had been going on for a century was encapsulated in one section of the Constitution Act 1867:

133. Either the English or the French Language may be used by any Person in the Debates of the Houses of the Parliament of Canada and of the Houses of the Legislature of Quebec; and both those Languages shall be used in the respective Records and Journals of those Houses; and either of those Languages may be used by any Person or in any Pleading or Process in or issuing from any Court of Canada established under this Act, and in or from all or any of the Courts of Quebec. The Acts of the Parliament of Canada and of the Legislature of Quebec shall be printed and published in both those Languages. (Constitution Act 1867, section 133)

This was the first step towards official bilingualism as we know it today in Canada. The significance of section 133 lies in the fact that it contains the first occurrence of the word "French" in a constitutional document, thereby according official recognition to the French language for the first time since the 1763 British Conquest. Nevertheless, it would be another 100 years before the enactment of the Official Languages Act (1969).

The place of translation in constitutional acts and orders

In the above section, I summarized the status of the French language in British North America, looking periodically at the translation practices that emerged as a result of the changing political and linguistic context. I should now like to focus briefly on the place of translation in the Constitutional Acts and Orders presented in this article. Indeed, of all the documents examined above (Quebec Act, Act of Union, Constitution Act, 1867, Official Languages Act, Ninety-Two Resolutions of the Legislative Assembly of Lower Canada, Report on the Affairs of British North America), only the Act of Union references translation. The word "translated" appears once, in section 41: Provided always, that this Enactment shall not be construed to prevent translated Copies of any such Documents being made, but no such Copy shall be kept among the Records of the Legislative Council or Legislative Assembly, or be deemed in any Case to have the Force of an original Record. (Act of Union, 1840, section 41. Emphasis added).

The Act of Union provides readers with insight into the translation practices used for texts produced by the Assembly and the Legislative Council. Clearly, through their non-official status, translated texts were perceived as derivative and inferior. Moreover, because English had sole official status, the unspecified default direction of translation was into French; that is, the language's existence had only implicit recognition. The inferior status of the French language and of French translations is made clear when it is stated in a constitutional text that translations would not be kept or considered official.
Translation and the English criminal justice system

In the territory of the province of Quebec, the criminal justice system is a key example of how the English and the French languages and their respective translations interacted. Indeed, after the Conquest of 1760 and the implementation of English common law, the criminal justice system resorted exclusively to the English language, which did result in more translation. Justices of the peace were the basis of the new judicial system. For the most part, justices of the peace were not educated in law. Thus, in order to carry out their duties, the magistrates of Lower Canada referred to handbooks such as The Justice of Peace and Parish Officer by Richard Burn (Fyson, 2006, pp. 123-4). Burn's handbook was only available in English, which was a major obstacle for many of the justices of the peace in Lower Canada. As a result, Joseph-François Perrault undertook the translation, with the aim of disseminating it through subscription: "I translated from 'Burns Justice,' the chapters which are most needed by my fellow citizens to perform their duties as magistrates, jurors and constables […]" (Casgrain, 1898, p. 54, our translation). However, despite the importance of publishing French versions of available English legal books and a rather large subscription base, Perrault only produced a partial translation, published in 1789 (Fyson, 2006, p. 124; Casgrain, 1898, p. 162).

According to Fyson, the translation, which included the powers and duties of justices of the peace, as well as general procedures for arrest warrants and admissions, but excluded infraction descriptions and comments, was very useful, and he suggests that owning a copy influenced the competence of justices of the peace (Fyson, 2006, p. 124). Perrault also proposed translations for many English legal terms in his 1814 book, Questions et réponses, which have subsequently been described as follows: His suggestions were sometimes perfectly sound, yet at other times quite awkward. "Indictement," "assault et batterie," "nuisance," "offense," "quartiers généraux de la paix," "affidavit," and "termes de la cour" were not his most clever findings, but these expressions lasted; they are found not only in legal literature and the language of lawyers, but in legal texts, over the past century. (Morel, 1976, p. 115-6, my translation).

Generally speaking, it was not difficult to access legislative texts in Lower Canada before the Act of Union: acts and colonial orders were published in English and French in the Gazette du Québec (Fyson, 2006, pp. 124-5). According to Morel, however, monolingual French Canadians saw the criminal justice system as a closed-off world until Jacques Crémazie's translation Les lois criminelles anglaises, traduites et compilées de Blackstone, Chitty, Russel et autres criminalistes anglais et telles que suivies en Canada was published in 1842. Despite the efforts of Perrault and Crémazie, case law, the cornerstone of English criminal law, was only available in English (Morel, 1976, p. 115).

With regard to Francophones taking the bench, Fyson clearly states that bilingualism was ensured before nominating a French-Canadian justice of the peace. G. W. Allsopp, a businessman, seigneur and politician who was appointed justice of the peace in 1794, stressed the importance of speaking French: "altho' the English language is a desirable acquirement the French is the most necessary in the country parishes" (Allsopp, cited in Fyson, 2006, p.
89). In fact, francophone justices of the peace were particularly active: in Montreal, they represented between 40% and 50% of all justices of the peace and were responsible for an even greater proportion of lawful processes (Fyson, 2006, p. 114).

That said, English continued to be the language of the criminal justice system, which was perceived by many inhabitants to be inaccessible to the average French Canadian (by contrast, most francophone law professionals worked in the civil courts). However, Fyson refines this perception, since the Court of King's Bench did accommodate francophones to some degree. Thus, while most documents and procedures were in English, francophone witnesses could provide testimony and be cross-examined in French, although indictments were always written in English. Furthermore, a large number of justices of the peace used French if it was the language spoken by the parties, particularly in rural areas; however, not all did so, and many opposed its use in the criminal justice system, particularly during the very tense 1820s and 1830s (ibid., pp. 249-53). Accordingly, the best-case scenario for French Canadians who had to face the criminal justice system was that their case might be tried in French.

In 1849, a law was passed stipulating that English was the only language of the Court of King's Bench, rendering the use of English or French at the justice's discretion no longer applicable (Morel, 1976, p. 118). A worst-case scenario therefore became the norm for French Canadians facing the justice system: though they might still hope to address the court in French, post-1849 they would likely be unable to understand the proceedings, because all matters were tried in English. The lack of translation and interpretation resulted in an inaccessible criminal justice system for French Canadians; it also confirmed the inferior status of French and francophones.

Conclusion

After the Conquest, no British document or law officially recognized the French language as the language of the inhabitants of the Province of Quebec (Lower Canada), the territory we now call Quebec. However, as we have seen, the French language lived on in the territory. Moreover, the coexistence of the English and French languages enabled the birth of a new profession and a new reality in the colony: translator and translation. Although not fully recognized or organized, the practice was very much alive from day one of English governance in the former New France.
Furthermore, French Canadians fought diligently for their linguistic rights. In 1774, the Quebec Act provided inhabitants with the constitutional right to freedom of religion and re-established French civil law (Quebec Act, 1774, sections 5, 7, and 8). The Quebec Act unofficially recognized the use of French as the language of the Church and the civil courts. Members of the Parti Patriote fought for the rights of French-speaking Canadians, which led to the 1837 Lower Canada rebellions. This in turn led to the production of the "Durham Report," which perceived French Canadians as "a people with no history, and no literature," who should be assimilated to the English way of life. As a result, the Act of Union (1840) officially prohibited the use of French and, in fact, recognized English as the sole language of the legislature (Act of Union, 1840, section 41). London's ban on the French language lasted for a period of eight years. During that time, key political figures such as Louis Hippolyte Lafontaine defended their right to speak French in the Legislative Assembly. Moreover, this era of political turmoil led to the rise of ultramontanism. The Church successfully sought control of education, the press and public welfare, greatly contributing to the preservation of the French language.

Despite the British government's efforts to repress the French language and assimilate the francophone population, the language survived even in the most English of institutions: the criminal law courts. Criminal law manuals were made available in French through translation. Many magistrates spoke French and allowed it to be used in the courts when it was the language of both parties. Francophones also had the right to testify in their language. French Canadians fought to preserve their language, and their fight was successful: the Constitution Act, 1867 recognized French as the language of the newly formed Dominion of Canada and the newly formed province of Quebec. As a result, French and English could be used by the Parliament of Canada and the Legislature of Quebec (Constitution Act, 1867, section 133).
Mobility trajectory generation: a survey Mobility trajectory data is of great significance for mobility pattern study, urban computing, and city science. Self-driving, traffic prediction, environment estimation, and many other applications require large-scale mobility trajectory datasets. However, mobility trajectory data acquisition is challenging due to privacy concerns, commercial considerations, missing values, and expensive deployment costs. Nowadays, mobility trajectory data generation has become an emerging trend in reducing the difficulty of mobility trajectory data acquisition by generating principled data. Despite the popularity of mobility trajectory data generation, literature surveys on this topic are rare. In this paper, we present a survey for mobility trajectory generation by artificial intelligence from knowledge-driven and data-driven views. Specifically, we will give a taxonomy of the literature of mobility trajectory data generation, examine mainstream theories and techniques as well as application scenarios for generating mobility trajectory data, and discuss some critical challenges facing this area. Introduction The mobility trajectory dataset includes a wide range of information generated by diverse moving objects, consisting of a sequence of ordered points (Kong et al. 2018a).This data holds significant importance as it provides valuable insights into movement patterns and behaviors.In the field of urban computing, trajectory data enables the development of intelligent transportation systems, optimization of traffic flow, and prediction of congestion (Yan et al. 2014;Wang et al. 2019;Kong et al. 2022).In the area of city science, mobility trajectory data aids researchers in understanding urban dynamics, identifying activity hotspots, and improving resource allocation and public services (Halim et al. 2022;Bao et al. 2020;Han et al. 2020;Zhao et al. 2021a).Additionally, in the context of self-driving cars and intelligent transportation systems, mobility trajectory data is essential.It assists in training algorithms, allowing autonomous vehicles to navigate complex urban environments and make informed decisions (Kong et al. 2017;Waqas et al. 2020;Benko Loknar et al. 2023). Currently, large-scale mobility trajectory data has been extensively utilized in practical applications.For instance, the study conducted by Hu et al. (2023) demonstrates the utilization of historical trajectory datasets and road networks for traffic predictions, thereby mitigating potential threats stemming from abrupt surges in traffic volume and ensuring the safety of public transportation.The analysis of mobile phone data conducted by Fan et al. (2021) and Li and Mostafavi (2022) improves the general public's capacity to respond effectively to natural disasters.Furthermore, Wang et al. (2017) investigate taxi trajectory recognition to discern trip purposes and offer insights for smart city planning. Although a large amount of mobility trajectory data is collected through sensors and various applications, there are challenges in direct utilization of this data in practice due to privacy concerns, commercial considerations, missing values, and expensive deployment costs.Firstly, there are privacy issues associated with mobility trajectory data, as it involves sensitive information about individuals' activities and behaviors (Kong et al. 2018;Gursoy et al. 
2019;Romero-Tris and Megías 2018).Secondly, there are commercial considerations as mobility trajectory data holds commercial value, but data sharing can be challenging due to conflicts of interest (Pan et al. 2019;Wang et al. 2020).Thirdly, data may contain missing values.In real-world mobility trajectory datasets, it is common to encounter corrupted or missing values due to sensor failures, communication loss, and data transmission issues (Ren et al. 2021;Hou et al. 2023).Finally, obtaining high-quality mobility trajectory data can be costly in terms of deployment.Setting up and maintaining sensors, data collection infrastructure, and computational resources require substantial investment (Halim et al. 2016;Zhang et al. 2020b;Kanaya et al. 2012).These factors mentioned above limit the accessibility and availability of mobility trajectory data.Therefore, trajectory data generation addresses the challenges of privacy protection, commercial considerations, missing values, and high investment costs faced in data collection.It helps professionals such as traffic managers, urban planners, and decision-makers optimize traffic systems, predict congestion, evaluate urban policies, and improve resource allocation. The research topic of mobility trajectory data generation attracts sustained attention in recent years, and many impressive models or methods have been proposed.Some of these transform the generation problem as predicting the origin-destination matrix with spatial interaction theory (Roy and Thill 2003;Yan et al. 2017;Yan and Zhou 2019).These works model the mobility patterns based on gravity theory (Odlyzko 2015), Weber-Fecher Law (Slovic et al. 1977), intervening opportunities (Stouffer 1940), and game theory (Su et al. 2007) to estimate coarse-grained mobility preferences between two regions in urban.The generation or simulation process is carried out by the microscopic traffic simulation engines such as VISSIM (Fellendorf and Vortisch 2010) and SUMO (Simulation of Urban Mobility) (Brockfeld et al. 2001).With the development of artificial intelligence, various technologies related to it are used in different fields.Among all available mobility generation methods, the deep neural network is the stand out (Liu et al. 2020;Park et al. 2018;Zang et al. 2021;Zhang et al. 2020b;Bao et al. 2022).The idea behind this work is to learn the nonlinear spatio-temporal correlations preserved in traffic datasets by leveraging the strong approximation ability of deep neural networks. The increasing popularity of mobility trajectory data generation has led to numerous publications in interdisciplinary fields.For example, in transportation and operational research areas, traffic patterns are simulated or modeled by related knowledge or theories of human mobility.However, most existing research generated data by estimating the possible distributions from the already existed dataset incapable of generating trajectories across different types.For example, the patterns learned from taxi trajectories cannot be directly applied in generating trajectories of private cars.Therefore, theory-guided or knowledgebased models also play an important role in mobility trajectory data generation. 
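As a concrete illustration of the OD-matrix view mentioned above, the following minimal sketch (in Python with NumPy; the function name, exponents, and toy populations and distances are illustrative assumptions rather than values from the cited works) distributes a fixed trip budget over region pairs with a parameterized gravity kernel. Such a matrix could then be handed to a microscopic simulator to expand region-to-region flows into individual trips.

import numpy as np

def gravity_od_matrix(populations, distances, alpha=1.0, beta=2.0, total_trips=10_000):
    """Estimate an origin-destination (OD) trip matrix with a simple gravity model.

    populations: 1-D array of region populations (m_i).
    distances:   2-D array of pairwise region distances (d_ij), zeros on the diagonal.
    alpha, beta: illustrative gravity exponents (hypothetical values).
    total_trips: total number of trips to distribute across all region pairs.
    """
    m = np.asarray(populations, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Unnormalized interaction strength: (m_i * m_j)^alpha / d_ij^beta.
    with np.errstate(divide="ignore", invalid="ignore"):
        weight = np.power(np.outer(m, m), alpha) / np.power(d, beta)
    np.fill_diagonal(weight, 0.0)               # no intra-region trips in this toy example
    return total_trips * weight / weight.sum()  # scale to the requested trip volume

# Toy example with three regions; numbers are made up for illustration.
pops = [120_000, 80_000, 50_000]
dist = np.array([[0.0, 5.0, 9.0],
                 [5.0, 0.0, 4.0],
                 [9.0, 4.0, 0.0]])
print(gravity_od_matrix(pops, dist).round(1))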
In this paper, we attempt to solve this issue by presenting a comprehensive survey of mobility trajectory data generation.The main audience and readers of this survey are practitioners interested in studying the mobility trajectory of data generation from different research perspectives.We will first outline the problem of mobility trajectory data generation and introduce some related fundamentals.Then, the framework of this survey is given, and the categorization is discussed.Afterward, based on our categorization, we will elaborate on 55 mobility trajectory data generation papers.These papers mainly cover work in the field of transportation, but we also cover several publications from the data science and deep learning fields.Finally, we will discuss the current and future challenges of mobility trajectory data generation.The insights readers can extract from this survey are: • Comprehensive definitions of mobility trajectory data generation in different application scenarios.• The strengths and weaknesses of different categories methods and models in mobility trajectory data generation. • Commonly used open datasets in mobility trajectory data generation and the associated open code.• Future challenges facing mobility trajectory data generation and possible opportunities to deal with these challenges. Comparison to other survey papers. There are some previously published works focusing on the topic of mobility trajectory data generation.One of the early surveys on this topic is Harri et al. (2009).This work mainly presents a framework to introduce vehicle mobility models, which can be used to generate realistic vehicular motion patterns based on Vehicular Ad Hoc Networks (VANETs).This survey mainly focuses on introducing the knowledge-based models, resulting in neglecting the deep learning-related work.Our survey aims to provide a more comprehensive view in reviewing the work in mobility trajectory data generation.Recently, Shin et al. (2020) provides a survey about mobility trace generation.This survey focuses on synthesizing user mobility traces by Generative Adversarial Network (GAN) categorizes the review papers according to different types of GAN.However, this survey pays attention to GAN techniques without much focus on the domain knowledge in trace generation.Furthermore, this survey offers limited insights into future challenges, which may not be sufficient to inspire readers who are dedicated to the generation of mobility trajectory data.Our work provides several deep discussions about the challenges and future directions in Sect.8. The work of Gao et al. (2020) provides another survey of spatio-temporal data mining.This survey presents a detailed categorization based on different application scenarios of GAN in spatio-temporal modeling.However, this survey focuses on spatio-temporal data mining without consideration of the mobility trajectory generation work.Our work provides a deep and comprehensive survey of mobility trajectory data generation. 
To the best of our knowledge, we are the first survey to organize and introduce the mobility trajectory data generation from the perspectives of different paradigms: knowledge-driven and data-driven.In this survey, we first provide a deep insight into these two paradigms and introduce the categorization and framework of our survey.Then, we give a detailed definition of mobility trajectory generation according to different scenarios.Moreover, we elaborate on the fundamentals (theories and techniques) commonly used in knowledge-driven and data-driven methods.We review each specific work based on the scenarios we presented and the fundamentals we discussed.Finally, we provide future challenges and possible trends in mobility trajectory data generation. The rest of this paper is organized as follows.In Sect.2, we introduce the detailed methodology that explains how we conducted the literature survey and identified the articles to be included in the study.In Sect.3, we discuss the taxonomy of this survey.In Sect.4, fundamentals and comprehensive definitions of mobility trajectory data generation are given.Our work is focused on Sect. 5. We split this section into two subsections: Sect.5.1 discusses the knowledge-driven methods, while Sect.5.2 elaborates on the data-driven methods.In Sect.6, we introduce the evaluation metrics commonly used in mobility trajectory data generation.Then, in Sect.7, we conclude the existing sources of mobility trajectory data generation including datasets, simulation tools and related open codes.Section 8 describes the challenges and future opportunities in mobility trajectory data generation research.Finally, we summery our work in Sect.9. Methodology In the initial stage of the study, in accordance with the recommendations by Wohlin (2014), we utilized Google Scholar to conduct a literature search by employing diverse keywords, thereby mitigating potential publisher bias.The search was carried out in March 2020 without specifying a particular time frame.Duplicate papers and non-English articles were excluded, while all relevant journal articles, conference papers, and book sections pertaining to mobile trajectory data were included.Subsequently, a snowballing approach was employed on the identified papers.Firstly, the reference lists of each paper were scrutinized to identify potentially relevant new publications pertaining to the research topic.Subsequently, papers were selected or excluded based on the aforementioned criteria, and the process was concluded when no further relevant papers were discovered.Overall, 55 papers were utilized in this study. Taxonomy From the model's perspective, we categorize the mobility trajectory generation works into knowledge-driven and data-driven.From application scenarios, we divide mobility trajectory generation into three scenarios.Figure 1 shows the categories of mobility trajectory generation.We will make a detailed discussion about our categorization. In the early stage, hypotheses or theories are proposed by researchers.Then the collected various datasets are used to confirm or refute these hypotheses or theories, e.g., gravity model in traffic flow estimation.However, we have to agree that the data mining techniques or deep learning techniques have become a mainstream paradigm of the current mobility trajectory generation topic.Some researchers even propose that the rise of data science is the end of theory (Karpatne et al. 
2017). The underlying idea is to leverage abundant data to construct models by optimizing a loss function, without relying on scientific theories.

Nevertheless, black-box deep learning methods have many limitations in applications. Firstly, deep learning methods rely heavily on high-quality training samples. However, it is not easy to collect representative labeled data involving many complex physical variables. Generalization has become a major problem that plagues deep learning methods. The second limitation is the interpretability of deep learning methods. Although an 'end-to-end' or 'task-specific' method may achieve impressive performance on real-world datasets or application tasks, the process of knowledge discovery in the scientific domain does not end there. Interpretable models or methods are based on explainable theories, which helps prevent the acquisition of erroneous patterns from noisy data and ensures the model's capacity for generalization. Methods of mobility trajectory generation can be categorized into two classes from a macroscopic view. Some works design their models based on theories or hypotheses, while others learn mobility patterns from large collections of data. In this survey, we aim to introduce mobility trajectory generation methods from these two paradigms. We hope that readers can gain more in-depth insights or inspiration from the advantages and disadvantages of the two classes of methods reviewed.

We divide the reviewed literature into the categories knowledge-driven and data-driven. Moreover, we classify the data-driven methods, according to the specific techniques used, into Recurrent Neural Network (RNN)-based approaches and GAN-based approaches.

Definitions and fundamentals

In this section, we first give definitions of mobility trajectory and mobility trajectory generation, as shown in Table 1. We introduce three common application scenarios of mobility trajectory generation. Then, we give a detailed discussion of the fundamentals used in existing mobility trajectory generation work.

Definitions

Mobility trajectory A mobility trajectory is defined as a set of sequential spatio-temporal moving records S = {x_1, x_2, ..., x_N} ∈ ℝ^(N×2), where the i-th element is a record defined as a tuple (l_i, t_i). l_i denotes the spatial information, which can be GPS coordinates (longitude, latitude) or a region ID, and t_i represents the temporal information, such as the timestamp of the i-th record.

Figure 2 shows an example of the mobility trajectories of two objects. The top mobility trajectory is recorded through GPS location identification, which is the most common form of mobility trajectory data. The bottom mobility trajectory is obtained by transforming the GPS coordinates into other representations, such as region IDs, to help model the latent semantic information in trajectories.
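To make this definition concrete, the following minimal sketch (in Python; the coordinates, timestamps, and the grid-based region scheme are illustrative inventions, not taken from any cited work) stores a trajectory as an ordered list of (location, timestamp) records and converts the GPS view into the region-ID view just described.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Record:
    lon: float        # spatial information l_i as GPS longitude
    lat: float        # spatial information l_i as GPS latitude
    timestamp: float  # temporal information t_i (e.g., Unix time)

# A toy trajectory S = {x_1, ..., x_N}; coordinates and times are illustrative.
trajectory: List[Record] = [
    Record(116.397, 39.908, 1_600_000_000.0),
    Record(116.401, 39.915, 1_600_000_060.0),
    Record(116.410, 39.921, 1_600_000_120.0),
]

def to_region_ids(traj: List[Record], cell_deg: float = 0.01) -> List[Tuple[int, float]]:
    """Map each GPS record to a coarse region ID on a regular grid (hypothetical scheme)."""
    ids = []
    for r in traj:
        cell = (int(r.lon / cell_deg), int(r.lat / cell_deg))
        ids.append((hash(cell) % 10_000, r.timestamp))  # compact integer region ID
    return ids

print(to_region_ids(trajectory))

Either representation can then serve as the record set S consumed by the generation methods discussed below.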
Domain knowledge Domain knowledge is a set K that contains information related to trajectories or mobility patterns. In this paper, we will mainly introduce four types of domain knowledge that are commonly used in existing mobility trajectory generation work.
• Report information governments publish various reports on transportation, urbanization, and mobility analysis every year. The information contained in these reports reflects mobility or transportation situations at a macroscopic level, assisting in generating trajectories. For example, Kong et al. (2018) generated trajectories of social cars by estimating the parameters from the 2015 Beijing Transport Annual Report.1
• Demographic information the size of the population directly affects the formulation and improvement of policies for employment, elderly care, medical care, and social security. It also affects the distribution of education and medical institutions in the area where citizens are located, the construction of service facilities, the distribution of commercial service outlets, the supply of urban housing, and the construction of urban roads. Demographic information is related to travel demand and determines the mobility patterns in a city. Researchers use it to compute demand and then provide a schema to solve urban problems such as traffic congestion (Kong et al. 2018).
• Spatial information spatial information describes spatial entities in a city, such as the road network and Points of Interest (POI). POI data contain text descriptions of spatial entities and can be utilized to extract the latent semantic information preserved in trajectories. A mobility trajectory can be transformed into mobility activities between POIs, and mobility patterns can be extracted by learning the relationships among POIs (Yao et al. 2018). Common ways to obtain spatial information are Google Maps,2 AMAP,3 and Open Street Map (OSM).4
• Demand information demand information can be seen as hybrid, fine-grained information affected by various factors such as demographic information, economic information, environmental information, etc. To simplify the discussion and help readers build a clear understanding, we list demand information as one of the types of information to be reviewed in the following discussions. Demand information determines the flow between origins and destinations. It is structured as an Origin-Destination (OD) matrix, which can be converted into the individual trips of vehicles. Thus, the OD matrix describes each vehicle's departure and arrival places in a specific region during the simulation.

It should be noted that domain-specific knowledge is varied; within this survey, we have selected for inclusion four sources of information frequently employed in works on generating mobility trajectories.

Mobility trajectory generation Given a predefined information set M ⊆ S ∪ K, mobility trajectory generation aims to learn a model or function f : M → Ŝ. The information set M consists of two components: S, which is a set of sequential spatio-temporal movement records, and K, which is a set of domain knowledge containing various types of information. The set Ŝ represents the collection of trajectories generated using the model or function.

The generated mobile trajectory data has statistical characteristics similar to those of real data and can be used for analysis and verification. The requirements for generating mobile trajectory data vary across scenarios. In the context of smart cities, generated trajectory data is used to assess traffic congestion and accidents, thereby improving urban transportation. Therefore, generated mobile trajectories mostly consider factors other than just location, such as weather, peak hours, and holidays (Fan et al. 2021). For autonomous driving, generated trajectory data is used for training to enhance the vehicle's understanding of, and response to, the surrounding environment. Therefore, there is no need to generate long-term trajectories for autonomous driving; instead, the focus is on the interactions among different objects in the same space (Alahi et al.
2016). In terms of optimizing basic transportation infrastructure, generated trajectory data is used to evaluate the deployment of new infrastructure in cities and to provide recommendations for urban planners and managers. Therefore, data is often generated for a specific area based on given historical conditions (Zhang et al. 2020b). In this paper, we divide mobility trajectory generation into three application scenarios.
• Scenario 1 the first scenario of mobility trajectory generation concerns validation in VANETs and traffic simulation. For validation, some research (Codeca et al. 2015) used real information to build traffic scenarios for evaluating and comparing new communication protocols. For traffic simulation, the urban traffic state is estimated by generating trajectories (Dian Khumara et al. 2018).
• Scenario 2 the second scenario of mobility trajectory generation is missing-value imputation for urban data. Complete mobility trajectory datasets are hard to obtain due to the limitations of privacy and security, power outages and malfunctions, and transfer errors. To solve this problem, Xia et al. (2017) and Kong et al. (2018) introduce relevant domain knowledge to generate trajectories. Besides, such work can also be used to fill in missing data.
• Scenario 3 the third scenario of mobility trajectory generation is autonomous driving. To enhance autonomous driving safety, researchers (Alahi et al. 2016; Gupta et al. 2018) have started to focus on making the algorithm understand the surrounding environment and the behavior of pedestrians and vehicles by generating possible trajectories.

Fundamentals

In this subsection, we first introduce the theories and tools used in knowledge-driven methods, including spatial interaction models, traffic models, and two simulation tools. Then, we introduce the techniques widely used in data-driven methods, including the Convolutional Neural Network (CNN), RNN, and GAN.

Spatial interaction models

For more than 100 years, researchers have successively presented many models for predicting the flow of people, goods, and information between origins and destinations. These models have different names in different disciplines; in transportation science they are called travel distribution prediction models (Yan 2017). Prediction of flows can reduce the cost of spatial interaction while maintaining the diversity of choices in transportation. The gravity model has been successfully applied in mobility pattern analysis. In the flow distribution between multiple places there is a law similar to Newton's law of universal gravitation. Jung et al. (2008) found that the traffic flow in the Seoul subway network in South Korea can be calculated using the following model:

T_ij = α m_i m_j / (d_ij)^β    (1)

where T_ij is the passenger flow from station i to station j, m_i and m_j are the populations of stations i and j, d_ij is the distance between the two stations, and α and β are two parameters. In addition to the law of gravity in the railway network, this law also exists in commuting travel (Viboud et al.
2006), population migration (Tobler 1995), international trade (Fagiolo 2010).However, the gravity model parameters have different values in different regions and may also have different values for the same region in different periods; that is, its applicability is limited.Stouffer (1940) provided another spatial interaction model called the intervening opportunities (IO) model.This model does not use the actual distance but sorts the destinations from near to far.The decision-maker will select the destination with a certain probability according to the ranking.In actual application, the IO model does not need to enter the actual distance; only the population and the number of trips in each location can complete the travel distribution forecast for the entire region.But its theoretical basis is not easy to understand and contains many parameters to be estimated; it is rarely adopted in practical applications. Traffic models Traffic models have a history of more than a hundred years.They are generally divided into the macro model at the strategic planning level and the micro model at the operational planning level.Establishing a traffic model is the basic method for traffic analysis.The four-step model (FSM) is currently the most commonly used macroscopic traffic model (McNally 2007). The FSM is one of the first trip demand models that attempt to link the use and behavior of land for transportation planning.It includes the generation and distribution of trips, the choice of mode, and the assignment of traffic.Trip generation is determined by the population size, social economy, land use, travel frequency, and other factors.Trip distribution is used to predict the inter-regional trip flow related to the regional trip volume growth trend, trip resistance, and other factors.Due to the difference in time and other factors of various modes of transportation and the different preferences of travelers for different modes of transportation, the choice of trip mode is different.Traffic assignment will load OD traffic to each intersection section through route selection. Simulation tool Traffic simulation is the utilization of simulation technology to assist in the study of traffic.It contains random characteristics, which can be microscopic or macroscopic.It involves a mathematical model that describes the real-time movement of the transportation system within a certain period of time.In this part, there are two mainly simulator tools, Simulation of Urban Mobility (SUMO) and VISSIM, which are widely used. • SUMO SUMO was provided in 2001 and first released in 2002 (Brockfeld et al. 2001;Krajzewicz et al. 2002).SUMO is an open-source tool with a simulation package that can process and simulate traffic-related data.Behrisch et al. (2011) introduced the developments and prospects of SUMO in different research topics.SUMO is an effective simulation tool with characteristics of highly portable, microscopic, and continuous.SUMO contains multiple application packages.The common ones are dfrouter which can build the path of the vehicle, duarouter which use the Gawron model (1998) to compute the shortest path and dynamic user balance, netconvert which use to translate the road network, od2trips which import the OD matrix and translate the travel path, and TraCITestClient which can explore the possibility of communication with external applications such as network simulator version 2/3 (NS2/3). 5As shown in Fig. 
3, the two important modules for simulating vehicle mobility are the road network import and demand modeling components. With the help of SUMO, urban traffic conditions become easy for researchers to study. For instance, the combination of SUMO with external applications such as NS2/3 (via TraCI) makes it possible to study vehicular communication alongside mobility.
• VISSIM VISSIM is a commercial microscopic traffic simulator composed of two communicating components: the traffic simulator itself and a signal state generator. The signal generator is signal control software that implements traffic flow control through programs. The two components exchange data and signal status information through an interface. VISSIM can perform functions such as road network evaluation and optimization and traffic impact evaluation. It can also realistically simulate the behavior of cars, trucks, buses, subways, light rail, bicycles, and pedestrians. For example, VISSIM supports the location layout of light rail and public transportation systems, the evaluation of public transport priority schemes (such as bus lanes), indoor and outdoor pedestrian flow analysis, and public short-distance traffic simulation. Similar to SUMO, VISSIM can also simulate trajectories by importing an OD matrix.

Convolutional neural network (CNN)

CNN usually plays an important role in hybrid deep network design; its main purpose is to gradually learn inherent features, beginning with low-level features and then building more complex concepts through a series of layers. Similar to a traditional neural network, the architecture of a typical CNN (Fig. 4) generally includes an input layer, an output layer, and hidden layers. Convolution layers, pooling layers, fully connected layers, and Rectified Linear Unit (ReLU) activations are the most commonly used hidden layers. The purpose of convolution is to extract features from the input layers. In contrast, pooling aims to gradually reduce the spatial size of the data volume while preserving vital information. Convolutional layers can also handle temporal dependencies (Nikhil and Morris 2019). Moreover, pooling layers, which commonly include max-pooling and average pooling, perform downsampling or upsampling between successive convolutional layers along the spatial dimensions. The ReLU layer applies the activation function element-wise; the data size of this layer does not change. Fully connected layers are similar to the traditional multilayer perceptron (MLP), in which every single neuron connects to all neurons in the previous layer. CNN is widely utilized not only for image data and natural language processing tasks (Krizhevsky et al. 2017; Nagarhalli et al. 2021), but also for addressing spatio-temporal data mining challenges. In transportation, CNN serves as a prevalent technique for extracting features that capture the spatial characteristics of traffic. For instance, Chen et al. (2020) proposed a methodology to extract spatio-temporal features across multiple layers, where a CNN is employed to transform road representations into image format. This approach enables the extraction of pertinent information by considering the spatial structure of the roads. Similarly, Lv et al. (2018) treat trajectory data as two-dimensional images and utilize multi-layer CNNs to integrate trajectory patterns at different scales, facilitating accurate prediction tasks. More about the use of CNN in mobility trajectory generation tasks can be found in Sect. 5.2.

Recurrent neural network (RNN)

RNN (Mikolov et al. 2010) is a type of neural network that attaches great importance to capturing temporal information in sequential data. Compared to other neural networks such as CNN, an RNN can handle inputs and outputs of varying sizes.
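To make the contrast between these two ways of presenting trajectory data to a network concrete, the sketch below (in Python with NumPy and PyTorch; the grid resolution, tensor shapes, and layer sizes are arbitrary illustrative choices, not taken from any cited model) encodes the same toy trajectory once as a 2-D occupancy grid for a CNN and once as an ordered sequence for a recurrent network.

import numpy as np
import torch
import torch.nn as nn

# A toy trajectory: (longitude, latitude) pairs, here already normalized to [0, 1).
traj = np.array([[0.10, 0.20], [0.15, 0.25], [0.22, 0.31], [0.30, 0.40]])

# (a) CNN view: rasterize the trajectory into a 2-D occupancy grid ("image").
grid_size = 32
grid = np.zeros((grid_size, grid_size), dtype=np.float32)
for lon, lat in traj:
    col, row = int(lon * grid_size), int(lat * grid_size)
    grid[row, col] += 1.0                      # visit counts act as pixel intensities
image = torch.from_numpy(grid)[None, None]     # shape (batch=1, channels=1, H, W)
cnn = nn.Sequential(nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
spatial_features = cnn(image)                  # shape (1, 8)

# (b) RNN view: keep the ordered records as a sequence of time steps.
seq = torch.from_numpy(traj.astype(np.float32))[None]   # shape (1, time, 2)
rnn = nn.LSTM(input_size=2, hidden_size=16, batch_first=True)
outputs, (h_n, c_n) = rnn(seq)                 # h_n summarizes the whole trajectory
print(spatial_features.shape, h_n.shape)       # torch.Size([1, 8]) torch.Size([1, 1, 16])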
A classical RNN cell also consists of three layers (input, hidden, and output). It can be seen as a chain of nodes, as depicted in Fig. 5, where X represents the input data, Y represents the output data, H refers to the hidden state, and W and b refer to the parameters. Specifically, the state of node H_t not only processes the input data x_t at time t but also processes the information stored in H_{t-1} and memorizes the important parts of the sequence. Then, the state of node H_t conveys the processed information to the next node state H_{t+1}. To calculate the loss, the output Y_t can be compared with the ground truth. In mobility trajectory generation, the input of the RNN is composed of historical trajectories. A continuous time period is divided into multiple time steps, and the historical trajectory is read at each time step and sent to the RNN (Ma et al. 2019).

Fig. 4 The structure of the typical CNN model

However, RNN suffers from vanishing gradients under its auto-regressive learning manner for long input sequences. To address this problem, Long Short-Term Memory (LSTM) was proposed (Hochreiter and Schmidhuber 1997) and further improved by Gers et al. (2000). LSTM also consists of multiple layers and possesses a memorization capability compared with the simple RNN. LSTM adds a memory state C; the current state C_t combines the previous state C_{t-1} with newly acquired information. In addition, LSTM has three gates, which control the propagation of information in the network. The first is the input gate, which determines how much current information to retain, such as remembering some new information. The second is the forget gate, which determines how much current or previous information to retain or forget. The third is the output gate, which determines the output of the information, or controls how relevant current information is delivered to the next step. As shown in Fig. 6, LSTM maintains the recurrent structure of the RNN, but the difference is that LSTM has three gates to control the transmission of information. The main advantage of RNN-based methods is their memorization capability. Knowing when to memorize or forget information has made RNN-based approaches popular for sequence data. However, training time is remarkably longer than for other deep neural network models because of the recurrent structure.

In transportation, RNNs are primarily utilized to capture the temporal and spatial movement patterns of individuals. These models often incorporate various types of data, such as weather conditions and holiday schedules, for modeling purposes. For instance, Feng et al. (2018) introduce a mobility prediction model based on a recurrent neural network with an attention mechanism. This attention mechanism captures multi-level periodic characteristics, thereby improving the prediction performance of the recurrent neural network. Additionally, Kong and Wu (2018) proposed the Hierarchical Spatio-temporal LSTM (HST-LSTM) model to address data sparsity and capture periodic variations for predicting short-term correlations among individuals. In Sect. 5.2.1, we will provide an overview of the common usage of RNN-based models.

Generative adversarial network (GAN)

GAN was proposed by Goodfellow et al. (2014). As shown in Fig.
7, the basic architecture of GAN comprises two fundamental components: the generator G(z; θ_g) and the discriminator D(x; θ_d), which compete against each other. On the one hand, the generator captures the data distribution p_g over data x from noise variables z ~ p_z(z), learning to generate fake data that look real enough to fool the discriminator. On the other hand, the discriminator acts as a classifier that distinguishes fake data from real data, modeling the probability of each class. In an ideal state, the generator G produces fake data G(z) that matches the real data, and the discriminator finds it difficult to distinguish whether the data generated by G is real or not. Finally, the two components reach a dynamic equilibrium, so that D(G(z)) equals 0.5.

Fig. 7 The structure of the typical GAN

However, the original GAN also has some inadequacies. For instance, GAN is not suitable for processing discrete forms of data, such as text. In addition, GAN has problems with unstable training, vanishing gradients, and mode collapse/dropping. To cope with these problems, many variants of the vanilla GAN have been presented. Mirza and Osindero (2014) proposed the conditional generative adversarial net (CGAN), which adds prior conditions to the original formulation, making GAN more controllable. Arjovsky et al. (2017) proposed the Wasserstein generative adversarial network (WGAN), which uses the Wasserstein distance instead of JS divergence: even when the two distributions do not overlap, the Wasserstein distance can still reflect how far apart they are. WGAN not only alleviates training instability but also provides a reliable training progress indicator.

In the field of transportation, GANs have become a significant paradigm in data-driven generation methods, supplanting conventional stochastic models. GANs capture spatial-dimension features that are beyond the reach of traditional methods and encompass additional information, including the temporal dimension, the social dimension, and complex nonlinear relationships in the data (Gupta et al. 2018; Ouyang et al. 2018). Recent studies have employed GANs as stochastic generators for synthesizing realistic mobile trajectory data. For detailed information on GAN-based methods, refer to Sects. 5.2.2 and 5.2.3.
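As a toy illustration of this adversarial setup applied to trajectories, the sketch below (in Python with PyTorch; the fixed trajectory length, network sizes, learning rates, and the random stand-in for real data are all illustrative assumptions rather than any published model) trains a generator and a discriminator against each other with the standard binary cross-entropy objective described above.

import torch
import torch.nn as nn

# Toy GAN over fixed-length trajectories of 8 (lon, lat) points, flattened to 16 values.
traj_dim, noise_dim = 16, 8
G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, traj_dim))
D = nn.Sequential(nn.Linear(traj_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_batch = torch.rand(64, traj_dim)  # stand-in for a batch of normalized real trajectories

for step in range(200):
    # Discriminator step: real samples labeled 1, generated samples labeled 0.
    fake = G(torch.randn(64, noise_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make D label generated trajectories as real.
    g_loss = bce(D(G(torch.randn(64, noise_dim))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1, noise_dim)).reshape(8, 2))  # one synthetic trajectory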
Mobility trajectory generation techniques

In this section, we will elaborate on the representative methods of mobility trajectory generation based on the categorization we presented. For each work, we will introduce the scenario in which it is applied and discuss the theories or techniques it develops.

Knowledge-driven approaches

Early generation of mobility trajectories was mainly used for simulating human daily dynamics in regional planning or for observing and dealing with traffic congestion. Raney et al. (2003) designed a multi-agent traffic system that simulated 24-h micro-traffic in Zurich, Switzerland. They generated vehicle trajectories covering metropolitan areas with a population of 10 million, which were used for regional planning. They utilized demographic information and spatial information as knowledge, using micro-queue simulation and the Dijkstra algorithm for generating routes. Likewise, Cetin et al. (2003) also conducted dynamic micro-simulation of car traffic throughout Switzerland using traffic flow queue models, based on Scenario 2. The generated dataset has a long duration but mainly focuses on the morning peak period and does not consider daily traffic conditions. However, both of these studies solely focus on car traffic, overlooking other modes of transportation. Kanaya et al. (2012) combined spatial information and SUMO to propose a human sensing system simulator that synthesizes realistic human movements. Under Scenario 2, it can assist in locating individuals for navigation purposes. In the simulation part, they utilized map data, sensor information, and network data as prior knowledge. However, validating the system is challenging, since setting up sensors in different cities is costly. Moreover, this method only simulates human behavior in urban areas.

Considering the previously discussed constraints of privacy and security protection, the absence of authentic, publicly accessible mobile trajectory datasets capable of capturing regional traffic dynamics poses a challenge for evaluating and validating the vehicular networking protocols outlined in Scenario 1. To mitigate this concern, Ferreira et al. (2009) provided an alternative method to obtain the urban mobility of vehicles and the respective driving speeds based on traffic images. They extracted trajectory-related knowledge, e.g., the distribution of buildings, from the Spatial information contained in stereoscopic aerial photos. This work generates fine-grained trajectories through SUMO, and the spatial knowledge is utilized to estimate an accurate O/D matrix between two regions. The spatial knowledge is mainly learned by feature selection. In (2), X, Y and Z represent three events and I, J denote two partitions of the event space. Given that two events X and Y have already occurred, the probability that Z happens can be represented as a conditional probability, as in (2). This work treats P(Z) as the destination choice event and utilizes the demand information and spatial information to estimate the corresponding probability in (2). The estimated choice probability can be represented as an O/D matrix to be input into simulation tools to generate the trajectories. However, the short duration of connectivity from the aircraft and the cost of aerial photography make data collection difficult.

Subsequently, Thakur et al. (2012) acquired traffic flow data from roadside surveillance cameras in cities including London, Sydney, and Toronto, in order to calibrate microscopic vehicle mobility. However, like previous research, this approach is also burdened by high filming costs and the need for advanced image processing techniques. Moreover, aerial photography has a limited time interval, rendering it unsuitable for generating large-scale datasets.

For knowledge-driven methods to generate mobility trajectories, traffic simulation tools are indispensable. Typically, they combine prior knowledge to generate macroscopic traffic flow, which refers to the traffic volume between regions, for the purpose of trajectory generation tasks. Uppoor et al. (2014) synthesized realistic vehicle trajectory datasets for the City of Cologne based on Scenario 1 using SUMO. This work combines Spatial information, Demographic information, and Report information to generate possible mobility distributions in urban areas. Firstly, they obtained road topology information from the OpenStreetMap database. Secondly, they utilized population, Points of Interest (POI), and time usage patterns (i.e., residents' time planning) as knowledge to calculate traffic demand. Then, the authors chose to utilize the Gawron algorithm for traffic assignment to achieve dynamic user equilibrium. Compared to the Dijkstra algorithm, the Gawron algorithm maximizes the road network capacity more effectively. The authors provided solutions to the issues encountered during the simulation process. Codeca et al.
(2015) described the process of creating realistic scenarios based on SUMO in a medium-sized European city, Luxembourg, under Scenario 1. The authors extracted the road topology structure using OpenStreetMap (OSM). With the help of the simulator, they needed to verify the accuracy of the manually corrected topology structure. They generated realistic traffic patterns based on activity-based demand, using data easily obtained from government websites, such as population data. Additionally, they considered the reasonableness of the traffic patterns. Bedogni et al. (2015) provided an openly available realistic trajectory dataset. Knowledge was extracted from Spatial information, particularly road network information. They implemented the SUMO road network conversion tool NETCONVERT, which allows automated and clean importing of OSM data, and generated original circular movement trajectory datasets for the Bologna region in Italy. This work considered fine-grained road features such as connectivity and traffic lights when simulating trajectories. All three works mentioned above have long trajectory durations and wide coverage areas. However, these methods cannot be used for trajectory generation without relevant government research reports.

Gramaglia et al. (2016) generated a trajectory dataset based on Scenario 1 to characterize vehicular network connectivity. The Intelligent Driver Model (IDM; Liebner et al. 2012) is utilized to estimate the statistical driving status to simulate the traces. IDM estimates the driving behavior of a vehicle i through the instantaneous acceleration dv_i(t)/dt as:

dv_i(t)/dt = a [ 1 - (v_i(t) / v_i^max)^4 - (Δx_i^des(t) / Δx_i(t))^2 ]    (3)

where v_i(t) is the current speed of i, v_i^max denotes the maximum speed, Δx_i^des(t) represents the desired dynamical distance (the leading distance the driver would keep from the vehicle ahead), Δx_i(t) is the actual distance to the leading vehicle, and a is the maximum acceleration. This work analyzed the data collected by sensors deployed on highway loops and incorporated the Demand information into traffic models to generate trajectory data. The generated trajectories have a duration of 24 h and a coverage range of 10 km. However, the work focuses primarily on the study of vehicular networks in highway environments.
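To illustrate how a car-following model of this kind can roll trajectories forward in time, the following minimal sketch (in Python with NumPy; the parameter values, time step, and initial conditions are illustrative defaults rather than those used by Gramaglia et al. 2016) integrates the IDM acceleration in (3) for a single follower behind a constant-speed leader.

import numpy as np

def idm_acceleration(v, gap, dv, v_max=33.0, a_max=1.0, b=1.5, s0=2.0, T=1.5):
    """Instantaneous IDM acceleration of a follower (illustrative parameter defaults).

    v:   current speed of the follower (m/s)
    gap: actual bumper-to-bumper distance to the leader (m)
    dv:  speed difference v_follower - v_leader (m/s)
    """
    # Desired dynamical distance the driver would keep from the leader.
    gap_desired = s0 + v * T + v * dv / (2.0 * np.sqrt(a_max * b))
    return a_max * (1.0 - (v / v_max) ** 4 - (gap_desired / gap) ** 2)

# Roll a single follower forward behind a leader cruising at 25 m/s.
dt, leader_speed = 0.5, 25.0
x_lead, x_follow, v_follow = 100.0, 0.0, 30.0
trace = []
for step in range(240):                       # two minutes of simulated time
    gap = x_lead - x_follow
    acc = idm_acceleration(v_follow, gap, v_follow - leader_speed)
    v_follow = max(0.0, v_follow + acc * dt)  # speeds cannot become negative
    x_follow += v_follow * dt
    x_lead += leader_speed * dt
    trace.append((step * dt, x_follow, v_follow))
print(trace[-1])  # time, position, and speed of the follower at the end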
In a subsequent work, Kong et al. (2022) introduced an alternative method for generating mobility trajectories in the same application scenario. They proposed a three-layer framework: the first layer develops a regional partition scheme; the second layer presents a novel spatiotemporal interaction model to estimate traffic flow between pairs of regions and conducts simulations using SUMO; and the third layer analyzes the validation results from both macroscopic and microscopic perspectives. However, it is important to acknowledge that this method exhibits certain limitations, performing better in high-density scenarios than in low-density scenarios. Moreover, it lacks a comprehensive consideration of the factors that influence travel behavior and requires a specific urban road segmentation in the regional partition. The two aforementioned studies encompass an analysis of macroscopic traffic flow and microscopic driving behavior, resulting in extended duration and coverage of the entire Beijing Fifth Ring Road. Nevertheless, it is essential to note that relying on the simulation tool for route selection may contribute to traffic congestion.

In summary, knowledge-driven methods are predominantly used in Scenario 1 because they are applied to validating VANET protocols or simulating traffic, which requires larger volumes of data, wider coverage, and longer duration. While realistic data, such as traffic flow or average traffic speed, can be collected (Gramaglia et al. 2016), these data are solely utilized for estimating statistical characteristics rather than for learning features. When simulation tools are employed, knowledge-driven methods demonstrate a more effective capability for generating large-scale and long-term datasets. However, this approach relies heavily on supplementary information in addition to specific datasets. Furthermore, this generation paradigm is primarily based on spatial theories and traffic models. Nonetheless, theories and models tend to oversimplify real-world variables, leading to suboptimal performance in capturing intricate correlations or dependencies at a fine-grained microscopic level.

Data-driven approaches

Compared to knowledge-driven methods, data-driven methods make use of large-scale real sensor datasets by incorporating deep learning techniques into mobility trajectory generation. This paradigm aims to learn the spatio-temporal dependencies preserved in the realistic data and then generates trajectories from the learned spatio-temporal correlations.

RNN-based models

RNNs and their variations have achieved certain accomplishments in generating pedestrian trajectories. Alahi et al. (2016) proposed Social LSTM for generating pedestrian trajectories. The model designed an aggregation strategy to connect neighboring LSTM units and learn the interactive behaviors among individuals in a larger spatial context. Social pooling aggregates the hidden states of adjacent pedestrians within a certain spatial distance, as shown in Fig. 8. However, this method neglects the influence of other factors, such as scene layout. Additionally, in crowded scenes, the strategy becomes more complex because an LSTM is used for each individual. Inspired by the aforementioned work, Fernando et al. (2018) presented an attention-based LSTM model that considers the past interactions between pedestrians and their neighbors in the contextual scene to generate future trajectories. The introduced attention model can handle highly congested scenarios.
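The grid-based social pooling idea that Social LSTM introduced, and that several of the works above refine, can be sketched as follows. This is a deliberately simplified illustration (dense loops, a single pooling grid per pedestrian, made-up grid dimensions), not the authors' implementation.

```python
import torch

def social_pool(positions, hidden, grid_size=4, cell=0.5, radius=1.0):
    """Simplified grid-based social pooling (cf. Social LSTM).

    positions : (N, 2) pedestrian coordinates
    hidden    : (N, H) their LSTM hidden states
    Returns an (N, grid_size*grid_size*H) social tensor per pedestrian, built by
    summing neighbours' hidden states into a small spatial grid around each person.
    """
    N, H = hidden.shape
    pooled = torch.zeros(N, grid_size, grid_size, H)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            dx, dy = (positions[j] - positions[i]).tolist()
            if abs(dx) >= radius or abs(dy) >= radius:
                continue                        # outside the neighbourhood
            gx = int((dx + radius) // cell)     # grid cell indices
            gy = int((dy + radius) // cell)
            pooled[i, min(gx, grid_size - 1), min(gy, grid_size - 1)] += hidden[j]
    return pooled.view(N, -1)

social_tensor = social_pool(torch.randn(6, 2), torch.randn(6, 32))
```

In Social LSTM-style models, a tensor like this is embedded and fed into each pedestrian's LSTM cell together with the pedestrian's own position embedding.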
Xue et al. (2017) further extended the previous work and presented a framework named Bi-Prediction for predicting pedestrian trajectories in a scene. Bi-Prediction uses a two-stage architecture based on bidirectional LSTMs to learn fine-grained entry and exit trajectories in a given scene. Unlike previous work that clusters trajectories, Bi-Prediction divides the scene into multiple regions and utilizes bidirectional LSTM classification to predict the destination selection probability of pedestrians.

Unlike previous studies that disregard the present intention of nearby pedestrians while concentrating solely on their adjacent hidden states, Zhang et al. (2019) introduced a states refinement module based on the LSTM network. Acting as a feature extractor, this module employs an information passing mechanism to incorporate neighboring pedestrians' intentions and jointly handles the current states of all pedestrians in congested scenarios. Furthermore, an information selection mechanism is introduced to selectively extract valuable features from individual neighbors.

In contrast to Social LSTM and Bi-Prediction, Lisotto et al. (2019) proposed three tensors to enhance the performance of the basic LSTM model. The first is the Social Tensor, which aggregates neighboring interactions using a pooling mechanism and follows a similar pooling strategy to Social LSTM. The second is the Navigation Tensor, which incorporates environmental content information for path selection; specifically, a Navigation Map N was developed to quantify the frequency of crossings during navigation, with average pooling employed to mitigate abrupt frequency transitions. The third is the Semantic Tensor, which captures the semantic characteristics of spatial areas. The study defined a semantic class set C = {grass, building, obstacle, bench, car, road, sidewalk} and encoded it using one-hot representations. However, this approach also models each pedestrian as an LSTM network, making it equally unsuitable for crowded scenarios.

In real-life scenarios, pedestrians influence each other's movements and are also affected by the presence of obstacles in their surroundings. Therefore, it is essential to consider various factors when generating future trajectory predictions. The application of attention mechanisms has proven to be effective in generating more plausible trajectories, and its effectiveness has been demonstrated in many tasks.

Haddad et al. (2019) introduced a graph-based LSTM framework for generating pedestrian trajectories. In contrast to previous approaches, this framework represents spatial and temporal interactions using a spatio-temporal graph, as shown in Fig. 9. The graph components are decomposed into three LSTM-based modules: a temporal edge LSTM, a spatial edge LSTM, and a node LSTM. Vanilla LSTMs are employed to incorporate spatial and temporal relationships into deep representations. Al-Molegi et al. (2018) proposed a neural network model that combines RNNs with attention mechanisms. This model employs representation learning techniques to extract essential information from sequential trajectories and tends to generate pedestrian trajectories associated with specific locations. However, the model lacks the capability to handle unseen locations.
Similarly, Vemula et al. (2018) incorporated attention mechanisms to capture the relative importance of each individual in the crowd, irrespective of their proximity. However, the computational complexity increases due to the larger number of model parameters. An attention mechanism was also incorporated by Jiang et al. (2019) to distinguish the importance of different neighbors and tackle the issue of generating pedestrian trajectories. However, their initial extraction of destination information from past trajectory data led the model to neglect the influence of pedestrians on one another. Consequently, the intended destination deviated, resulting in trajectories that departed from the actual paths.

The combination of soft attention and hard attention (Fernando et al. 2018), implemented with the LSTM model, addresses pedestrian interactions in densely populated scenarios by incorporating the trajectory information of nearby neighbors into future trajectory generation. In a similar vein, Bhujel et al. (2019) proposed two attention mechanisms within the LSTM framework. The first is physical attention, which leverages input images to identify locations and generate contextual information. The second is social attention, which computes social context vectors based on the encoder's hidden states. Furthermore, the authors employ a CNN as an extractor to acquire scene information. Notably, this study employs a single LSTM, effectively reducing the complexity of the training process. In the study conducted by Xue et al. (2020), the generation of pedestrian future trajectories relies exclusively on the observed partial trajectories. The model adopts the LSTM architecture and incorporates temporal attention mechanisms into the location and velocity LSTM layers. However, the emphasis of this research is not placed on the integration of comprehensive background information, such as static obstacles and scene details.

The main objective of the aforementioned trajectory generation tasks is to generate pedestrian trajectories. However, there have also been several studies that focus on generating trajectories from the perspective of vehicles. Park et al. (2018) proposed a framework specifically designed for vehicle trajectory generation. In this framework, an LSTM encoder is utilized to capture the trajectory samples and state information of the ego vehicle. Subsequently, the LSTM decoder leverages a beam search algorithm to generate future trajectories. The architecture of the LSTM encoder-decoder framework is shown in Fig. 10.

The following works are built upon the LSTM encoder-decoder framework. Deo and Trivedi (2018) introduced a unique approach by enhancing the social pooling layer with convolution, enabling robust learning of interdependencies in the data. Messaoud et al. (2019) tackled the challenge of long-term trajectory prediction (5 s) on highways by integrating an attention mechanism with LSTMs to capture spatio-temporal dependencies. Khakzar et al. (2020) aimed to overcome the limitations of existing methods, including computational complexity and dataset dependence, by employing ConvLSTM, which replaces the inner product of the LSTM with convolution and preserves spatio-temporal motion patterns.

Existing LSTM models inadequately capture the spatial interactions and temporal relations among distinct vehicles. Furthermore, basic LSTM models encounter the vanishing gradient problem, impeding their training on long time series.
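For concreteness, the LSTM encoder-decoder pattern that Park et al. (2018) introduced and the works above extend can be reduced to a short PyTorch sketch. The decoder below rolls out greedily instead of using beam search, social pooling, or attention, and all dimensions are arbitrary; it only illustrates the general architecture, not any particular paper's model.

```python
import torch
import torch.nn as nn

class TrajSeq2Seq(nn.Module):
    """Bare-bones LSTM encoder-decoder for (x, y) trajectory generation."""
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.decoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # predicts the next (x, y) offset

    def forward(self, history, horizon=12):
        _, state = self.encoder(history)          # summarise the observed trajectory
        step = history[:, -1:, :]                 # last observed position
        outputs = []
        for _ in range(horizon):                  # greedy roll-out (no beam search)
            out, state = self.decoder(step, state)
            step = step + self.head(out)          # residual update of the position
            outputs.append(step)
        return torch.cat(outputs, dim=1)          # (batch, horizon, 2)

model = TrajSeq2Seq()
future = model(torch.randn(8, 20, 2))             # 8 vehicles, 20 observed steps
print(future.shape)                               # torch.Size([8, 12, 2])
```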
Choi et al. (2019) proposed an attention mechanism to enhance the basic RNN and elucidate the impact of network-level traffic state information on generating trajectories for urban vehicles. Ma et al. (2019) devised an algorithm comprising two primary levels: an instance level for capturing agent mobility and interactions, and a category level for learning from agents of the same type. Nevertheless, its practical application is limited by the algorithm's high computational cost and over-reliance on traffic conditions and historical trajectories. Dai et al. (2019) integrated spatial interactions and temporal relations into the LSTM model to quantify the interactions among diverse vehicles. Additionally, they mitigated the vanishing gradient problem by introducing two consecutive LSTM layers between the input and output.

It is crucial to emphasize that the previously mentioned RNN-based trajectory generation works are employed in Scenario 3, with the goal of comprehending pedestrian and vehicle behaviors and preventing collisions with obstacles in the surrounding environment. These works play a crucial role in the future advancement of socially compliant agents and autonomous vehicles.

GAN-based models

GANs have proven effective in generating pedestrian mobility trajectories. For instance, Gupta et al. (2018) designed an early GAN model named SocialGAN, which utilized a purely data-driven approach to model interactions among individuals. An L2 loss was employed in this work to measure the distance between generated samples and real samples, as illustrated in Eq. 4, where m is a hyperparameter. In contrast to the conventional GAN discussed in Sect. 4.2.6, SocialGAN integrates a new pooling mechanism within the encoder-decoder framework to capture information about individuals and generate trajectories in a scene. Ouyang et al. (2018) designed a non-parametric trajectory generator that builds on WGAN-GP (Gulrajani et al. 2017) to capture high-order geographic and semantic features; non-parametric means that the generator does not assume any explicit parametric form for the movement trajectories. They evaluated the synthetic trajectories by comparing their geographic and semantic features with real trajectories. In the model proposed by Amirian et al. (2019), the L2 loss was excluded during the training of the generator to avoid mode collapse issues. This work not only integrated the Info-GAN structure into the network but also defined an attention aggregation mechanism to capture interactions between humans. Song et al. (2019) analyzed data from macro and micro perspectives within the GAN framework: the former applied the k-means clustering method, while the latter focused on understanding the correlations between different points. They used a four-layer CNN to generate trajectories represented as matrices. However, due to the limitations of specific locations, the model's capability is bounded by high randomness, and the approach lacked a quantitative evaluation of the model's realism. Subsequently, Liu et al. (2020) applied a generator called CoL-GAN with an attention mechanism in a generative adversarial network, using a convolutional neural network as the discriminator. The model includes a social attention module to capture pedestrians' historical patterns.
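In the published SocialGAN model, the L2 term referred to as Eq. 4 above takes the form of a best-of-m "variety" loss: m trajectories are sampled per scene and only the one closest to the ground truth is penalised. A minimal sketch, assuming m indexes the sampled trajectories:

```python
import torch

def variety_l2_loss(real, samples):
    """Best-of-m L2 ("variety") loss in the spirit of SocialGAN.

    real    : (batch, T, 2) ground-truth future trajectories
    samples : (m, batch, T, 2) m trajectories sampled from the generator
    Only the closest of the m samples is penalised, which encourages diverse
    yet accurate outputs; m is the hyperparameter mentioned in the text.
    """
    dists = ((samples - real.unsqueeze(0)) ** 2).sum(dim=(-1, -2))  # (m, batch)
    return dists.min(dim=0).values.mean()

loss = variety_l2_loss(torch.randn(16, 12, 2), torch.randn(20, 16, 12, 2))
```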
In the task of generating vehicle movement trajectories, GANs have also been utilized. For example, a GAN-based framework for predicting vehicle trajectories was proposed by Roy et al. (2019) to model the interactions between vehicles with diverse types and driving styles. The crucial aspect involves integrating the social environment into the GAN model, which incorporates the LSTM encoder-decoder architecture and has demonstrated superior performance compared to certain purely RNN- or LSTM-based approaches. To account for the interactions among multiple vehicles, Wang et al. (2020c) proposed a collaborative learning approach based on GANs to generate multi-modal distributions of vehicle trajectories. This approach comprises two modules, an autoencoder social convolution module and a recursive social module, enabling the modeling of spatio-temporal information for distinct vehicles. Zhao et al. (2021) introduced a GAN model for trajectory generation together with a vehicle turning model to adapt the prediction process to urban scenarios. During dataset preparation, the complex spatial dependencies of the road topology were addressed through vehicle coordinate transformation.

The above-mentioned GAN-related models, like the RNN-based methods for generating pedestrian and vehicle movement trajectories, are applied in Scenario 3.

Hybrid methods

The majority of the models presented in Sects. 5.2.1 and 5.2.2 generate trajectories depicting the movement of pedestrians or vehicles within a shared scene. Subsequent approaches integrate multiple neural network models within their frameworks and capture more intricate scenarios. Unless otherwise indicated, these methods are also employed for Scenario 3.

Zhao et al. (2019b) presented the Multi-Agent Tensor Fusion (MATF) network, which generates trajectories considering both vehicles and pedestrians. Specifically, this method utilizes an LSTM encoder-decoder architecture and employs Conditional Generative Adversarial Networks (CGAN; Mirza and Osindero 2014) to learn a stochastic generative model that captures uncertainties across multiple modes. The future trajectories are subsequently obtained through iterative decoding. Vishnu et al. (2023) further expanded upon this approach and introduced three prediction models with distinct architectures: TS-Transformer, a Generative Adversarial Network-based model (TS-GAN), and a Conditional Variational Autoencoder-based model (TS-CVAE). These models are designed to generate trajectories for multiple agents in interactive driving scenarios. Sadeghian et al. (2019) provided Sophie, a GAN-based model, to predict future social constraints among multiple interacting agents in a scene. This method, similar to SocialGAN, employs LSTMs to estimate temporal states. However, it distinguishes itself by integrating two attention mechanisms (physical attention and social attention) to enable interpretable generation. Furthermore, a CNN is utilized as a feature extractor to capture scene features.

On the contrary, in comparison to most scene generation models that require extensive condition settings and parameters, Wu et al. (2020) introduced a fully data-driven model called LSTM-GAN, which relies solely on historical data. Moreover, the data generated by this method can concurrently cover continuous time periods and locations.

Others

The Graph Neural Network (GNN) is a machine learning model that operates on graph structures. Graph attention networks (GAT), which combine GNNs with attention mechanisms, have been employed for trajectory generation tasks. Kosaraju et al. (2019) utilized a GAN based on the GAT to generate multimodal pedestrian trajectories in interactive scenes, known as Social-BiGAT.
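The graph-attention building block these models share can be illustrated with a single-head attention layer over pedestrian nodes. The sketch below is a deliberately simplified, dense implementation (no multi-head attention, no sparse message passing) rather than the code of Social-BiGAT or any other cited model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGATLayer(nn.Module):
    """Single-head graph attention over pedestrian nodes (simplified GAT)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h, adj):
        # h: (N, in_dim) node features, adj: (N, N) 0/1 interaction mask
        z = self.W(h)                                            # (N, out_dim)
        N = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(N, N, -1),
                           z.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs)).squeeze(-1)              # raw attention scores
        e = e.masked_fill(adj == 0, float("-inf"))               # attend to neighbours only
        alpha = torch.softmax(e, dim=-1)                         # (N, N)
        return torch.relu(alpha @ z)                             # aggregated node features

layer = SimpleGATLayer(16, 32)
out = layer(torch.randn(5, 16), torch.ones(5, 5))                # 5 fully connected pedestrians
```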
Huang et al. (2019) introduced a spatial-temporal graph attention mechanism, using LSTMs to capture the temporal correlations of pedestrian movements and GATs to model spatial interactions.

In contrast to the aforementioned research, Gao et al. (2020) introduced a hierarchical graph neural network known as VectorNet. Rather than employing CNNs for encoding, they utilize vector representations to handle high-definition maps and agent movement trajectories. Additionally, they stack multiple GNN layers to capture higher-order interactions among all components. Lv et al. (2023) designed a model that combines Graph Convolutional Networks (GCN) with attention mechanisms to capture interactions among pedestrians and between pedestrians and the environment in complex scenes. However, since its functions are specifically designed around inherent graph structures, they are not compatible with non-GCN methods.

Lv and Yuan (2023) integrated social knowledge (such as the distance, speed, and visual range between pedestrians) as a matrix and combined it with a GAT to generate pedestrian movement trajectories. However, this approach primarily emphasizes the interaction between pedestrians, neglecting the interaction between pedestrians and the environment. Kang et al. (2021) proposed a method called TraG for urban crowd mobility, which automatically captures contextual and statistical mobility features, ranging from simple empirical data to synthetic trajectories, using real-world datasets. This study primarily targets Scenario 1 and Scenario 2 for evaluating network simulation and planning decisions.

To address the probabilistic generation task for multiple interacting entities, Li et al. (2019) employed a variational recurrent neural network (VRNN) to improve coordination classification accuracy and used a Coordination-Bayesian Conditional Generative Adversarial Network to generate future vehicle trajectories based on historical information and the coordination outcomes of multiple vehicles. Si et al. (2019) designed an Adaptive Generation (AGen) method for generating vehicle trajectories. This method combines online adaptation and offline learned models to account for individual variance and temporal behavior, and it also incorporates an RNN model.

To simulate human spatio-temporal mobility patterns, Luca Pappalardo (2018) designed a data-driven algorithm called the DIary-based TRAjectory Simulator (DITRAS), which achieves realistic simulation of human mobility. The basic idea is to separate the temporal and spatial characteristics of human mobility: a mobility diary is constructed from real data and then transformed into a mobility trajectory.

To address the issue of data scarcity in emerging cities, recent works have combined prior knowledge with data-driven methods based on Scenario 2. He et al. (2020) proposed a framework that integrates transfer learning and multiple data sources from the target city to generate mobility data for new cities. Rong et al. (2023) was inspired by this work and combined GNNs and GANs to generate OD flow data for emerging cities using data from source cities.
For improved estimation of traffic conditions and patterns in urban development planning and management (Scenario 2), Zhang et al. (2020b) proposed TrafficGAN, a deep generative model that captures the underlying patterns of how traffic evolves with changing travel demands and with the evolving structure of the underlying road network. Within their framework, they developed a generative adversarial network (GAN) architecture featuring a generator and a discriminator equipped with dynamic convolutional layers. Additionally, Zhang et al. (2020a) proposed a conditional GAN (cGAN) to address traffic planning problems by treating traffic demands as conditions for generating traffic estimates. They used dynamic convolutional layers to extract spatial correlations within localized networks and, finally, self-attention mechanisms to capture temporal relationships.

For the task of generating future trajectories of moving objects and forecasting traffic flow in urban areas (Scenario 2 and Scenario 3), Karimzadeh et al. (2021) employed both reinforcement learning and transfer learning techniques to design the architecture of their LSTM models. Additionally, they leverage high-order convolution operations and adaptive distance adjacency matrices to effectively capture the spatio-temporal dependencies within urban environments.

In summary, data-driven approaches have the capability to uncover complex and latent factors or correlations from the data itself. In Sect. 3, we discussed two limitations of the data-driven paradigm. From this subsection, it can be concluded that the performance of data-driven approaches is contingent upon the quality of the training data. Furthermore, in the majority of existing data-driven approaches, there is a lack of clear understanding of the training process. Taking GAN as an example, the adversarial learning process within GANs remains largely opaque; consequently, training GANs still poses significant challenges in GAN-related research. Nevertheless, data-driven methods offer distinct advantages when compared to knowledge-driven methods. Our current understanding and theories may not fully grasp the inherent complexity of the mobility trajectory process; for instance, accurately modeling the subtle psychological state of individual drivers, which significantly influences trajectory generation, remains elusive.

Data-driven methods are more commonly utilized in Scenario 2 and Scenario 3. Scenario 2 is primarily employed to generate and evaluate data for emerging cities, leveraging historical traffic data, and to assess the impact of new buildings on future traffic. These tasks are frequently complemented by knowledge-driven approaches. Such works primarily concentrate on generating traffic speed and traffic flow data. Data-driven methods in Scenario 3 have the capability to generate both vehicle and pedestrian movement trajectories. The generated trajectory data usually exhibit shorter duration and cover smaller spatial areas, enabling a more detailed exploration of spatio-temporal dependencies.

In the trajectory generation process, knowledge-driven methods, in addition to the relevant information mentioned in Sect. 4, have taken into account the temporal patterns of residents' travel in Scenario 1 (Uppoor et al. 2014; Bedogni et al. 2015). Conversely, data-driven methods incorporate a wider range of features for training, going beyond a sole reliance on location-based information. Within Scenario 2, data-driven methods primarily focus on the average speeds within specific regions and the traffic flow within each respective region (Zhang et al. 2020a).
Moreover, certain studies also encompass demographic data, income-related data, epidemiological conditions, and policies (Bao et al. 2022), alongside the generated time periods (peak hours, holidays) and weather conditions (Wu et al. 2020). In Scenario 3, there is a notable emphasis on ensuring safe distances between vehicles (Dai et al. 2019) and considering environmental factors (Gao et al. 2020).

The complexity of knowledge-driven methods in handling large-scale and long-duration simulations depends on various factors, including the number of vehicles, road segment complexity, vehicle interactions, and traffic signal controls; processing time increases as more factors are taken into consideration. In data-driven methods, the CNN module exhibits relatively low complexity, whereas RNN and GAN training involves higher complexity, demanding more computational resources (Huang et al. 2019). The complexity of hybrid methods is contingent upon network design, parameter size, and training iterations. A summary is shown in Table 2.

Evaluation metrics

A major problem with generated mobility trajectory data is that they are produced by model simulations and thus require validation. However, validating the accuracy and effectiveness of knowledge-driven and data-driven generation methods differs.

For knowledge-driven methods, most evaluation metrics are based on prior knowledge (e.g., real traffic conditions and navigation service data), or the generated data are visualized to check whether they follow common sense. For instance, Dian Khumara et al. (2018), Kong et al. (2018), Pigné et al. (2011), Bedogni et al. (2015), Zhao et al. (2019a) and Raney et al. (2003) demonstrate the quality of the generated dataset by comparing it with real traffic conditions. In addition, some works (Kong et al. 2018; Uppoor et al. 2014) visualize the generated data and analyze their plausibility. In some cases (Codeca et al. 2015), the generated data are used directly in actual scenarios, such as evaluating and testing network protocols. Kanaya et al. (2012) simulate a monitoring-based flow estimation system to validate the usefulness of the model.

For data-driven methods, the evaluation usually includes both qualitative and quantitative evaluation. The qualitative evaluation mainly presents the generated results through visual comparative analysis. The metrics used to quantitatively evaluate mobility trajectory generation models are illustrated in Table 3.

First, for pedestrian trajectories, the common error metrics used to quantitatively evaluate the accuracy of the generation model are the average displacement error (ADE) and the final displacement error (FDE).

(1) ADE (Gupta et al. 2018; Roy et al. 2019; Sadeghian et al. 2019): this metric is the average Euclidean distance between each generated position and the corresponding ground truth position over the generation time. Apart from this, the average non-linear displacement error (NL-ADE) calculates the distance between each generated position in the non-linear region formed by turning points along the pedestrian's path and the ground truth position (Jiang et al. 2019). The calculation formula of this metric is as follows:

ADE = (1 / (|N| * T)) * sum_{j in N} sum_{t=1..T} sqrt( (x̂_t^j - x_t^j)^2 + (ŷ_t^j - y_t^j)^2 ),

where N is the set of pedestrians, (x̂_t^j, ŷ_t^j) are the generated coordinates at time t and (x_t^j, y_t^j) are the real position coordinates at time t.

(2) FDE (Gupta et al. 2018; Roy et al. 2019; Sadeghian et al. 2019): this metric is the average Euclidean distance between the final generated positions and the corresponding ground truth locations. The calculation formula of this metric is as follows:

FDE = (1 / |N|) * sum_{j in N} sqrt( (x̂_n^j - x_n^j)^2 + (ŷ_n^j - y_n^j)^2 ),

where N is the set of pedestrians, (x̂_n^j, ŷ_n^j) are the generated coordinates at the final time step n and (x_n^j, y_n^j) are the real position coordinates at time n.

(3) Jensen-Shannon Divergence (JSD) (Ouyang et al. 2018; Feng et al. 2020): JSD is a symmetric measure of the distance between two probability distributions P and Q; the smaller the JSD between the generated data and the real-world data distribution, the better. The calculation formula of this metric is as follows:

JSD(P, Q) = (1/2) * KL(P || X) + (1/2) * KL(Q || X), where X = (P + Q) / 2.

In addition, for vehicle trajectories, the more common metrics are average accuracy (AA), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root-Mean-Squared Error (RMSE).

(4) AA (Zhao et al. 2021): this metric represents the average generation accuracy of the generated vehicle trajectories; it is expressed in terms of the real traffic information y_i, the corresponding prediction ŷ_i, the number of vehicles n, and a constant K.

(5) MAE (Zhao et al. 2021; Li et al. 2018; Park et al. 2018): this metric is the average absolute error, which reflects the true magnitude of the error of the generated values. The calculation formula of this metric is as follows:

MAE = (1/n) * sum_{i=1..n} |y_i - ŷ_i|,

where y_i is the real data, ŷ_i is the prediction of y_i, and n is the number of vehicles. The evaluation metric MAPE is essentially a weighted (percentage) version of MAE.

(6) RMSE (Deo and Trivedi 2018; Khakzar et al. 2020; Zhang et al. 2020a; Wang et al. 2020c): this metric is the square root of the mean squared error between the generated and real values; it is more sensitive to outliers than the other metrics. The calculation formula of this metric is as follows:

RMSE = sqrt( (1/n) * sum_{i=1..n} (y_i - ŷ_i)^2 ),

where y_i is the real data, ŷ_i is the predicted value of y_i, and n is the number of vehicles.
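For reference, the displacement metrics and JSD defined above can be computed with a few lines of NumPy. The array shapes are assumptions for this sketch (pedestrians x timesteps x 2 coordinates); the evaluation protocols of the cited papers may aggregate differently (e.g., best-of-k sampling).

```python
import numpy as np

def ade(pred, truth):
    """Average displacement error: mean Euclidean distance over all timesteps."""
    return np.linalg.norm(pred - truth, axis=-1).mean()

def fde(pred, truth):
    """Final displacement error: Euclidean distance at the last timestep."""
    return np.linalg.norm(pred[:, -1] - truth[:, -1], axis=-1).mean()

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

pred, truth = np.random.rand(10, 12, 2), np.random.rand(10, 12, 2)  # 10 pedestrians, 12 steps
print(ade(pred, truth), fde(pred, truth))
```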
Open mobility trajectory datasets and source code

In this section, we summarize the open datasets and code from existing research. We hope this section will help successors produce more valuable work in this domain.

Open datasets

We categorize the datasets into three types. The first type is road network data. As previously mentioned, road network data consist of points, lines, and planes; they describe the basic structure of a region and can easily be obtained from the Internet, for example from OpenStreetMap. The second type is pedestrian and vehicle trajectory data. On the one hand, these data mainly include longitude and latitude information; on the other hand, they can be matched against road network data. The third type is data generated by simulation tools, for which the key step is calculating regional traffic demand using domain knowledge.

The relevant open datasets are shown in Table 4.

(1) Geolife: this dataset includes timestamped points with latitude and longitude information collected from 182 users from April 2007 to August 2012. Al-Molegi et al. (2018) use it as a test dataset.

(3) NGSIM: this dataset contains four trajectory subsets, namely US-101, I-80, Lankershim Boulevard, and Peachtree Street. The first two are the most commonly used and record vehicle trajectories on highways. The I-80 subset was collected on Interstate 80 in Emeryville, California on April 13, 2005; the US-101 subset was collected on US Highway 101 in Los Angeles, California on June 15, 2005.

(4) PeMS: this dataset is provided by the California Department of Transportation and covers 2001 to 2019. It contains various traffic-relevant data, such as congestion.

(5) METR-LA: this dataset contains highway traffic information from Los Angeles County roads, collected by loop detectors. Li et al. (2018) used the period from March 1st to June 30th, 2012.

(6) Cologne trace: this vehicle mobility dataset is provided by the Institute of Transportation Systems at the German Aerospace Center (ITS-DLR) as part of the TAPASCologne project. It covers a 400 km2 region over 24 h.

Open source code

Open source code is not only helpful for researchers to compare their results with other methods, but it also inspires successors to think and deepen their understanding while working with it. Therefore, we provide hyperlinks to the existing open source code in this paper (as shown in Table 5).
All the open source code is built on the PyTorch framework. For the SocialLSTM model, the core part is the LSTM sequence network, and it can be trained on a single GPU. The SocialGAN model consists of three components: a generator, max pooling, and a discriminator. The code was developed on Ubuntu 16.04 with Python 3.5 and PyTorch 0.4. Theoretically, the SocialWay model is an improvement on the SocialGAN model; for example, SocialWay implements attention pooling in place of max pooling. The code of the CurbGAN and TrafficGAN models can be run on Ubuntu 16.04 with Python 3.6.7 and PyTorch 0.4.1.

Challenges and future opportunities

Mobility trajectory generation is very challenging because of the complicated spatio-temporal relationships in mobility trajectory data. In addition, evaluating the generated results is an important aspect of mobility trajectory generation. In this section, we introduce four common challenges and their corresponding solutions, drawing a comparison after our comprehensive survey of knowledge-driven and data-driven approaches to mobility trajectory generation.

Long-term mobility trajectory generation

As mentioned above, most existing work generates mobility trajectory data over a short-term range (≤ 30 min). Though the knowledge-driven methods reviewed in Sect. 5.1 can generate large-scale and long-term mobility trajectory data, the fine-grained quality of these generated data is still worse than that of data generated by data-driven methods. Moreover, prior or external knowledge is hard to obtain in some scenarios.

As stated in Sect. 5.2, many data-driven methods learn the temporal correlations in the data with RNN-based approaches. However, the time-consuming training and the gradient vanishing/exploding problems limit their ability to generate long-term sequences. Therefore, long-term temporal dependency learning is one of the most important challenges for mobility trajectory generation.

In future research, it is crucial to focus on the development of models capable of capturing global temporal dependencies. Attention-based methods (Vaswani et al. 2017; Kitaev et al. 2020; Zhou et al. 2021) have proven effective in learning long and global temporal dependencies. Moreover, integrating finer-grained knowledge into data-driven methods can guide the model in learning long-term dependencies within mobility trajectory data (Karpatne et al. 2017).

Spatio-temporal interactions

In mobility trajectory data generation, a basic requirement is that the model learns spatio-temporal dependencies or correlations sufficiently. Knowledge-driven methods achieve impressive performance in macroscopic scenarios but are inferior to data-driven methods for microscopic data generation.

Although data-driven methods can learn more fine-grained spatio-temporal correlations, their sequential learning manner still limits their capability to learn spatio-temporal interactions. Guo et al. (2019) argued that different spatially correlated locations at different time slots exert different impacts on a given region in the future.

Most existing methods model the spatial and temporal correlations separately when generating mobility trajectory data. To overcome this challenge, future work should investigate representing mobility-related data in a more structured form, such as graph representations (Ye et al. 2020; Sheng et al. 2022) and knowledge graph triplets (Wang et al. 2020a). These representations can explicitly enhance the model's learning of spatial and temporal interactions.
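As a small illustration of the structured representations suggested above, the sketch below encodes a toy road network as a graph and applies one normalized graph-convolution step to per-segment features. The graph, features, and weights are all fabricated; it only shows the basic non-Euclidean propagation that GCN-style generators rely on.

```python
import numpy as np

# Toy road graph: nodes are road segments, edges connect segments that share a
# junction. X holds per-segment features (e.g. observed speed, flow). One
# normalized propagation step mixes each segment's features with its neighbours'.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
N = 4
A = np.eye(N)                                   # adjacency with self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

deg = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(deg, deg))        # symmetric normalisation D^-1/2 A D^-1/2

X = np.array([[30.0, 120], [12.0, 300], [45.0, 80], [25.0, 150]])  # speed, flow
W = np.random.randn(2, 8) * 0.1                 # learnable weights in a real model
H = np.maximum(A_norm @ X @ W, 0)               # one GCN layer: relu(A_norm X W)
print(H.shape)                                  # (4, 8) node embeddings
```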
Model limitations

We reviewed the mobility trajectory data generation works according to their different modeling driving forces. Knowledge-driven methods rely very little on data and perform well for macroscopic mobility trajectory data generation, e.g., area-level trajectory status generation (Kong et al. 2018). However, the accuracy of these methods cannot support fine-grained downstream application tasks. Data-driven methods depend largely on the data and can obtain accurate mobility trajectory generation results. However, missing data, privacy protection, or difficulty of data acquisition may limit the application of data-driven methods.

To tackle this challenge, future research should explore the integration of prior knowledge into data-driven approaches. Knowledge-assisted learning has garnered considerable interest in recent years due to its potential for reducing the complexity of learning and mitigating overfitting when dealing with limited data (Karpatne et al. 2017). An exemplary application of this approach is COVID-GAN (Bao et al. 2020, 2022), which incorporates factors such as population demographics, median income, epidemic conditions, and policy parameters into a generative adversarial network (GAN). This integration allows for the generation of accurate mobility data specifically tailored to the COVID-19 period.

Fixed representation

In mobility trajectory data generation, the popular way to represent data is the image-based representation: the map is divided into regular grids, the mobility trajectory data are assigned to these grids, and a CNN is then utilized to extract features from these data. However, the spatial structure of mobility trajectories has been shown to be more complex than Euclidean space (Ye et al. 2020). This fixed representation of mobility trajectory data is a challenge for generating more accurate results.

In future work, the exploration of graph-structured data learning continues to hold significant promise (Wu et al. 2021; Lv et al. 2023). Given the prevalence of graph structures in traffic data, the integration of GNNs into deep learning frameworks, such as RNNs and GANs, offers a means to capture non-Euclidean spatial dependencies and obtain more precise generation outcomes. We hope this survey paper will facilitate readers in comprehending the fundamental concepts, application scenarios, relevant theories, and techniques in the area of mobility trajectory generation, thereby providing valuable insights.

Fig. 1 Overview of the categories in mobility trajectory generation
Fig. 2 Examples of spatio-temporal trajectories
SUMO with NS2/3 makes it possible to achieve vehicle-to-vehicle (V2V) data transmission and generate vehicle trajectories. VISSIM is a discrete and stochastic microscopic traffic simulation system developed by PTV Corporation in Germany, based on time steps and driving behavior. The traffic simulator relies on the "Wiedemann 74" or "Wiedemann 99" car-following model, which is classified as a psycho-physical car-following model (Aycin and Benekohal 1999); lateral lane changes use a rule-based algorithm. The VISSIM software is internally composed of a traffic simulator and a signal state generator, where the simulator includes a car-following model and a lane-change model.
Fig. 5 The structure of the typical RNN model
Table 2 Summary of advantages and disadvantages of mobility trajectory generation

Knowledge-driven, based on SUMO. References: Kanaya et al. (2012), Uppoor et al. (2014), Codeca et al. (2015), Bedogni et al. (2015), Gramaglia et al. (2016), Lim et al. (2017), Dian Khumara et al. (2018), Kong et al. (2018), and Kong et al. (2022). Advantages: the generated datasets have a long duration and wide coverage, considering macro and micro simulation. Disadvantages: reliance on prior knowledge.

Knowledge-driven, based on cameras. References: Ferreira et al. (2009) and Thakurzx et al. (2012). Advantages: the data are easily obtainable. Disadvantages: high cost, poor scalability, and short generation duration.

Knowledge-driven, others. References: Raney et al. (2003) and Cetin et al. (2003). Advantages: generation is fast, with long duration and wide coverage. Disadvantages: only applicable to trajectory generation for a single traffic mode.

Data-driven, RNNs. References: Alahi et al. (2016), Xue et al. (2017), Zhang et al. (2019), Haddad et al. (2019), Lisotto et al. (2019), Vemula et al. (2018), Al-Molegi et al. (2018), Jiang et al. (2019), Fernando et al. (2018), Bhujel et al. (2019), Xue et al. (2020), Park et al. (2018), Deo and Trivedi (2018), Ma et al. (2019), Messaoud et al. (2019), Khakzar et al. (2020), Choi et al. (2019), and Dai et al. (2019). Advantages: these models excel at capturing temporal dependencies in trajectory sequences. Disadvantages: the generated data have short duration and limited coverage, the models suffer from the vanishing gradient problem, and training on long time sequences is complex and challenging.

Data-driven, GANs. References: Gupta et al. (2018), Ouyang et al. (2018), Amirian et al. (2019), Song et al. (2019), Liu et al. (2020), Zhao et al. (2021), and Roy et al. (2019). Advantages: these models provide multiple generation outcomes. Disadvantages: training is challenging, mode collapse can occur, and interpretability is poor.

Data-driven, GNNs. References: Kosaraju et al. (2019), Gao et al. (2020), Lv et al. (2023), Huang et al. (2019), and Lv and Yuan (2023). Advantages: these models better capture spatio-temporal dependencies and non-symmetric interaction patterns, and have stronger interpretability. Disadvantages: constructing the graph is time-consuming and requires more computational resources, and transferability is poor.

Data-driven, hybrid. References: Zhao et al. (2019b), Vishnu et al. (2023), Sadeghian et al. (2019), and Wu et al. (2020). Comments: these models incorporate various neural network techniques, such as reinforcement learning, LSTM, CNNs for visual feature extraction, and Transformers for parallel computation.

Data-driven, others. References: Si et al. (2019), Li et al. (2019), Kang et al. (2021), He et al. (2020), Rong et al. (2023), Luca Pappalardo (2018), Zhang et al. (2020b), Zhang et al. (2020a), and Karimzadeh et al. (2021). Comments: these works can combine prior knowledge with data-driven approaches, considering contextual information and effectively capturing spatio-temporal dependencies.
Fig. 8 Social pooling in Social LSTM. Three hidden states in different colors are aggregated into two different pools

Table 1 The description of notations. Spatial information can be categorized into two classes. The first class is road network information: a road network is composed of points, lines, and planes; it shows the basic spatial structure of a city and contains a large amount of information, with different road networks representing various road, hierarchy, and path structures. The second class is the Point of Interest (POI).

Table 3 Evaluation methods for generating mobility trajectories
Table 4 Open mobility trajectory datasets
Table 5 Open source code
Horizontal transfer and southern migration: the tale of Hydrophiinae's marine journey

James D. Galbraith1, Alastair J. Ludington1, Richard J. Edwards2, Kate L. Sanders1, Alexander Suh*3,4, David L. Adelson*1

1) School of Biological Sciences, University of Adelaide, Adelaide, SA 5005, Australia 2) School of Biotechnology and Biomolecular Sciences, University of New South Wales, Sydney, NSW 2052, Australia 3) School of Biological Sciences, University of East Anglia, Norwich Research Park, NR4 7TU, Norwich, United Kingdom 4) Department of Organismal Biology - Systematic Biology, Evolutionary Biology Centre, Uppsala University, SE-752 36 Uppsala, Sweden

* David L. Adelson and Alexander Suh are corresponding authors and contributed equally to this work.

Introduction

Elapids are a diverse group of venomous snakes found across Africa, Asia, the Americas and Australia. Following their divergence from Asian elapids ~30 Mya, the Australo-Melanesian elapids (Hydrophiinae) have rapidly diversified into more than 160 species including ~100 terrestrial snakes, ~60 fully marine sea snakes, and 6 amphibious sea kraits [1]. Both the terrestrial and fully marine hydrophiines have adapted to a wide range of habitats and niches. Terrestrial Hydrophiinae are found across Australia, for example the eastern brown snake (Pseudonaja textilis) in open habitats, the tiger snake (Notechis scutatus) in subtropical and temperate habitats, and the inland taipan (Oxyuranus microlepidotus) in arid inland habitats [2]. Since transitioning to a marine habitat, many sea snakes have specialised to feed on a single prey type such as fish eggs, catfish, eels or burrowing gobies, while others such as Aipysurus laevis are generalists [3,4]. Sea kraits (Laticauda) are amphibious and have specialised to hunt various fish, including eels and anguilliform-like fish, at sea, while digesting prey, mating and shedding on land [5]. Since transitioning to marine environments, both sea snakes and sea kraits have been the recipients of multiple independent horizontal transposon transfer (HTT) events, which may have had adaptive potential [6,7].

Transposable elements (TEs) are mobile genetic elements that can move or copy themselves across the genome, and account for a large portion of most vertebrate genomes [8,9]. Though often given short shrift in genome analyses, TEs are important agents of genome evolution and generate genomic diversity [10,11]. For example, the envelope gene of endogenous retroviruses was exapted by both mammals and viviparous lizards to function in placental development [12]. In addition, unequal crossing over caused by CR1 retrotransposons led to the duplication, and hence diversification, of PLA2 venom genes in pit vipers [13]. TEs are classified into one of two major classes based on their structure and replication method [14]. DNA transposons (Class II) proliferate through a "cut and paste" method, possess terminal inverted repeats, and are further split based on the transposase sequence used in replication. Retrotransposons (Class I) are split into LTR retrotransposons and non-LTR retrotransposons, which proliferate through "copy and paste" methods. Both subclasses of retrotransposons are split into numerous superfamilies based on both coding and structural features [15-17]. Within the diverse lineages of higher vertebrates, the evolution of TEs is well described in eutherian mammals and birds.
The total repetitive content of bird and mammal genomes is consistently ~7-10% and ~30-50%, respectively. Similarly, most lineages of both birds and eutherian mammals are dominated by a single superfamily of non-LTR retrotransposons (CR1s and L1s, respectively) and a single superfamily of LTR retrotransposons (endogenous retroviruses in both) [8,18]. Some lineages of birds and mammals contain horizontally transferred retrotransposons which have been variably successful (AviRTE and RTE-BovB, respectively) [19,20]. In stark contrast to mammals and birds, squamates have highly variable mobilomes, both in terms of the diversity of their TE superfamilies and the level of activity of those superfamilies within each genome [21]. While these broad comparisons have found significant variation in TEs between distant squamate lineages, none have examined how TEs have evolved within a single family of squamates. The one in-depth study of the mobilome of snakes found that the Burmese python genome is ~21% TE and appears to have undergone little TE expansion, while that of a pit viper is ~45% TE due to the expansion of numerous TE superfamilies and microsatellites since their divergence ~90 Mya [22,23]. Unfortunately, it is unclear whether similar expansions have occurred within other lineages of venomous snakes. Here we examine the TE landscape of the family Hydrophiinae, and in doing so discover horizontal transfer events into the ancestral hydrophiine, sea kraits and sea snakes.

Ab initio TE annotation of the elapid genomes

We used RepeatModeler2 [24] to perform ab initio TE annotation of the genome assemblies of 4 hydrophiines (Aipysurus laevis, Notechis scutatus, Pseudonaja textilis and Laticauda colubrina) and 2 Asian elapids (Naja naja and Ophiophagus hannah). We manually curated the subfamilies of TEs identified by RepeatModeler (rm-families) to ensure they encompassed the full TE, were properly classified, and that each species' library was non-redundant. We first purged redundant rm-families from each species library based on pairwise identity to, and coverage by, other rm-families within the library. Using BLAST [25] we calculated the similarity between all rm-families; any rm-family with over 75% of its length aligning to a larger rm-family at 90% pairwise identity or higher was removed from the library. We then searched for each non-redundant rm-family within its source genome with BLASTN (-task dc-megablast) and selected the best 30 hits based on bitscore. In order to ensure we could retrieve full-length TE insertions, we extended the flanks of each hit by 4000 bp. Using BLASTN (-task dc-megablast) we pairwise aligned each of the 30 extended sequences to the others, trimming trailing portions of flanks which did not align to the flanks of the other 29 sequences. Following this, we constructed a multiple sequence alignment (MSA) of the 30 trimmed sequences with MAFFT [26] (--localpair). Finally, we trimmed each MSA at the TE target site duplications (TSDs) and constructed a consensus from the multiple sequence alignment using Geneious Prime 2021.1.1 (www.geneious.com); we henceforth refer to each such consensus as an mc-subfamily (manually curated subfamily). To classify the mc-subfamilies we searched for intact protein domains in the consensus sequences using RPSBLAST [27] and the CDD library [28], and identified homology to previously described TEs in Repbase using CENSOR online [29]. Using this data in conjunction with the classification set out in Wicker (2007) [14], we classified previously unclassified mc-subfamilies where possible and corrected the classification of mc-subfamilies where necessary. Where possible we used the criteria of Feschotte and Pritham (2007) [17] to identify unclassified DNA transposons using TSDs and terminal inverted repeats. Finally, we removed any genes from the mc-subfamily libraries based on online NCBI BLASTN and BLASTX searches against the nt/nr and UniProt databases, respectively [30,31]. Any mc-subfamilies unable to be classified were labelled as "Unknown".
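A minimal sketch of the redundancy purge described above, assuming an all-vs-all BLASTN run of the RepeatModeler families in tabular format (-outfmt 6). The file names are placeholders, and coverage is approximated per HSP rather than summed across HSPs, so this illustrates the filtering logic rather than reproducing the authors' exact pipeline.

```python
from Bio import SeqIO

families = {rec.id: len(rec.seq) for rec in SeqIO.parse("rm_families.fasta", "fasta")}
redundant = set()

# default -outfmt 6 column order: qseqid sseqid pident length ...
with open("all_vs_all.blastn.tsv") as blast:
    for line in blast:
        q, s, pident, aln_len = line.split("\t")[:4]
        if q == s:
            continue                                   # ignore self-hits
        pident, aln_len = float(pident), int(aln_len)
        shorter, longer = sorted((q, s), key=lambda f: families[f])
        # drop the shorter family if >=75% of it aligns to a larger family at >=90% identity
        if pident >= 90 and aln_len >= 0.75 * families[shorter]:
            redundant.add(shorter)

kept = [rec for rec in SeqIO.parse("rm_families.fasta", "fasta") if rec.id not in redundant]
SeqIO.write(kept, "rm_families.nonredundant.fasta", "fasta")
```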
TE annotation of the elapid genomes

We constructed a custom library for TE annotation of the elapid genome assemblies by combining the mc-subfamilies from the six assemblies with previously described lepidosaur TEs extracted using RepeatMasker's "queryRepeatDatabase.pl" utility. Using RepeatMasker, we generated repeat annotations of all six elapid genome assemblies.

Estimating ancestral TE similarity

To estimate the sequence conservation of ancestral TEs, and hence categorise recently expanding TEs as either ancestral or horizontally transferred, we identified orthologous TE insertions and their flanks present in both the Notechis scutatus and Naja naja genome assemblies. From the Notechis repeat annotation, we took a random sample of 5000 TEs over 500 bp in length and extended each flank by 1000 bp. Using BLASTN (-task dc-megablast) we searched for the TEs and their flanks in the Naja assembly and selected all hits containing at least 250 bp of both the TE and the flank. Sequences with more than one flank-containing hit were treated as potential segmental duplications and removed from the results. We then used the orthologous sequences to estimate the expected range in similarity between TEs present in the most recent common ancestor of Australian and Asian elapids. Based on this information, TEs with 95% or higher pairwise identity to the mc-subfamily used to identify them were treated as likely inserted into hydrophiine genomes since their divergence from Asian elapids. In addition, mc-subfamilies which we had identified as recently expanding in hydrophiines but which were not found at 80% or higher pairwise identity in other serpentine genomes were identified as candidates for horizontal transfer.

Identifying recent TE expansions

In each of the four hydrophiines, using the RepeatMasker output we identified mc-subfamilies comprising at least 100 kbp of copies with 95% or higher pairwise identity to the mc-subfamily consensus. We treated these mc-subfamilies as having expanded since Hydrophiinae's divergence from Asian elapids. We reduced any redundancy between recently expanding mc-subfamilies by clustering using CD-HIT-EST (-c 0.95 -n 6 -d 50) [32]. Using BWA [33], we mapped raw transcriptome reads of eye tissue taken from each of the hydrophiines [34] back to these mc-subfamilies. Retrotransposons with RNA-seq reads mapping across their whole length and DNA transposons with RNA-seq reads mapping to their coding regions were treated as expressed and therefore currently expanding.

Continued expansion or horizontal transfer

Using BLASTN (-task dc-megablast), we searched for homologs of recently expanding mc-subfamilies in a range of snake genomes including Asian elapids, colubrids, vipers and a python. We classified mc-subfamilies having copies of 80% or higher pairwise identity to the query sequence in other snakes as ancestral. All hydrophiine mc-subfamilies we were unable to find in other snakes were treated as candidates for horizontal transfer. We searched for the horizontal transfer candidates in approximately 600 additional metazoan genomes using BLASTN (-task dc-megablast). We classified all mc-subfamilies present in non-serpentine genomes at 80% or higher pairwise identity, and absent from other serpentine genomes at 80% or higher pairwise identity, as horizontally transferred into hydrophiines.
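The presence/absence screen described in this subsection can be summarised in a few lines. The sketch assumes a dictionary of best-hit percent identities per (mc-subfamily, genome) pair, e.g. parsed from BLASTN tabular output; the genome names are examples only, and just the 80% cutoff comes from the text.

```python
# Example snake genomes used for the "present in other snakes" test; placeholders.
SNAKE_GENOMES = {"Naja_naja", "Ophiophagus_hannah", "Python_bivittatus", "Thamnophis_elegans"}

def classify(subfamily, best_identity, metazoan_genomes):
    """best_identity[(subfamily, genome)] -> best percent identity (0 if absent)."""
    in_other_snakes = any(best_identity.get((subfamily, g), 0) >= 80 for g in SNAKE_GENOMES)
    if in_other_snakes:
        return "ancestral"                      # shared with other snakes at >=80% identity
    in_distant_taxa = any(best_identity.get((subfamily, g), 0) >= 80 for g in metazoan_genomes)
    if in_distant_taxa:
        return "horizontal transfer"            # absent from snakes, present elsewhere
    return "hydrophiine-specific / unresolved"
```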
Genome quality affects repeat annotation

Previous studies have highlighted the importance of genome assembly quality in repeat annotation, with higher sequencing depths and long-read technologies critical for resolving TEs [35,36]. Our repeat analysis reveals significant variation in total TE content between genome assemblies (Table 1, Figure 1); however, some of this variation is likely due to large differences in assembly quality rather than differential TE expansion or contraction in certain lineages. Most notably, the TE content of the Ophiophagus assembly is significantly lower than that of the other species (~36% compared to ~46%). The TE content of the Aipysurus assembly is also notably lower, though to a lesser extent (41% compared to ~46%). The Naja, Laticauda, Notechis, and Pseudonaja assemblies are much higher quality than the Ophiophagus and Aipysurus assemblies, having longer contigs and scaffolds (SI Table 1). This discrepancy arises because the Ophiophagus and Aipysurus genomes are both assembled solely from short-read data at low sequencing depth (28x and 30x, respectively). In stark contrast, the Naja genome was assembled from a combination of long-read (PacBio and Oxford Nanopore) and short-read (Illumina) data, scaffolded using Chicago and further improved using Hi-C and optical mapping (Bionano) technologies. In the middle ground, the Laticauda, Notechis and Pseudonaja assemblies utilized a combination of 10X Chromium linked-read and short-read technologies. Many of the recently expanded TEs in the Ophiophagus and Aipysurus genomes likely collapsed during assembly because of their very high sequence similarity; therefore, the apparent lack of recent activity in Ophiophagus and Aipysurus is likely an artefact of assembly quality. As the total TE content annotated in Naja, Laticauda, Notechis and Pseudonaja is comparable at 46-48% of the genome and the four genomes are of comparable quality, the majority of the following analyses focus on these four species. Due to the much lower genome assembly quality resulting in collapsed TEs, little recent expansion was detected in the Aipysurus laevis and Ophiophagus hannah genomes. TEs were identified using RepeatMasker [37] and a custom repeat library (see methods).

Recent insertions vs ancestral insertions

Recent TE insertions are likely to have diverged only slightly from the sequences RepeatMasker used to identify them, while ancestral insertions will likely be highly divergent. Based on this assumption, we discerned between recent and ancestral insertions using the pairwise identity of TE insertions to the mc-subfamily used to identify them. To estimate the expected divergence of ancestral TE insertions from consensus sequences, compared to new insertions, we searched for orthologues of 5000 randomly selected Notechis TE insertions and their flanks in the Naja assembly (Figure 2). From the 5000 TEs we were able to identify 2192 orthologues in Naja naja. The pairwise identity of these ancestral insertions to the consensus sequences used to identify them was notably lower than that of TEs likely inserted since the species diverged (Figure 2). TEs were initially identified in Notechis using RepeatMasker [37], and the presence of orthologues in Naja was determined using BLASTN (-task dc-megablast) [25].
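The orthologue filter used for this comparison can be sketched as below. The structure of the input and the coordinate handling are assumptions for illustration (each query is a TE plus its flanks, with the TE's position within the query known); only the 250 bp overlap requirement and the removal of multi-hit queries as putative segmental duplications come from the text.

```python
def filter_orthologues(hits_by_query, te_bounds, min_overlap=250):
    """Keep queries with exactly one Naja hit covering >=250 bp of TE and >=250 bp of flank.

    hits_by_query : {query_id: [(qstart, qend, subject_id), ...]} from BLASTN
    te_bounds     : {query_id: (te_start, te_end)} position of the TE inside the query
    """
    orthologues = {}
    for query, hits in hits_by_query.items():
        te_start, te_end = te_bounds[query]
        good = []
        for qstart, qend, subject in hits:
            te_bp = max(0, min(qend, te_end) - max(qstart, te_start))
            flank_bp = (qend - qstart) - te_bp           # aligned bases outside the TE
            if te_bp >= min_overlap and flank_bp >= min_overlap:
                good.append(subject)
        if len(good) == 1:    # >1 flank-containing hit = possible segmental duplication
            orthologues[query] = good[0]
    return orthologues
```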
Recent expansion of specific superfamilies

By comparing the TE divergence profiles of the various assemblies, we can gain an overall picture of how TE superfamilies have expanded since the split of Hydrophiinae from Asian elapids (Figures 3-5). Large expansions of Gypsy retrotransposons are apparent in both the Naja and hydrophiine assemblies. This echoes the variability previously described across squamates [21], except that here we see variation within a single family of snakes, not just between families of snakes. Different superfamilies were most active in the two lineages, with L2s most active in the cobra outgroup; TEs were identified using RepeatMasker [37] and a custom repeat library (see methods).

Without highly contiguous assemblies of all species it is difficult to rigorously identify recent or ongoing TE expansions. However, by using transcription as a proxy for transposition, we identified currently expressed TE families in present-day species as candidates for being active and potentially expanding. To achieve this, we first identified TE subfamilies in each species with over 100 kbp of copies with over 95% pairwise identity to the consensus sequences used to identify them, treating these subfamilies as potentially expanding. By mapping raw transcriptome reads back to these consensuses, we were able to identify expressed TE subfamilies. In all four species, diverse TEs were expressed, including subfamilies of Copia, ERV, DIRS, Gypsy, Penelope, CR1, L1, Rex1, RTE, hAT and Tc1-Mariner.

Continued expansion or horizontal transfer

The TE subfamilies which we identified as recently expanded within Hydrophiinae could either be ancestral, expanding continuously since the divergence from Asian elapids, or have been horizontally transferred from long-diverged species. Differentiating between ancestral and horizontally transferred TEs is difficult and requires strict conditions to be met: horizontally transferred sequences are defined as having a patchy phylogenetic distribution and higher similarity to sequences in another species than would be expected based on divergence time. To identify any TEs which may have been horizontally transferred into Hydrophiinae, we used the 2192 orthologous sequences identified in Notechis and Naja to conservatively estimate the expected minimum similarity of TEs present in both hydrophiines and Asian elapids at 80% (Figure 6). Based on this, any vertically inherited TE subfamily classified as recently expanding in hydrophiines will likely have copies of 80% or higher similarity present in Asian elapid genomes. TEs were initially identified in Notechis scutatus using RepeatMasker, and orthologues were identified in, and pairwise identity calculated for, Naja naja using BLASTN (-task dc-megablast) [25].

To determine whether any recently expanding TE subfamilies were horizontally transferred into hydrophiines following their divergence from Asian elapids, we searched for them in the genomes of Naja, Ophiophagus and an additional 8 non-elapid snakes. Some recently expanding subfamilies absent from Naja and Ophiophagus were present in non-elapid snakes at 80% or higher identity; to be conservative, we treated these TEs as ancestral, likely having been lost from Asian elapids. The remaining TE subfamilies, those present in hydrophiines but absent from other snakes, were treated as horizontal transfer candidates. To confirm that these candidate TEs were horizontally transferred into hydrophiines, we searched for them in over 600 metazoan genomes.
This search revealed at least eleven autonomous TEs that are present in non-serpentine genomes at 80% or higher identity and are therefore likely to have been horizontally transferred into hydrophiines. Of these eleven, three were transferred into the ancestral hydrophiine, five into sea kraits, one into sea snakes and one into the common ancestor of terrestrial hydrophiines and sea snakes (Figure 7). We have previously described 2 of the 11 HT events in detail, that of Proto2-Snek to Aipysurus and Harbinger-Snek to Laticauda, both of which were likely transferred from a marine species (see [6,7]). Three of the four newly identified HT events in Laticauda were probably also from an aquatic species, because similar sequences are found only in marine or amphibious species. Therefore, the transfer of these elements likely occurred following the transition of each group to a marine habitat. The exception is a Tc1-Mariner which is most similar to sequences identified in hemipterans, a beetle and a spider; however, as Laticauda is amphibious, this is perhaps not surprising. The Rex1 transferred to the common ancestor of terrestrial hydrophiines and sea snakes was only identified in the central bearded dragon (Pogona vitticeps), an agamid lizard native to the inland woodlands and shrublands of eastern and central Australia [40]. As this TE is restricted to another species of Australian squamate, this HTT appears to have occurred after hydrophiines reached Australia. The most interesting of the horizontally transferred TEs are the Tc1-Mariner, Gypsy and Rex1 which were horizontally transferred into the ancestral hydrophiine following its divergence from Asian elapids. These three are most similar to sequences identified in marine species, either fish or tunicates. Marine elapids (sea kraits and sea snakes) and terrestrial Australian elapids were originally considered two distinct lineages [41][42][43]; however, the recent adoption of molecular phylogenomics has resolved Hydrophiinae as a single lineage, with sea kraits as a deep branch and sea snakes nested within terrestrial Australian snakes [1,44,45]. Fossil evidence combined with an understanding of plate tectonics has revealed that Hydrophiinae, like many other lineages of Australian reptiles, likely colonised Australia by hopping between islands formed in the Late Oligocene-Early Miocene by the collision of the Australian and Eurasian plates [46][47][48][49][50]. Alternatively, it has also been proposed that the common ancestor of Hydrophiinae may have been a semi-marine "proto-Laticauda", which colonised Australia in the Late Oligocene directly from Asia [51]. The horizontal transfer of three TEs into the ancestral hydrophiine, likely from a marine organism, provides tangible support for the hypothesis that the ancestral hydrophiine was a semi-marine or marine snake. Conclusion In our survey of elapid genomes, we have found that TE diversity and the level of TE expansion vary significantly within a single family of squamates, similar to the variation previously seen across all squamates or within long-diverged snakes. This diversity and variation are much greater than what has been reported for mammals and birds. Our finding of HTT into lineages of hydrophiines exposed to novel environments indicates that environment may play a large role in HTT through exposure to new TEs.
Additionally, the HTT of three TEs found solely in marine organisms into the ancestral hydrophiine provides evidence that terrestrial Australian elapids are derived from a marine or amphibious ancestor. As long-read genome sequencing becomes feasible for more species, genome assembly quality will continue to increase and the genomes of many more non-model species will be sequenced. Using these higher-quality genomes, we will be able to better understand HTT and the role TEs play in adaptive evolution. Owing to their rapid adaptation to a wide range of environments and the multiple HTT events into different lineages, Hydrophiinae provide an ideal system for such studies.
4,477.2
2021-01-01T00:00:00.000
[ "Biology" ]
Antineutrophil cytoplasmic antibodies in Chinese patients with tuberculosis. INTRODUCTION According to published reports, infection with Mycobacterium tuberculosis is believed to induce the development of antibodies that are considered biological indicators for the diagnosis of some other diseases. However, conflicting results have been published regarding the presence of antineutrophil cytoplasmic antibodies (ANCAs) in patients with tuberculosis. We aimed to study the seroprevalence of ANCA in a population of Chinese patients with tuberculosis, since ANCA positivity in tuberculosis may lead to the misdiagnosis of vasculitic disorders. METHODS The study was conducted from January 2016 to May 2017 to evaluate the presence of ANCA in 103 Chinese patients using an indirect immunofluorescence assay. An enzyme-linked immunosorbent assay was performed for anti-myeloperoxidase (MPO) and anti-proteinase 3 (PR3) detection. RESULTS Perinuclear ANCA (p-ANCA) was detected in 4.8% (5/103) of patients, whereas cytoplasmic ANCA (c-ANCA) was not detected; 1.9% (2/103) of patients with tuberculosis were positive for anti-MPO antibodies, and none had anti-PR3 antibodies. Both anti-MPO-positive patients were diagnosed with ANCA-associated vasculitides. CONCLUSIONS ANCA positivity may be more related to vasculitis and immunological disorders than to M. tuberculosis infection. Therefore, to improve diagnostic accuracy, patients with M. tuberculosis who are ANCA positive should be investigated for concurrent diseases, including the effects of drugs. Thus, even in tuberculosis-endemic areas, ANCA seropositivity detected by ELISA remains more suggestive of ANCA-associated vasculitides. INTRODUCTION Mycobacterium tuberculosis infection is one of the major global public health threats because more than two billion people are estimated to be infected with tuberculosis 1 . Tuberculosis is associated with autoimmune diseases, including rheumatoid arthritis, multiple sclerosis, and vasculitis, possibly through molecular mimicry 2 . Antineutrophil cytoplasmic antibodies (ANCAs) are directed against cytoplasmic ANCA (c-ANCA) and perinuclear ANCA (p-ANCA) antigens and are associated with Wegener's granulomatosis (granulomatosis with polyangiitis), microscopic polyangiitis, and other autoimmune disorders; they are also considered clinical markers for systemic vasculitic disorders. Some studies have reported that ANCAs can be positive in infectious diseases, such as tuberculosis; however, conflicting results have also been reported 3,4 . In clinical practice, immunological diseases, such as granulomatosis with polyangiitis, can present with clinical features that overlap with those of tuberculosis. Therefore, an ANCA test may help with differential diagnosis. This study aims to investigate the prevalence of serum ANCA positivity in Chinese patients with tuberculosis. METHODS This single-center retrospective study was conducted at the First Hospital of Jilin University in Northeast China between January 2016 and May 2017 in accordance with the Declaration of Helsinki, and the local Ethics Committee approved the study. All patients who participated in this study signed an informed consent form. Patients who were either untreated or were within 30 days of beginning their treatment for M.
tuberculosis infection were included. Tuberculosis infection was diagnosed through sputum microbiology testing, clinical and radiological signs, and symptoms. The study participants underwent a detailed clinical history, including questions about musculoskeletal symptoms, duration of symptoms, and history of medication use. The presence of ANCA was determined via indirect immunofluorescence (IIF) using a commercially available kit (Cell Signaling Co.). Testing was performed according to the manufacturer's protocol. Serum samples were diluted 1:100 in phosphate-buffered saline (PBS) and incubated in microplates coated with the specific antigen. Antigen-antibody binding was detected using an anti-human immunoglobulin conjugated with peroxidase and a 3,3′,5,5′-tetramethylbenzidine (TMB) chromogenic substrate. After washing with PBS, the microplates were examined using a fluorescence microscope. Serum samples were also tested for the presence of antibodies to proteinase 3 (PR3) and myeloperoxidase (MPO) using standardized kits (Fitzgerald Industries International). Briefly, 10 μL of sample was combined with 990 μL of sample buffer in a polystyrene tube and mixed well. Controls were ready to use and did not need to be diluted. PR3 and MPO were bound separately to the microwells. We added 100 μL of each sample into the wells and incubated for 30 minutes at 20-28°C. After discarding the supernatant, we washed the wells 3 times with 300 μL of wash solution, and then TMB substrate solution was added into each well. A stop solution was used to quench the reaction. Horseradish peroxidase (HRP)-conjugated anti-human immunoglobulin G (IgG) was used to immunologically detect the bound patient antibodies, forming a conjugate/antibody/antigen complex. The microplates were read at 450 nm. Statistical analysis of the data included the presentation of quantitative variables as means [95% confidence interval (95% CI)] or medians with interquartile ranges; qualitative variables were expressed as percentages. Statistical Package for the Social Sciences (SPSS) software (SPSS Statistics for Windows, Version 17.0; Chicago: SPSS Inc.) was used for the statistical analysis. RESULTS The clinical characteristics of the patients with tuberculosis and confirmed infection with M. tuberculosis are summarized in Table 1. The demographic, clinical, and radiological characteristics of the 103 patients with M. tuberculosis are expressed as numbers and percentages, unless otherwise stated. Of the 103 patients with M. tuberculosis included in this study, 54% (56/103) were women, and the mean age was 51 years. Thirty-four patients were in the early stages of their treatment for tuberculosis; treatment included rifampicin (RFP), isoniazid, and pyrazinamide. The median treatment duration was 8 days (interquartile range, 2-19 days). Of the 103 individuals included in the study, six had tuberculosis involving more than two sites, including tuberculous peritonitis, pleuritis and pericarditis, maxillofacial tuberculosis, urologic tuberculosis, and spinal tuberculosis. A previous episode of tuberculosis, which had preceded the current infection by 2-31 years, was noted in 20 patients.
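As a small illustration of the dilution arithmetic and the proportion-based reporting described above, the following Python sketch (not the study's analysis code) reproduces the 1:100 pre-dilution and computes a 95% confidence interval for a seroprevalence such as 5 positives out of 103; the Wilson interval is our assumption, since the paper does not state an interval method.

# Illustrative sketch only (not the study's SPSS analysis).
from statsmodels.stats.proportion import proportion_confint

# 10 uL of serum into 990 uL of sample buffer -> 1:100 dilution.
sample_ul, buffer_ul = 10, 990
dilution_factor = (sample_ul + buffer_ul) / sample_ul
print(f"Dilution: 1:{dilution_factor:.0f}")

# Seroprevalence with a Wilson 95% CI (interval method is our assumption).
positives, n = 5, 103
prevalence = positives / n
low, high = proportion_confint(positives, n, alpha=0.05, method="wilson")
print(f"p-ANCA seroprevalence: {prevalence:.1%} (95% CI {low:.1%}-{high:.1%})")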
The serology testing for ANCA showed that p-ANCA was detected in 4.8% (5/103) of patients, and c-ANCA was not observed in any patient (Table 2). Anti-MPO antibodies were detected in 2/103 patients, both of whom were diagnosed with tuberculosis and ANCA-associated vasculitides. Three of the 103 patients without anti-MPO or anti-PR3 antibodies had a low serum titer of p-ANCA (1:10); of these, two patients had diabetes complicated by infection, and one patient had anaphylactoid purpura [Immunoglobulin A/Henoch-Schönlein purpura (IgA/HSP)], which is a form of vasculitis. DISCUSSION Although the most common clinical presentation of Mycobacterium tuberculosis infection is pulmonary tuberculosis, more than 10% of patients develop extra-pulmonary manifestations, which often delay the diagnosis and allow the chronic inflammation to progress. Patients with pulmonary tuberculosis present with typical clinical symptoms, including fever, cough, and hemoptysis, and with typical diagnostic findings that include nodular and cavitating lesions on chest imaging; chronic inflammation with necrotizing granuloma formation is histologically observed on lung biopsy. However, patients with autoimmune disorders, including systemic lupus erythematosus, and patients with systemic vasculitis, including Wegener's granulomatosis, share similar clinical and histological features with tuberculosis [5][6][7][8] . The detection of serum antibodies against MPO can be associated with microscopic periarteritis (MPA) and necrotizing and crescentic glomerulonephritis (NCGN). The anti-MPO antibody titer is associated with disease activity and can be used for early diagnosis, prediction of disease recurrence, and guidance of patient response to clinical treatment. Anti-MPO antibody positivity is highly suggestive of MPA and NCGN. The presence of serum anti-PR3 antibodies is indicative of Wegener's granulomatosis, which can be considered a primary vasculitis. Previously published studies showed that patients with tuberculosis may have serum ANCA, as well as anti-MPO and anti-PR3 antibodies. Clinical studies on the presence of ANCA in patients with tuberculosis have been controversial. As shown in Table 3, p-ANCA was the predominant pattern in four studies that included ANCA-positive patients. However, Florez-Suarez et al. reported that 44.4% of patients with tuberculosis showed ANCA positivity, that 80% of ANCA-positive patients had c-ANCA, and that 90% of ANCA-positive patients had anti-PR3 and anti-MPO 9 . The proportion of serum-positive tuberculosis patients has been shown to decrease during tuberculosis therapy 10 . In another study, p-ANCA was detected in 25% of Iranian patients with tuberculosis, and c-ANCA was detected in 3.1%; ELISA results showed that 75% of cases had anti-MPO, and 12.5% had anti-PR3, indicating a high ANCA seropositivity rate in tuberculosis 3 . However, Teixeira et al. showed that tuberculosis was associated with a low ANCA seroprevalence (10%), comprising 4% c-ANCA and 6% p-ANCA, and only one IIF-negative specimen was anti-PR3 positive 11 .
The clinical distinction between tuberculosis and Wegener's granulomatosis can be difficult at disease onset. According to some studies, ANCA testing has an important role in the differential diagnosis. However, there have been conflicting results regarding the presence of ANCA in tuberculosis. Although the reasons for the conflicting findings of these studies are not clear, it is likely that geographical and ethnic factors may have accounted for the differences. Additionally, drugs such as hydralazine and propylthiouracil are associated with ANCA. Elkayam 12 showed that serum anti-MPO normalized in most patients following treatment, whereas anti-PR3 increased in some treated patients, which is probably due to drug-related autoimmune phenomena 2,12 . Esquivel-Valerio et al. found that, in 68 tuberculosis patients, ANCA positivity detected by an indirect immunofluorescence assay (IIF) increased from 4.4% (one c-ANCA and two p-ANCA) to 28.8% (3 c-ANCA and 12 p-ANCA) after treatment, while anti-PR3 and anti-MPO were negative in all serum samples from patients with TB 10 . A published case report showed that, following treatment with RFP and ethambutol, the patient's serum anti-MPO and ANCA titers were high and a drug-induced nephritis was found 13 . Another case report showed that serum anti-PR3 became positive after anti-tuberculosis treatment with RFP, which indicated a strong correlation between RFP and ANCA 14 . We consider that the presence of ANCA might be due to drug-related autoimmune phenomena and, in some cases, might be affected by anti-tuberculosis treatment to a certain degree, with an increase in antibody titers to MPO and PR3 12 . Our study, which included 103 patients with a diagnosis of TB confirmed by chest X-ray and sputum microbiology, showed that the seroprevalence of ANCA was 4.9% (5/103), which is low. The patients with positive ANCA, especially those with anti-MPO, tended to have additional diseases, which included ANCA-associated vasculitis in two patients. In our study, the positive rate of serum p-ANCA was 40% among the ANCA-positive patients. Although anti-MPO positivity was found, no anti-PR3 was detected. Based on these findings, we conclude that anti-PR3/c-ANCA is a rare occurrence in tuberculosis, that autoimmune disease or vasculitis should be considered in the differential diagnosis, and that other diseases should be excluded, especially in tuberculosis-endemic areas. Compared with previous studies 1,3,4,10,11,[15][16][17] , this study recruited patients with many types of tuberculosis, including spinal tuberculosis, peritoneal tuberculosis, and tuberculous meningitis. Based on the findings of this study, patients who were ANCA-positive (especially for MPO or PR3) were more likely to have other ANCA-related disorders. Therefore, concurrent diseases, including the effects of drugs, should first be considered when ANCA positivity is found, even in countries with a high prevalence of tuberculosis. This study had several limitations. It had a small study population and was performed in China. Therefore, the findings of this study in a Chinese population with tuberculosis may not be applicable to other geographical areas or other ethnic groups, although they indicate a low seroprevalence of ANCA in China. However, the study findings, when considered together with previously published studies, support the conclusion that ANCA positivity without microbiological confirmation of tuberculosis is more likely to be due to systemic vasculitis.
TABLE 1 : Characteristics of 103 patients with tuberculosis. TABLE 2 : Antineutrophil cytoplasmic antibody test results for 103 patients with culture-positive tuberculosis. TABLE 3 : Clinical studies on ANCA, anti-MPO, and anti-PR3 in patients with tuberculosis.
2,567.6
2018-07-01T00:00:00.000
[ "Medicine", "Biology" ]
Draft genome sequence data of the facultative, thermophilic, xylanolytic bacterium Paenibacillus sp. strain DA-C8 Thermophilic, facultatively anaerobic, xylanolytic bacterial strain DA-C8 (= JCM34211 = DSM111723), newly isolated from compost, shows strong beechwood xylan degradation ability. Whole-genome sequencing of strain DA-C8 on the Ion GeneStudio S5 system yielded 69 contigs with a total size of 3,110,565 bp, 2,877 protein-coding sequences, and a G+C content of 52.3 mol%. Genome annotation revealed that strain DA-C8 possesses debranching enzymes, such as β-L-arabinofuranosidase and polygalacturonase, that are important for efficient degradation of xylan. As inferred from 16S rRNA sequences and average nucleotide identity values, the closest relatives of strain DA-C8 are Paenibacillus cisolokensis and P. chitinolyticus. The genomic data have been deposited at the National Center for Biotechnology Information (NCBI) under accession number BMAQ00000000. Value of the Data • The genome data from newly isolated strain DA-C8 contribute to understanding of the mechanisms of efficient degradation of lignocellulosic biomass, including xylan, by xylanolytic bacteria. • Comparison of the genome data of strain DA-C8 with data from other xylanolytic bacteria can yield information useful for enhancing the efficiency of xylanolytic enzymes. • The genome data of strain DA-C8 can aid the taxonomic delineation of new independent genera within Paenibacillus. Data Description Efficient hydrolysis of lignocellulosic biomass requires not only the participation of β-1,4-glycosidic chain-cleaving enzymes, such as endo-β-1,4-glucanase, cellobiohydrolases, and β-glucosidase, but also the cooperation of numerous hemicellulosic enzymes (e.g., xylanolytic enzymes) and side chain-cleaving enzymes (e.g., α-L-arabinofuranosidase) [1] . Cellulolytic and xylanolytic enzymes, in particular, have various potential industrial applications in a wide variety of areas, such as food engineering and the production of supplements, animal feed, bio-ethanol, and pulp [2][3][4] . The laundry and dish detergent industry is one of the primary consumers of industrial enzymes [2,3,5] . Among xylanolytic bacteria, Paenibacillus strains produce a variety of enzymes, including amylases, cellulases, xylanases, other hemicellulases, and lipases, with potential applications to the industrial manufacturing of detergents, food, paper, and biofuels [5] . Enzymes of Paenibacillus strains are highly active under industrially relevant conditions, and Paenibacillus strains can be produced at a lower cost than available alternatives by high-density culture [5] .
The screening, identification, and characterization of the functional properties of strongly xylanolytic bacteria are of crucial importance for the construction of applicable bioprocesses. To obtain a bacterium exhibiting efficient xylan-degradation ability under anaerobic and thermophilic conditions, we newly isolated strain DA-C8, assigned to the genus Paenibacillus, as a pure culture from compost. This strain was deposited at the RIKEN BioResource Research Center as JCM 34211 and at the German Collection of Microorganisms and Cell Cultures GmbH (DSMZ) as DSM111723. Strain DA-C8 possesses strong xylan-degradation ability under thermophilic anaerobic conditions. We compared the xylan-degradation abilities of DA-C8 and P. curdlanolyticus B-6, which is highly xylanolytic because of the production of an extracellular multienzyme complex [6] , using beechwood xylan (1% w/v). When we incubated DA-C8 and B-6 for 6 days at 55 °C under anaerobic conditions in previously reported BMN basal medium [7] or at 37 °C under aerobic conditions in Berg's mineral salt medium [6] , respectively, complete degradation of beechwood xylan was achieved earlier with strain DA-C8. Strain DA-C8 can thus degrade beechwood xylan more efficiently than can the xylanolytic P. curdlanolyticus B-6. The genome annotation confirmed the presence in strain DA-C8 of predicted enzymes essential for the degradation of xylan and lignocellulosic biomass, including endo-1,4-β-xylanase. Of particular interest, the detected debranching enzymes, such as β-L-arabinofuranosidase and polygalacturonase, are not present in the genome sequence of P. curdlanolyticus B-6 [8] . The contigs and annotated data of strain DA-C8 can be accessed at Mendeley Data [9] . Bacterial strain isolation and deposition into collections Strain DA-C8 was isolated from compost as described previously. Modified BMN medium [7] , which consisted of 2.9 g/L K2HPO4, 4.2 g/L urea, 2.0 g/L yeast extract, 1.0 g/L Na2CO3, 0.01 g/L CaCl2·2H2O, 0.5 g/L cysteine-HCl, and 0.0005 g/L resazurin in water plus 200 μL of an aqueous mineral solution (25.0 g/L MgCl2·6H2O, 37.5 g/L CaCl2·2H2O, and 0.312 g/L FeSO4·7H2O), supplemented with 1% (w/v) beechwood xylan as the sole carbon source, was used as the basal medium. All chemicals used for the basal medium were purchased from Fujifilm Wako Pure Chemicals, Osaka, Japan. The basal medium was flushed with high-purity nitrogen gas before autoclaving. Strain DA-C8 (= JCM34211 = DSM111723) was deposited in the open culture collections of the RIKEN BioResource Research Center (JCM) and the Leibniz Institute German Collection of Microorganisms and Cell Cultures (DSMZ). The culture of DA-C8 was centrifuged, and the pellet was used for DNA extraction. P. curdlanolyticus B-6 was cultivated on Berg's mineral salt medium at 37 °C under aerobic shaking conditions [6] . Genomic DNA purification and sequencing After cultivation of cells for 4 days under anaerobic conditions at 55 °C with xylose as the carbon source, genomic DNA was extracted by the phenol/chloroform method [8] and purified. DNA fragmentation and library preparation were carried out using an Ion Xpress Plus Fragment Library kit (catalog no. #4471269, Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions. Before library preparation, fragments approximately 200 to 300 bp in size were selected by electrophoresis on Invitrogen E-Gel SizeSelect II agarose gels (catalog no. #G661012, Thermo Fisher Scientific).
Genomic DNA sequences of strain DA-C8 were obtained using the Ion GeneStudio S5 system and then processed [8] . Phylogenetic analysis Sequences obtained by BLAST searching against the GenBank database were manually aligned with the 16S rRNA sequence of strain DA-C8 using CLUSTAL_W [10] . A phylogenetic tree was generated by the neighbor-joining method based on the Tamura-3 parameter model [11] in MEGA X v10.1 [12] . Genomic ANIs Calculation of pairwise ANI values of whole-genome sequences of strain DA-C8 and nine Paenibacillus strains was conducted in GENETYX NGS v4.1.1. The matrix generated from the calculated ANI values was converted into a genetic dendrogram using algorithms described previously [8] . Ethics Statement This research and analysis did not involve the use of human subjects or animal experiments. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships which have or could be perceived to have influenced the work reported in this article.
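The ANI-matrix-to-dendrogram step described above was performed in GENETYX NGS; the following Python sketch shows one generic way to perform an equivalent conversion with SciPy, using placeholder ANI values rather than the study's results.

# Minimal sketch of converting a pairwise ANI matrix into a dendrogram.
# The ANI values and strain labels below are illustrative only.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

strains = ["DA-C8", "P. cisolokensis", "P. chitinolyticus"]
ani = np.array([            # symmetric pairwise ANI (%), placeholder values
    [100.0, 82.0, 80.5],
    [ 82.0, 100.0, 81.0],
    [ 80.5,  81.0, 100.0],
])

distance = 100.0 - ani                       # convert similarity to distance
np.fill_diagonal(distance, 0.0)
condensed = squareform(distance, checks=False)
tree = linkage(condensed, method="average")  # UPGMA-style clustering

# Text rendering of the dendrogram leaf order (no plotting backend needed).
order = dendrogram(tree, labels=strains, no_plot=True)["ivl"]
print("Leaf order:", " -> ".join(order))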
1,668.8
2021-01-22T00:00:00.000
[ "Biology", "Engineering" ]
Modern approaches to pharmacotherapy of tuberculosis infection in children Anti-TB drugs for children: Aetiotropic therapy is used for the treatment of tuberculosis (TB) in children, as well as in adult patients. Anti-tuberculosis drugs (anti-TB drugs) are divided into 3 lines, taking into account the drug sensitivity of Mycobacterium tuberculosis (MBT). First-line anti-TB drugs (basic) are used to treat TB caused by drug-susceptible MBT. Second- and third-line (reserve) drugs are recommended for the treatment of MBT-induced multidrug-resistant (MDR) and extensively drug-resistant (XDR) TB, respectively. Stages and regimens to treat tuberculosis: Chemotherapy of tuberculosis in children is carried out in 2 stages (intensive treatment and continuation of treatment) and includes 5 regimens. Each regimen involves a certain combination of anti-TB drugs, indicating the duration and frequency of their administration. The final chemotherapy regimen is chosen only according to the results of determining drug sensitivity. To improve the TB epidemic situation among children, it is important to optimize the regimens for the use of anti-TB drugs. The effectiveness of anti-tuberculosis pharmacotherapy is largely determined by MBT sensitivity and the rational choice of the chemotherapy regimen. The wrong choice of a chemotherapy regimen, or its violation, threatens to reduce the effectiveness of pharmacotherapy and to expand the spectrum of resistance of the pathogen. The development of fixed-dose combination anti-TB drugs and special dosage forms for children will improve the quality of chemotherapy and adherence to treatment. Pharmacoeconomic studies are needed to increase the effectiveness of drug pharmacotherapy for tuberculosis infection in children and to optimize the costs of its implementation. Introduction An estimated 10 million people (12% of them children) worldwide fell ill with tuberculosis in 2019, according to the World Health Organization. The Russian Federation is one of the countries with the greatest burden of tuberculosis. The total incidence of tuberculosis in the country in 2019 was 41.2 per 100 000 population. The incidence of multidrug-resistant tuberculosis increased to 5.4 per 100 000 population in 2019. The epidemic of a new coronavirus infection caused by SARS-CoV-2, which began in 2019, has affected tuberculosis control. The number of people newly diagnosed with TB fell from 7.1 million in 2019 to 5.8 million in 2020. Reduced access to prevention, diagnostics, and treatment of TB has resulted in an increase in TB deaths: 1.3 million TB deaths among HIV-negative people and an additional 214 000 deaths among HIV-positive people. Still, we should not forget that tuberculosis is curable and preventable. The most effective method of treating tuberculosis patients is the use of aetiotropic chemotherapy with anti-TB drugs (World Health Organization 2021). Anti-TB drugs for children In the Russian Federation, the epidemic situation with regard to tuberculosis infection, including among the child population, remains difficult. The incidence of tuberculosis among children decreased from 12.4 per 100 000 population (2015) to 7.7 per 100 000 population (2019). Child mortality in 2019 was 0.3 per 100 000 population; the same indicator in 2015 was 0.6 per 100 000 population. The difficult situation is largely due to the development of drug resistance of Mycobacterium tuberculosis (MBT) to the main anti-TB drugs (Nechaeva 2020).
Russia, along with India and China, is included in the list of countries with a wide prevalence of drug resistance of MBT to anti-TB drugs (Vаsilyevа et al. 2017). The proportion of patients with MBT resistant to Isoniazid and Rifampicin (multidrug resistance, MDR) among patients with respiratory tuberculosis in 2019 increased to 30.1% (Federal Research Institute for Health Organization and Informatics of the Ministry of Health of the Russian Federation 2019). The most effective method of treating tuberculosis patients is the use of aetiotropic chemotherapy with anti-TB drugs. According to scientists, tuberculosis infection can be successfully treated with adequate drug supply and rational use of drugs. With adequate pharmacotherapy, it is possible in most cases to complete effectively the main course of treatment of patients with tuberculosis (Zinchenko et al. 2018;Nechaeva 2020). Several chemotherapy regimens are used to treat children with TB infection. Constant monitoring of MBT resistance to anti-TB drugs and selecting the optimal modes of their use are necessary to increase the effectiveness of anti-tuberculosis pharmacotherapy. This article presents an analysis of various regimens of chemotherapy for tuberculosis infection in children with the aim of optimizing them and increasing the effectiveness of the use of anti-tuberculosis drugs. The basis for the management of patients of any age with tuberculosis infection is the appointment of aetiotropic pharmacotherapy -chemotherapy. In Russia, the treatment of patients with tuberculosis infection is based on guidelines approved by the Ministry of Healthcare of the Russian Federation (MH RF). Currently, the provisions of the Order of the Ministry of Health of the Russian Federation no. 951 of December 29, 2014 and no. 1246n of November 24, 2020 are in force (Ministry of Healthcare of the Russian Federation 2014, 2020). In accordance with the orders, the clinical guidelines have been developed and approved, specifying the provision of anti-tuberculosis care to various categories of citizens, including for children and adolescents (Russian Society of TB Clinicians 2014, 2015. Medicines for the treatment of respiratory tuberculosis are divided into 3 lines, taking into account the drug sensitivity. First-line anti-TB drugs are the mainstay of treatment for TB caused by drug-susceptible Mycobacterium tuberculosis. These include Isoniazid, Pyrazinamide, Rifampicin, Rifabutin, Ethambutol, and Streptomycin. If a patient has MBT with multidrug resistance (MBT resistance to Isoniazid and Rifampicin), they resort to prescribing drugs of the second-line (reserve) -Kanamycin, Amikacin, Capreomycin, Levofloxacin, Moxifloxacin, Sparfloxacin, Bedaquiline, Protionamide, Ethionamide, and Aminosalicylic acid. The third-line, also a reserve one, includes antibacterial drugs: Linezolid, Meropenem, Imipenem+Cilastatin, and Amoxicillin+Clavulanic acid. 
They are recommended for the treatment of extensively drug-resistant tuberculosis (XDR) of the pathogen (MBT resistance to Isoniazid, Rifampicin, any drug from the group of fluoroquinolones and one of the injectable second-line anti-tuberculosis antibiotics: Kanamycin and/or Amikacin and/or Capreomycin) and pre-XDR (resistance of MBT to fluoroquinolone (Ofloxacin or Levofloxacin) or at least to one injectable second-line antibiotic (Capreomycin, Kanamycin or Amikacin), as well as in other cases when it is impossible to form a regimen of five effective drugs (Ministry of Healthcare of the Russian Federation 2014). The WHO recommends the use of Delamanid (registered in the Russian Federation in 2020) and Clofazimine as backup anti-TB drugs, but in Russia these drugs are not yet officially included in the therapy regimens. Aetiotropic pharmacotherapy is prescribed to suppress the growth and reproduction of the MBT population in the body and to cure the patient (Ministry of Healthcare of the Russian Federation 2014; Russian Society of TB Clinicians 2014). At the same time, an integrated approach to the chemotherapy of tuberculosis infection is important, involving the combined use of anti-tuberculosis and antibacterial drugs to influence the causative agent of the infection and drugs for the prevention and correction of side effects that occur during the treatment period. According to the literature Borisov 2017, 2018;Schegertsov et al. 2018;Athulnadh et al. 2020;Laghari et al. 2020;van der Walt et al. 2020), the most frequent side effects associated with the use of chemotherapy in patients with tuberculosis infection are toxic reactions (hepatotoxicity, gastrotoxicity, nephrotoxicity) and allergic reactions. The incidence of side effects from the liver is 44-60% of cases, which is confirmed by the results of studies by many researchers (Ivаnovа and Borisov 2018;Stаrshinovа et al. 2018). It was found that hepatotoxicity is more often associated with taking first-line anti-TB drugs -Rifampicin (67%), Pyrazinamide (30%), and Isoniazid (7%) (Schaaf et al. 2016;Stаrshinovа et al. 2018). To prevent the development of toxic reactions, at the same time with the start of anti-tuberculosis chemotherapy, it is recommended to prescribe corrective drugs to patients. The main drugs for the prevention and elimination of emerging side effects are hepatoprotectors and B vitamins, in particular Pyridoxine hydrochloride (Russian Society of TB Clinicians 2016; Stаrshinovа et al. 2018). Unfortunately, the rationality of using hepatoprotectors has not been fully proven due to the lack of sufficient scientific data obtained in accordance with the principles of evidence-based medicine (Novikov and Klimkina 2009). In pediatrics, anti-tuberculosis and antibacterial drugs are prescribed in maximum therapeutic doses (Table 1), taking into account the child's age and body weight, with controlled continuous daily intake in accordance with the prescribed chemotherapy regimen (Ministry of Healthcare of the Russian Federation 2014; Russian Society of TB Clinicians 2016). Stages and regimens to treat tuberculosis Tuberculosis chemotherapy includes 2 stages. The first stage is an intensive phase of treatment, which is aimed at destroying the maximum amount of MBT. It is the first stage of treatment at which the acute manifestations of the disease are eliminated, bacterial excretion is stopped, the development of drug resistance is prevented, and infiltrative and destructive changes in tissues are reduced. 
The second stage is the continuation phase of treatment. This stage targets the remaining MBT and prevents its reproduction. Two-stage treatment promotes the consistent involution of the tuberculosis process and a stable clinical effect, and it prevents the reactivation of tuberculosis (Ministry of Healthcare of the Russian Federation 2014). When prescribing anti-tuberculosis treatment and choosing a chemotherapy regimen, the epidemic danger posed by a patient, the severity of the course of the disease, and the presence of concomitant diseases and conditions in the anamnesis must be taken into account. Often, phthisiatricians have to resort to prescribing individualized treatment, in the construction of which they pay attention to the pharmacokinetics of anti-TB drugs and their interactions with one another. Several chemotherapy regimens have been developed and approved for the treatment of patients with respiratory tuberculosis (Fig. 1). A chemotherapy regimen is a combination of antituberculosis and antibacterial drugs, indicating the duration and frequency of their administration, the timing and content of control studies, and the organizational forms of treatment. The final chemotherapy regimen is selected only according to the results of determining the drug sensitivity of MBT, and when the nature of the sensitivity to the drugs used changes, the chemotherapy regimen is adjusted (Ministry of Healthcare of the Russian Federation 2014; Russian Society of TB Clinicians 2014, 2016). The first and third regimens are used when drug-susceptible tuberculosis is diagnosed; the second, fourth, and fifth are used in drug-resistant tuberculosis (Russian Society of TB Clinicians 2016). The first chemotherapy regimen is prescribed for children with bacterial excretion and with preserved drug sensitivity of the pathogen. The course of treatment takes at least 6 months, with 4 first-line anti-TB drugs in the intensive phase (at least 2 months, 60 doses) and two or three drugs in the continuation phase (at least 4 months, 120 doses). To achieve abacillation according to the first regimen in the intensive treatment phase, it is recommended to use a combination of Isoniazid, Rifampicin, Pyrazinamide, and Ethambutol. Because the use of Ethambutol can cause the development of optic neuritis, the drug is prescribed to children only by decision of a medical commission. Ethambutol can be replaced by Streptomycin, but only if the sensitivity of the secreted MBT is established. Rifapentine can be used in place of Rifampicin, although the two drugs differ. Children with tuberculosis with established drug resistance of MBT to Isoniazid and sensitivity to Rifampicin, or those from a contact with this type of drug sensitivity, are treated according to the second regimen. The duration of the regimen should be at least 9 months (intensive phase: at least 3 months, 90 doses; continuation phase: at least 6 months, 180 doses). In the intensive phase, 4 anti-TB drugs of both the first and second lines are combined, taking into account the results of determining the drug sensitivity of the pathogen. The recommended regimen is the following: Rifampicin, Pyrazinamide, and Ethambutol with Levofloxacin for 6 months in the intensive phase; Rifampicin, Pyrazinamide, and Ethambutol for up to 9-12 months in the continuation phase. The prescription of the third chemotherapy regimen is indicated for children with tuberculosis without bacterial excretion and without a risk of developing MDR.
The duration of the intensive phase should be at least 2 months (60 doses), and the continuation phase should be at least 4 months (120 doses). For treatment in the intensive phase, 4 anti-TB drugs are used -Isoniazid, Pyrazinamide, Rifampicin, and Ethambutol. The continuation phase includes Isoniazid with Pyrazinamide or Isoniazid with Rifampicin and Pyrazinamide. Ethambutol may be used instead of Pyrazinamide in the three-way regimen. The fourth chemotherapy regimen is prescribed for children with tuberculosis with established resistance of the pathogen to Rifampicin and Isoniazid and sensitivity to drugs of the fluoroquinolone group, with unknown drug sensitivity to other anti-TB drugs, as well as for patients at risk of MDR of the pathogen. For the treatment of such children (with MDR MBT), anti-TB drugs are used, divided into three groups depending on their effectiveness. Group A consists of Levofloxacin or Moxifloxacin with Bedaquiline and Linezolid. Bedaquiline is a new diarylquinoline drug approved for use in children. Clinical studies have confirmed its effectiveness and safety (Esposito et al. 2016;Pym et al. 2016;D'Ambrosio et al. 2017;Migliori et al. 2017;World Health Organization 2019). Cycloserine and Terizidone are in group B. Group C includes Ethambutol, Pyrazinamide, Imipen-em+Cilastatin, Meropenem, Amikacin (Streptomycin), Ethionamide or Protionamide, and Aminosalicylic acid. Kanamycin and Capreomycin are prescribed only for life-saving reasons due to their high toxicity. Treatment of patients with MDR according to the fourth regimen should be carried out for at least 18-20 months (long-term regimen) or at least 9-12 months (short regimen). The short regimen is indicated in the case of exclusion of resistance to drugs of the fluoroquinolone group and injectable antibiotics, and in cases of limited and minor forms of tuberculosis in children. The fourth treatment regimen consists of at least 5 drugs: Levofloxacin or Moxifloxacin, Bedaquiline, Linezolid, Cycloserine or Terizidone, and/or Protionamide or Ethionamide (Russian Society of TB Clinicians 2016). The fourth regimen can be standard or custom. An individualized regimen is indicated for patients with respiratory tuberculosis with established drug resistance of the pathogen to Isoniazid and Rifampicin and sensitivity to drugs of the fluoroquinolone group, with known results of sensitivity to anti-TB drugs of the second-line (reserve). In 2019, WHO introduced a new approach to the formation of tuberculosis chemotherapy regimens, including combinations of new drugs with antimycobacterial activity. In particular, the groups of drugs were revised according to the order of inclusion in the chemotherapy regimen when MBT with MDR or Rifampicin resistance is detected. Group A includes Levofloxacin (or Moxifloxacin), Bedaquiline, and Linezolid; Group B -Clofazimine and Cycloserine/Terizidone; Group C (if it is impossible to use drugs of groups A and B) -Ethambutol, Delamanid, Pyrazinamide, Imipenem+Cilastatin, Meropenem, Amikacin (Streptomycin), Ethionamide/Protionamide, Aminosalicylic acid (World Health Organization 2019). The use of the standard fifth chemotherapy regimen is justified in patients with tuberculosis with suspected XDR pathogen without bacteriological confirmation, as well as from a proved close contact with a tuberculosis patient with XDR-MBT. 
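To make the phase durations and dose counts quoted above easier to compare, the following Python sketch encodes a few regimens as simple data structures; the drug details are omitted, the numeric values are copied from the text, and this is purely illustrative rather than a clinical tool.

# Illustrative data structure only: regimen phases as quoted in the text,
# assuming one dose per day of continuous treatment (~30 doses per month).
from dataclasses import dataclass

@dataclass
class Phase:
    months_min: int
    doses_min: int

@dataclass
class Regimen:
    name: str
    intensive: Phase
    continuation: Phase

    def total_doses(self) -> int:
        return self.intensive.doses_min + self.continuation.doses_min

regimens = [
    Regimen("I (drug-susceptible, with excretion)", Phase(2, 60), Phase(4, 120)),
    Regimen("II (Isoniazid-resistant)",             Phase(3, 90), Phase(6, 180)),
    Regimen("III (no excretion, no MDR risk)",      Phase(2, 60), Phase(4, 120)),
]

for r in regimens:
    months = r.intensive.months_min + r.continuation.months_min
    print(f"Regimen {r.name}: at least {r.total_doses()} doses over {months}+ months")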
An individualized regimen is necessary for children with respiratory tuberculosis, with established drug resistance of the pathogen to Isoniazid and Rifampicin in combination with established or suspected resistance to Ofloxacin. The duration of pharmacotherapy according to the fifth regimen is at least 18-24 months (intensive phase -at least 6-8 months; continuation phase -at least 12 months). The terms of treatment for patients with limited uncomplicated processes with good positive dynamics can be reduced to 15-17 months (Russian Society of TB Clinicians 2016). In the intensive phase of therapy, at least 6 drugs are used, to which sensitivity is preserved. It is recommended to include Bedaquiline, Linezolid, and Levofloxacin in the regimens. In the continuation phase, 4 drugs are prescribed with the addition of Moxifloxacin or Levofloxacin and other drugs with preserved sensitivity. The effectiveness and safety of antituberculosis pharmacotherapy in children depends on the duration and continuity of treatment. Short course or premature refusal from chemotherapy does not allow achieving the final clinical effect, which results in the aggravation and progression of the tuberculosis process. Long-term use of chemotherapy in children is dangerous due to the occurrence of side effects with the development of gross disorders of cellular metabolism and a gradual decrease in the sensitivity of MBT to drugs. Violation of the regime for taking anti-TB drugs threatens to expand the spectrum of resistance of the pathogen. One of the main reasons for early discontinuation of chemotherapy for tuberculosis (especially in children) is the inconvenience of taking a large number of drugs. Recent studies have been aimed at this aspect of pharmaceutical care in pediatric phthisiology. Fixed-dose combined anti-TB drugs have been developed. An increase in adherence to treatment has been proven in the case of the use of such drugs (Faust et al. 2019;Tsiligiannis et al. 2019;Wademan et al. 2019). In addition, the combination of anti-TB drugs makes it possible to sum up their therapeutic effect. The destruction of MBT occurs faster, and the likelihood of drug resistance formation decreases (Faust et al. 2019). A recent Russian multicenter observational study demonstrated that the use of fixed-dose combined anti-TB drugs in patients with newly diagnosed tuberculosis or its recurrence with preserved MBT sensitivity to Isoniazid and Rifampicin in chemotherapy regimens I and III was effective and was characterized by sufficient safety and tolerability (Tyulkova et al. 2020). Another reason for the insufficient effectiveness of antituberculosis chemotherapy in children may be lack of special children's dosage forms of antituberculosis drugs on the Russian pharmaceutical market (Usacheva et al. 2020). A number of scientists point out the need to expand the range of anti-TB drugs used by developing and introducing oral liquid dosage forms, tablets for dispersion in the oral cavity (Kim et al. 2016;Purchase et al. 2019). Research is underway to develop innovative dosage forms for the treatment of tuberculosis by incorporating anti-TB drugs into nanoparticles, nanocapsules, and liposomes for further targeted transport to the site of infection (Zinchenko et al. 2018, Sanzhakov et al. 2013). Conclusion For the treatment of tuberculosis infection in children, as well as in adult patients, aetiotropic chemotherapy is used. 
Antituberculosis drugs are divided into 3 lines, taking into account the drug sensitivity of Mycobacterium tuberculosis. First-line anti-TB drugs (basic) are used to treat TB caused by drug-susceptible MBT. Drugs of the second and third lines (reserve) are recommended for the treatment of tuberculosis caused by MBT with multidrug resistance and extensive drug resistance, respectively. Chemotherapy of tuberculosis in children is carried out in 2 stages (intensive treatment and continuation of treatment) and includes 5 regimens. Each chemotherapy regimen involves a certain combination of anti-TB drugs, indicating the duration and frequency of their administration, and the organizational forms of treatment. The final chemotherapy regimen is selected only according to the results of determining the drug sensitivity of MBT, and when the nature of the sensitivity to the drugs used changes, the chemotherapy regimen is adjusted. To improve the TB epidemic situation among the child population, it is important to optimize the regimens for the use of anti-TB drugs. The effectiveness of antituberculosis pharmacotherapy is largely determined by MBT sensitivity and the rational choice of the chemotherapy regimen. The wrong choice of a chemotherapy regimen prevents achievement of the desired clinical effect and is associated with a risk of adverse reactions and a decrease in the sensitivity of MBT to drugs. Violation of the anti-TB drug regimen, including its duration and continuity, threatens to decrease the effectiveness of pharmacotherapy and to expand the spectrum of resistance of the pathogen. The development of fixed-dose combined anti-TB drugs and special dosage forms for children will improve the quality of chemotherapy. Increased adherence to treatment has been demonstrated with the use of such drugs. At the same time, the destruction of MBT happens faster, and the likelihood of the formation of drug resistance decreases. Pharmacoeconomic studies are needed to increase the effectiveness of drug pharmacotherapy for tuberculosis infection in children and to optimize the costs of its implementation.
4,587.8
2021-12-03T00:00:00.000
[ "Medicine", "Biology" ]
Using Online Videos to Improve Attitudes toward Shared Autonomous Vehicles: Age and Video Type Differences : Future adoption of shared automated vehicles (SAVs) should lead to several societal benefits, but both automated vehicles (AVs) and ridesharing must overcome their barriers to acceptance. Previous research has investigated age differences in ridesharing usage and factors influencing the acceptability and acceptance of AVs. Further complicating our understanding of SAV acceptance, much of the public lack accurate knowledge and/or actual experience regarding AVs. In this study, we employed a 3 (age group) × 4 (video condition) longitudinal mixed experimental design to investigate age differences in anticipated SAV acceptance after viewing different types of introductory videos related to AVs (educational, experiential, or both) or currently available ridesharing provided by transportation network companies (control). Younger, middle-aged, and older adults were randomly assigned to watch (1) an educational video about SAV technologies and potential benefits, (2) an experiential video showing an SAV navigating traffic, (3) both the experiential and educational videos or (4) a control video explaining how current ridesharing services work. Attitudes toward SAVs (intent to use, trust/reliability, perceived usefulness, perceived ease of use, safety, desire for control, cost, authority, media, and social influence) were measured before and after viewing the video(s). Significant differences in how SAV attitudes changed were found between the educational and experiential video conditions relative to the control video and between different age groups. Findings suggest that educational and/or experiential videos delivered in an online format can have modest but significant improvements to their viewers’ attitudes toward SAVs—particularly those of older adults. Introduction Autonomous vehicles (AVs) have promise and potential to bring a host of benefits to their users and positive externalities to transportation networks and society at large (e.g., increases in access to mobility, improvements in safety and comfort, reductions in traffic congestion and related greenhouse gas emissions, etc.), so long as they are deployed in a way that is sustainable.It is expected that SAE level 4 [1] AVs will improve riders' comfort during transit and allow those who were once burdened with the safety-critical dynamic driving task to focus on other productive tasks or relaxing activities [2].To realize any potential benefits of AVs, prospective riders must be informed and aware of their utility, have their uncertainties about the operation of novel AV technologies and services clarified, and find the reliability of AVs acceptable before actually experiencing a ride in them.To this end, the multifaceted factors influencing the acceptance of AVs have been widely studied e.g., [3][4][5][6].Nordhoff and colleagues' [6] review-based analysis of this literature displays the multideterminant nature of attitudes toward AVs.Nordhoff et al. 
found that published research on AV acceptance broke down into seven main classes, listed here in descending order by percentage: socio-demographics (28%), domain-specific system evaluation (22%; i.e., performance and effort expectancy, safety, facilitating conditions, and service and vehicle characteristics), travel behavior (15%), personality (14%), moral-normative system evaluation (12%; i.e., perceived benefits and risks), exposure to AVs (6%), and symbolic-affective system evaluation (4%; hedonic motivation and social influence). Travel needs and an individual's ability to meet them vary across the lifespan [7,8], and the acceptability of a transportation mode often varies by and within age cohort [9,10]. Perhaps unsurprisingly, younger individuals and those who have a high level of comfort with new technologies and identify as early adopters of them also tend to hold positive attitudes regarding emergent transportation innovations like AVs [5,11]. While some market segments may not require much enticement to adopt AVs, others might turn to AVs to support their community mobility and address unmet travel needs, such as those with disabilities or older adults with age-related declines that limit their viable transportation options [12][13][14]. There remains significant reticence about AV technology among many people with disabilities [15] and older adults [16], even though their quality of life might stand to benefit greatly from the adoption of AV technology once it becomes available. Older adults may especially benefit from this technology, as they are more likely to experience increased unmet travel needs as they age [8]. Precisely this phenomenon was witnessed in a preliminary study that simulated the impacts of personally owned AVs on future travel behavior. Harb and colleagues [17] provided households with an allotment of hours of a chauffeur service and instructed them to use it as if it were a completely autonomous vehicle. The most notable increase in vehicle miles traveled (VMT) came from an older woman who used the chauffeur service to more than triple the amount she drove in a week (117 miles vs. 516), citing the novelty and the ability to satisfy her latent demand for longer trips than she felt comfortable driving herself. Harb and colleagues' preliminary study results and the results that followed from the full experiment [18] strongly suggest that many of the potential safety, sustainability, and congestion-related benefits will not be realized by AVs if they are personally owned by individuals instead of shared. While improvement in access to transportation for those with mobility limitations is indeed a positive possible outcome of widespread AV adoption, there is reason for trepidation if personally owned or single-rider AVs are preferred over shared AVs (SAVs), since individuals have been witnessed to make up for these unmet travel needs when given access to an AV-like transportation option [17].
Though the visibility of AVs is increasing as developers extend their service ranges to more urban centers, the opportunity to experience vehicle automation is still quite limited anywhere outside of these zones. Research conducted after giving individuals experience with AV technology has shown that usage-informed factors such as perceived ease of use (PEOU) of AVs, intention to use the technology, and perceived barriers are significant indicators of attitudes toward AVs [19]; yet Nordhoff et al.'s [6] review found, at the time of its publication, that only 6% of studies of AV acceptance attitudes considered experience, knowledge, and/or exposure to AVs. Collectively, this suggests that efforts are warranted to familiarize individuals with AV technology by bolstering the accuracy and amount of their knowledge about its capabilities and limitations and/or by increasing their exposure to and experience of how AVs operate in the complex real-world environments in which they will be deployed. Will Sharing Rides Increase Acceptance of AVs? Shared automated vehicles (SAVs), defined in this paper as AVs in which the passenger is paired with other riders requesting transportation along a similar route, could lead to several additional benefits, including reduced traffic, reduced pollution from vehicles, and improved parking availability [5,11]. These vehicles would be a form of public transportation rather than privately owned vehicles. However, for these benefits to be realized, there needs to be a high level of public acceptance of SAVs [20]. For many, ridesharing can be a convenient and cost-effective transportation alternative to a personal car and can potentially help solve first-mile-last-mile problems (i.e., getting from home to a metro station and back) when using high-throughput public transit systems [21]. Previous studies have examined what factors influence a traveler's decision to use ridesharing services offered by transportation network companies (TNCs), such as UberPool or Lyft Shared Ride, where users are paired with other passengers requesting a ride along a similar route. Motivations for using ridesharing services include cost savings, travel time compared to public transportation, and comfort [22]. Demographic factors, such as gender, have seen mixed results regarding ridesharing use, quite possibly due to cultural differences between the regions studied and/or sampling differences in the studies. Some research found that males were more likely to use ridesharing services than females [5], while other work found no gender differences [22]. Age, on the other hand, has been found to be a significant factor in current trends in TNC ridesharing use, with younger individuals being more likely to use these services than older individuals [5,22]. There is some concern that there will be reticence to share the vehicle with strangers for a number of reasons, such as security and privacy concerns [23,24] or the inconvenience [25] associated with having other unknown riders share the ride.
There is some evidence from other work on advanced vehicle technologies that typical patterns of age and technology adoption/use might differ from the norms found with information communication technologies (ICT).Classen and colleagues found no difference in age for AV acceptance in a study that provided participants an opportunity to obtain first-hand experience riding in an automated shuttle as well as a simulated AV [11].Older adults have been shown to place higher monetary value on advanced driver assistance systems (ADAS) like blind spot monitors [26].They are also more willing to adopt other driving technologies.Familiarity with and trust in automated technologies have been shown to positively correlate to positive attitudes toward AVs [5,27].Given these findings, it is possible that older adults' attitudes toward AVs could be improved by increasing familiarity with and highlighting the benefits of the technology.Trust in AVs has been shown to increase with first-hand experience riding in one [27,28], and perceived safety influences both intention-to-use and perceived usefulness of AVs [2]. Computer-Mediated Communication to Improve SAV-Related Attitudes? Computer-mediated communication has become an appealing approach for marketing and consumer research due to its low cost, speed, and breadth of reachable audiences [29], much less expensive than incentive programs that have been suggested to increase the adoption of connected and automated vehicles (CAVs; [30]).While the use of online videos as a persuasion tool is still a relatively new field compared to more traditional computermediated communications, such as email campaigns, there has been some investigation into how effective different types of online videos are at appealing to their intended audience.For example, within healthcare, one study found that the instructional use of online videos on using a common psoriasis severity measure was able to improve the accuracy in assigning severity scores for both physicians and patients [31].In another recent study, the effectiveness of an educational, narrative-based online video was compared to that of traditional printed pamphlets in improving individuals' beliefs in their own ability to taper their opioid use as well as their behavioral intentions to do so [32].This research found that patients who viewed the online video displayed significant improvements in their attitudes toward the effectiveness of tapering their opioid use as well as their tapering self-efficacy when compared to those patients who viewed a pamphlet instead [32].This shows that the online video medium's enhanced communicative and persuasive effectiveness may be better for changing hard-to-change attitudes than printed materials.Video interventions have also been shown to be effective in modifying certain types of health behaviors, such as breast self-examination, prostate cancer screening, and sunscreen adherence [33].These are promising indicators for stakeholders that want or need to use broadly distributable and easily consumable online videos to inform consumers about novel technologies: by developing online media showing the technology in action, they can educate consumers and/or address any misconceptions they may have. 
Study Purpose

AVs have the potential to provide many benefits to their users, but again, only if the technology is accepted. SAVs should increase AV-related benefits to communities, as their use should optimally lead to reductions in the number of vehicles on the road if widespread adoption takes place. The current geographical limitations associated with providing in-person experience with AVs or SAVs raise the question of whether online methods of information and 'experience' distribution could be effective in improving attitudes toward these technologies with a broader audience. Our study focuses prospectively on age-related differences in attitudes after exposure to different types of information promoting shared automated vehicle (SAV) use. Gender is included as a covariate since there may be gender differences in using conventional ridesharing services like those offered by Uber/Lyft that may affect attitudes toward SAVs unrelated to the AV technology [5,22]. We aim to explore how the type and delivery method of information aimed at improving potential consumers' attitudes toward SAVs are affected by potential age-related differences in attitudes. While much of the previous research has focused on age differences in ridesharing usage or factors influencing acceptance of AVs, our study aims to combine these factors by looking at age differences in the malleability of anticipated acceptance of SAVs and the factors influencing anticipated acceptance. For the purposes of this study, we define SAVs as Society of Automotive Engineers (SAE) Level 4 and 5 vehicles, which are considered fully autonomous vehicles capable of driving themselves in most (L4) or all situations a human driver could manage (L5; SAE, 2016), being shared by riders traveling similar routes to their various destinations. SAE Level 3 (L3) was not considered because this study focuses on shared autonomous driving, where the vehicle is primarily responsible for the safety and performance of the driving and the human is a passenger, whereas in L3 the human operator is still ultimately responsible for driving performance and is likely the owner of the personal vehicle with that L3 system. We also specify anticipated acceptance because SAVs are not currently widely available for consumer use. Our hypotheses are as follows: Hypothesis 1 (H1): The educational video will have a positive effect on participants' attitudes toward SAVs; Hypothesis 2 (H2): The experiential video will have a positive effect on participants' attitudes toward SAVs; Hypothesis 3 (H3): When viewed together, the educational and experiential videos will have a more positive effect on participants' attitudes toward SAVs than either alone; Hypothesis 4 (H4): Younger participants will have a greater change in attitudes toward SAVs after watching the educational and/or experiential videos than middle-aged or older adult participants; Hypothesis 5 (H5): Younger participants will have more positive attitudes toward SAVs than middle-aged or older participants.
Attitudes toward advanced vehicle technologies might be improved by increasing exposure to, and thus familiarity with, them. Previous research has shown that first-hand experience with AVs can increase trust, which influences intent-to-use [27,28]. Because first-hand experience is difficult to make available to a wide audience at this point in the technology's development, and because persistent pandemic conditions interrupted in-person data collection, we aim to examine whether, and what types of, online videos (educational or experiential) would be effective in influencing potential users' attitudes toward SAVs.

Experimental Design

This study employed a 3 × 4 (age group × video condition) longitudinal mixed experimental design, with the between-subjects factors being age group and video condition assignment (control, educational video only, experiential video only, and both educational and experiential videos), and the within-subjects component coming from changes in SAV attitudes before and after viewing the randomly assigned video(s).

Participants

To determine how many participants were necessary to detect an effect size of ~0.25 using an F-test for a repeated-measures within-between interaction, an a priori power analysis was performed using G*Power [34]. A Cohen's f effect size of 0.25 was used because this was the smallest significant effect size found by Classen and colleagues [28] in their study, which used a scale similar to the one we used for our pre-post measurements. Using three groups of 20 measurements (10 measures each from the pre- and post-condition surveys) with power set at 0.95, we calculated a minimum total sample size of 335 participants.

Prior to participant recruitment, we sought and gained approval from Clemson University's institutional review board (approval # IRB2020-315). We recruited three age groups of adults: younger adults aged 18-25, middle-aged adults aged 30-64, and older adults aged 65 and over. We recruited middle-aged and older adults through Prolific (www.prolific.com), an online data collection service, paying participants $9.50/hour. Younger adults were recruited as a convenience sample through Clemson University's SONA system (www.sona-systems.com) for course credit; students were given three course credits in return for their participation. All participants were US residents, and the survey took 35-45 min to complete. Data were collected in February and March 2021.

Materials

Respondents' attitudes toward SAVs might be influenced by several factors, including their current comfort with ridesharing services and their existing attitudes toward technology. To account for participants' comfort with ridesharing services, we used the measures implemented in Sarriera and colleagues' [22] study on dynamic ridesharing usage, with responses given using a 7-point Likert scale (see Appendix A). To account for respondents' perceptions of technology, we used a combination of preconceptions measures from Lee and colleagues [35] and experience measures from Mason and colleagues [36] using a 100-point slider scale, with greater values signaling more positive views of technology (see Appendix B). Older participants additionally completed an online version of the Montreal Cognitive Assessment [37] to capture any cognitive impairment.
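As a rough illustration of the a priori power analysis described in the Participants subsection, the sketch below uses statsmodels' one-way ANOVA power solver. This is only an analogue: G*Power's repeated-measures within-between routine additionally accounts for the number of repeated measurements and their correlation, so its result will differ, and the alpha value shown here is an assumption rather than a figure taken from the study.

```python
# Rough analogue of the a priori power analysis described above, using
# statsmodels' one-way ANOVA power solver. G*Power's "repeated measures,
# within-between interaction" routine also accounts for the number of
# measurements and their correlation, so the number it reports will differ;
# this sketch only illustrates the effect-size / alpha / power trade-off.
from statsmodels.stats.power import FTestAnovaPower

solver = FTestAnovaPower()
n_total = solver.solve_power(
    effect_size=0.25,  # Cohen's f, the smallest effect reported by Classen et al.
    alpha=0.05,        # conventional significance level (assumed)
    power=0.95,        # desired power
    k_groups=3,        # three age groups
)
print(f"Approximate total N for a between-subjects one-way ANOVA: {n_total:.0f}")
```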
Our dependent measure was the Shared Automated Vehicle User Perception Survey (SAVUPS), a version of the Automated Vehicle User Perception Survey (AVUPS; see [36] for the original version; see Appendix C for our modified SAVUPS) that was lightly modified to specifically assess attitudes toward SAV services. The AVUPS has established face and content validity [36] as well as construct validity and test-retest validity [19]. We delivered the SAVUPS before and after participants watched the video(s) assigned to their condition. Responses from this survey can be broken down into the following dimensions that affect an individual's attitude toward AVs: intention to use, trust/reliability, perceived usefulness (PU), perceived ease of use (PEOU), safety, desire for control/driving-efficacy, cost, authority, media, and social influence. Finally, the post-video SAVUPS also concluded with four open-ended questions regarding respondents' attitudes toward AVs.

Our four conditions included several videos (control, educational, experiential, and both educational and experiential) that we found or produced and that were differentiated based on the videos' content. We produced a seven-minute educational video using information gathered from the Partners for Automated Vehicle Education (PAVE) website (www.pavecampaign.org) that introduced the different technologies that enable automated driving, what kinds of tasks automation performs better or worse than human drivers, and the potential benefits of AV acceptance. The educational video is intended to be objective and informational rather than persuasive, but the information presented may cast AVs in a positive light because only the potential benefits of AVs are discussed. Our experiential video used raw footage provided by an AV developer (Zoox, Inc.; www.zoox.com) of one of their AVs driving around San Francisco, which included both a representation of what the automated driving system (ADS) 'sees' and footage from cameras mounted on the hood and both side mirrors (Figure 1). This experiential video contains only footage of an AV successfully navigating various driving conditions, so it frames AVs in a positive light, but it only contains examples of the current state of the technology and does not discuss what the future might look like once technology advances far enough for fully automated vehicles to be the standard. For the video employed in the control condition, we used a two-and-a-half-minute pre-made video describing how ridesharing services like Uber and Lyft work that we found on YouTube [38].
Procedure

Once participants signed up for our study via SONA or Prolific, they were provided a link to a Qualtrics survey that randomly assigned them to either the educational video condition, the experiential video condition, both the educational and experiential videos, or a control condition that contained a video detailing how to use TNC services. Participants first filled out standard demographic information (gender, age, whether they lived in an urban/suburban/rural area, etc.) and completed the comfort-with-ridesharing and perceptions-of-technology sections. Older adults completed the MoCA between the demographics and ridesharing comfort sections. Next, participants completed the pre-video SAV survey, watched their condition's video(s), and then completed the post-video SAV survey and questions about comfort with human vs. automated drivers. Figure 2 illustrates the procedure participants completed during their involvement in the online study.
Because it was critical to our results that participants viewed the video(s) assigned to their conditions and retained their content, before conducting the analysis we removed participants who did not spend at least half the video length in the video block watching their video(s). We also removed participants who failed either of the two attention check questions we inserted into the survey (i.e., questions that explicitly instruct respondents to select a certain answer). Whether or not the videos were watched was determined by the length of time spent on the question with the video embedded. If the timing was less than 200 s or more than 1000 s, the participant's data were removed from the pool of data for analysis. These bounds were based on the educational and experiential video lengths being 442 and 420 s, respectively. The minimum of 200 s was chosen to account for the possibility that participants might choose to watch the videos at 2× speed. Spending longer than 1000 s on the page with the video(s) was taken as an indication that the participant clicked 'play' and then walked away or turned their attention to another task. Additional video comprehension questions gave us insight into how much of the information in the videos the respondent retained.
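A data-cleaning step like the one described above could be implemented as in the short sketch below. The column names (video_page_seconds, attention_check_1/2, assumed boolean pass flags) are hypothetical placeholders for the Qualtrics export fields; only the 200 s and 1000 s bounds are taken from the study.

```python
import pandas as pd

# Sketch of the participant-exclusion rules described above.
# Column names are hypothetical placeholders for the Qualtrics export.
df = pd.read_csv("qualtrics_export.csv")

MIN_SECONDS, MAX_SECONDS = 200, 1000  # bounds reported in the study

kept = df[
    df["video_page_seconds"].between(MIN_SECONDS, MAX_SECONDS)
    & df["attention_check_1"]   # passed first attention check
    & df["attention_check_2"]   # passed second attention check
]
print(f"Retained {len(kept)} of {len(df)} respondents")
```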
Analysis

To assess the effects of age and our videos on respondents' attitudes toward SAVs, we performed a 3 × 4 MANCOVA on the SAVUPS dimension difference scores (intent to use SAVs, trust in SAVs, perceived usefulness of SAVs, perceived ease of use of SAVs, and perceived AV safety) using the independent variables age group (younger (18-25), middle-aged (30-64), and older (65+)) and video condition (ridesharing control video, educational video only, experiential video only, both educational and experiential videos). We included the covariates gender, ridesharing comfort (i.e., how comfortable the respondent was sharing a ridesharing vehicle with another passenger), past and present ridesharing experience, and perceptions of technology, as well as the pre-video SAVUPS dimensions cost (i.e., how much cost influences their intent to use SAVs) and desire for control/driving-efficacy (i.e., their preference to drive themselves despite having automation available). To ensure participants watched the videos, we added a timer on the video pages of the survey and removed any participants who spent less than half the video length or more than one and a half times the video length on the page. That range was chosen to allow for participants who watched at double speed or rewatched portions. We also included video comprehension questions and removed participants who failed one or more comprehension questions.

Participants

Table 1 provides a breakdown of several participant characteristics by both the four video conditions and the three age groups. We were able to recruit 239 younger adults.
After the exclusions described above, our final sample consisted of 147 younger adults, 145 middle-aged adults, and 144 older adults, giving us a total of 436 participants included in our analysis. See Figure 3 for the baseline SAVUPS dimension scores by age group and Figure 4 for the SAVUPS dimension difference scores (post-video scores minus pre-video scores).

SAVUPS Difference Score MANCOVA

Levene's test was performed and was not found to be significant for any of the dependent variables, so the assumption of homogeneity of variance was not violated. Box's M test was also not statistically significant, so the assumption of covariance homogeneity was also not violated. Multivariate tests showed rideshare experience to be the only significant covariate (Pillai's Trace = 0.034, F(5, 414) = 2.87, p < 0.016, ηp² = 0.034), with more rideshare experience associated with significantly lower intent-to-use difference scores (F(1, 418) = 4.52, p < 0.036, ηp² = 0.011) and PEOU difference scores (F(1, 418) = 7.18, p < 0.009, ηp² = 0.017).

No significant interactions were found in the multivariate tests (Pillai's Trace = 0.082, F(30, 2090) = 1.16, p = 0.25, ηp² = 0.016), but a significant between-subjects interaction between video condition and age group was observed (F(6, 418) = 2.65, p < 0.02, ηp² = 0.037). Explored graphically (see Figure 5), it revealed that older participants in the control condition reported significantly higher PEOU difference scores after watching the control video on how to use ridesharing services than other age groups in that video condition. While such inconsistencies are to be questioned, we believe this has meaningful implications, which we elaborate on in the Discussion.
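The difference scores and the homogeneity-of-variance check reported above could be reproduced along the following lines; column names are hypothetical, and Box's M (not available in SciPy) would require an additional package, so only Levene's test is sketched here.

```python
import pandas as pd
from scipy import stats

# Sketch of the difference-score computation (post minus pre) and Levene's
# test for homogeneity of variance. Column names are hypothetical.
df = pd.read_csv("savups_pre_post.csv")

for dim in ["intent", "trust", "pu", "peou", "safety"]:
    df[f"d_{dim}"] = df[f"post_{dim}"] - df[f"pre_{dim}"]

# Homogeneity of variance across the four video conditions, one dimension shown
groups = [g["d_intent"].values for _, g in df.groupby("video_condition")]
stat, p = stats.levene(*groups)
print(f"Levene's test for intent-to-use difference scores: W = {stat:.2f}, p = {p:.3f}")
```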
The main effects observed for video condition and age group are detailed in the paragraphs that follow. See the descriptive statistics for the SAVUPS difference scores in Table 2 and the full results of this analysis in Table 3. It is worth noting that the relatively high variability seen in Table 2 is due to individual differences in how much scores changed; some people's attitudes changed a lot, others only a little.

Video Condition Findings

Multivariate testing showed that watching the educational and/or the experiential video had a significant effect on participants' SAV attitude difference scores, with a Pillai's Trace of 0.090, F(15, 1248) = 2.56, p < 0.002, ηp² = 0.030. Tests of between-subjects effects revealed that intent to use increased significantly more after watching the video(s) in the Both and Experiential conditions, F(3, 418) = 3.47, p < 0.017, ηp² = 0.024 (see Figure 6). Perceived safety difference scores also increased significantly more after viewing any of the intervention videos compared to the control condition, F(3, 418) = 6.88, p < 0.0001, ηp² = 0.47 (see Figure 7).
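For concreteness, the omnibus MANCOVA described in the Analysis subsection could be specified in Python roughly as follows; this is a sketch, not the study's SPSS syntax, and the column names are hypothetical. Covariates are simply additional terms in the model formula, and mv_test() returns the multivariate statistics (Pillai's trace, Wilks' lambda, etc.) for each term.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Sketch of the 3 x 4 MANCOVA on the SAVUPS difference scores.
# Column names are hypothetical placeholders for the analysis data set.
df = pd.read_csv("savups_difference_scores.csv")

formula = (
    "d_intent + d_trust + d_pu + d_peou + d_safety ~ "
    "C(age_group) * C(video_condition) + gender + rideshare_comfort + "
    "rideshare_experience + tech_perceptions + pre_cost + pre_driving_efficacy"
)
mancova = MANOVA.from_formula(formula, data=df)
print(mancova.mv_test())
```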
Discussion As seen in Figure 3, the baseline attitudes toward SAVs were low to middling for all age groups.Interestingly, there was not a large difference between younger and older participants' baseline attitudes toward SAVs, as we expected based on previous literature [5,11,36].After viewing one of our intervention videos, attitudes shifted in a positive direction, but the observed effect sizes were only in the small to medium range (ηp 2 values typically fell below 0.04).For example, average intent to use SAVs scores suggested a slight reluctance at baseline.After our online video intervention, the average intent to use scores suggested a neutral intent to use SAVs.It is worth noting that this stated intent to use a shared mode of transportation may have been muted by pandemic conditions at the time of data collection in the spring of 2021, when shared modes were justifiably extremely limited in availability and/or their use [39,40].While this shift in behavioral intentions to use SAVs is in a positive direction, it may only make someone strongly opposed to SAVs slightly more open to the idea of them.Trends of participants being slightly more positive about their attitudes toward SAVs after educational or experiential online video exposure can be seen across the other SAVUPS dimensions.This finding adds improving positive attitudes toward SAVs to a growing list of topics for which easily deployable online instruction methods prove to be either adequate substitutes for or useful boosters in domains such as the treatment for patients in psychotherapy [41], teachers earning credentials [42], as well as brief online interventions to reduce social anxiety [43].Cases that do not align fully are enlightening as well, as some online programs are deemed insufficient alone, and an in-person component must be included [44]. We found that short online videos were useful in improving attitudes toward SAVs, supporting H1 and H2.Both types of videos were similarly effective from a statistical standpoint, so H3 (i.e., that the combination condition would be more positively inclined than any single type of video) was not supported.Still, this is promising for future promotional campaigns that companies intending to offer SAV services may want to initiate to increase their profile among potential riders.While the subjective results observed in this study may not directly impact use behavior, they can serve as indicators for future behavior.Both video conditions that contained the experiential video showed the potential to increase participants' intent to use SAVs, which provides evidence that short online videos showing AVs safely navigating different, somewhat difficult driving conditions improve the likelihood of SAV services intending to be used by individuals of all ages that view them.Both educational and experiential videos also positively impacted perceptions of safety across the age groups in this study, suggesting that either knowing more about how SAVs work or seeing them in action may improve perceived safety.Findings from this study suggest that both experiential and educational video approaches can have a positive effect on potential users' perception of SAVs and could be integrated into strategies for preparing the public for a future where SAVs play an important part in everyday transportation. 
Knowing which methods different age groups respond to most positively when it comes to learning about and accepting SAVs can help stakeholders planning to launch these kinds of services target their messaging.For example, older adults displayed significantly greater increases in PU, trust, and PEOU than their younger counterparts after watching 7-15 min of online videos, which shows that the usefulness, trustworthiness, and ease of use of SAVs can be effectively demonstrated using such a brief, easily distributable medium.SAV stakeholders could host promotional events aimed at older populations, giving potential users experience with these technologies.In fact, evidence of the potential utility of providing general training on how to use currently available TNC services was observed in an unanticipated between-subjects tests interaction (Figure 5). For the most part, middle-aged adults did not differ significantly from older or younger adults and fell somewhere in between the two.The exception is for PEOU, where older participants' PEOU ratings of SAV services were significantly higher than both middleaged and younger adults.Younger adults, in fact, had non-significant but slightly negative changes in PEOU after watching the educational video, failing to support H4 and H5 that they would show greater positive shifts (H4), leading to higher overall attitudes towards SAVs (H5).This may have been because of either the technology explanation content or because younger adults had overly optimistic views of the technologies, and the explanation brought their expectations down a bit.Interestingly, older adults' PEOU ratings benefitted from viewing the control condition's instructional TNC ridesharing video rather than just viewing the educational and/or experiential videos.This implies that older participants, relative to younger and middle-aged participants, had a lack of understanding of how currently available TNC services might be hailed from their smartphones.Only roughly 4 in 10 older adults are smartphone users [45], and this number seems to be increasing.This lack of familiarity and/or comfort with using such technology may be an inhibiting factor limiting older adults' use of current and future ridesharing services.It is possible that these older participants might be conflating the TNC services described in the control video with SAV services, but recent divestitures and/or partnerships made by TNCs regarding their self-driving ventures [46,47] suggest that future SAV services might be hailed quite similarly to today's TNC rides. While it is promising that promotional campaigns delivered via online video can be modestly effective in improving attitudes, it is worth keeping in mind that there were individual differences in the video's effectiveness and that it is still likely that the in-person experience would be more effective.Classen and colleagues [28] observed moderate to large effect sizes in their in-person study, whereas ours had smaller effect sizes.However, due to the costs of such in-person demonstrations and the wider range of people an online campaign could reach compared to smaller, targeted, in-person interventions, we believe that online videos like the ones used in this study have the potential to have a more widespread impact on the general public's SAV attitudes than in-person demonstrations.It is also worth noting that interventions like these could be safely deployed during a global pandemic rather than waiting for it to be safe to return to in-person interactions. 
This online survey study was not without limitations.One was our limited control over participants' attentiveness to our video interventions.We mitigated the issue of video attentiveness by removing any participants who spent less than half the video length on the video page and who missed more than one video attention check question, but even with those measures in place, it is difficult to ascertain what extent the video content was absorbed by participants.Another limitation was due to the homogeneity of the sample, which was restricted to the continental US in the middle and older age groups and to a medium-sized university in the southeastern US.Different research has sought to collect data in multiple countries and also compare personal and shared ride models, providing insights into differences in markets and business models [48].Our younger adult sample was more homogenous than typical online samples due to local convenience sampling.Younger participants were all students at Clemson University, and their lack of changes in attitudes may have been due to their location in a rural area where there is low availability of any kind of TNC services, and SAV deployment in such areas is unlikely to happen any time soon.Additionally, another limitation is the complex and intertwined nature of SAV attitudes.It is difficult to tell from a single online study what criteria any given participant's reasoning for the responses we collected was based upon.Is the threat of COVID-19 infection leading to a muted effect on participants' willingness to participate in ridesharing?Is the potential physical threat from other unknown riders a consideration?Or is the primary driver of attitudes more the novel, relatively untested, safety-critical technology that AVs rely upon?All of these are questions that will need to be answered before we can say with certainty what kinds of interventions will work best for which age groups when it comes to SAV attitudes. Conclusions Participants of varying ages participated in an online survey study to gauge the impact of educational and experiential videos on their SAV attitudes, which were measured before and after watching the intervention videos.Participants viewed videos with different information presentation strategies (experiential, educational, and ridesharing control).Significant changes were found between the pre-and post-video scores both between video types and across age groups.We observed small to medium effect sizes with online information dissemination.While the effect sizes were not as large as in-person experiences with AVs [28], online videos make it easier to reach potential users than having to bring users to a physical space, particularly older adults.These results are promising for the scalability of information dissemination for SAV stakeholders and potential riders. Appendix C. 
Shared Automated Vehicle User Perception Survey

Definition: An automated vehicle (i.e., self-driving vehicle, driverless car, self-driving shuttle) is a vehicle that is capable of sensing its environment and navigating without human input, with full-time automation of all driving tasks on any road, under any conditions, and does not require a driver or a steering wheel.

Directions: Please place a vertical dash (|) on the scale (by moving the slider) to display the degree to which you agree or disagree with the statement. One-hundred-point slider from "Disagree" to "Agree".

I am open to the idea of using shared automated vehicles.
I am suspicious of automated vehicles.
I believe I can trust automated vehicles.
I would engage in other tasks while riding in an automated vehicle.
I believe automated ridesharing services would reduce traffic congestion.
I believe automated ridesharing services will alleviate parking headaches.
I believe automated ridesharing services will allow me to stay active.
Automated ridesharing services will allow me to stay involved in my community.
Automated ridesharing services will enhance my quality of life/well-being.
I expect that automated ridesharing services will be easy to use.
I expect that it would require a lot of effort to figure out how to use automated ridesharing services.
I would use an automated ridesharing service on a daily basis.
I would rarely use an automated ridesharing service.
Even if I had access to an automated ridesharing service, I would still want to drive myself occasionally.
It will be important for there to be the option for a human to drive when using an automated ridesharing service.
My driving abilities would decline due to relying on an automated ridesharing service.
I would be willing to pay more for an automated ridesharing service compared to what I would pay for a traditional ridesharing service.
If cost was not an issue, I would use an automated ridesharing service.
I would use an automated vehicle if the National Highway Traffic Safety Administration (NHTSA) deems them as being safe.
Media portrays automated vehicles in a positive way.
My family and friends would encourage/support me when I use an automated ridesharing service.
When I'm riding in an automated vehicle, other road users will be safe.
I believe that automated vehicles will increase the number of crashes.
I would feel safe riding in an automated vehicle.
I feel hesitant about using an automated vehicle.

Figure 1. Driver's view and automated driving system's view of the AV's surroundings.
Figure 5. Observed between-subjects video condition by age group interaction on perceived ease of use. NOTE: Error bars are 95% CIs.
Table 2 note: Covariates appearing in the model are evaluated at the following values: gender = 1.60, technology perceptions = 73.54, rideshare experience = 3.84, rideshare comfort = 4.07, SAVUPS driving = 211.47, SAVUPS cost = 71.77.
Figure 6. SAVUPS intent to use difference scores by video condition. Error bars are 95% CIs.
Figure 7. SAVUPS AV safety difference scores by video condition. Error bars are 95% CIs.
Figure 8. SAVUPS trust in AVs difference scores by age group. Error bars are 95% CIs.
Figure 9. SAVUPS perceived usefulness of SAVs difference scores by age group. Error bars are 95% CIs.
Figure 10. SAVUPS perceived ease of use of SAVs difference scores by age group. Error bars are 95% CIs.
Table 1. Video condition and age group participant characteristics. NOTE: Values are Mean (SD).
Table 2. Video condition and age group SAVUPS difference scores. NOTE: Edu + Exp = Educational and Experiential Videos.
Table 3. Results of SAVUPS difference score MANCOVA (tests of between-subjects effects).
10,284.4
2024-03-20T00:00:00.000
[ "Computer Science", "Engineering", "Psychology" ]
An Empirical Investigation of the Factors Influencing Formal and Informal Employment in the City of Asmara

This study investigates the factors influencing the formal and informal labor markets in Asmara, the capital city of Eritrea. The findings reveal that variables such as age, gender, education, and birth place influence the formal and informal labor markets of the city. The chances of young people getting jobs in the formal sector are low relative to older people. A higher educational level is related to securing jobs in the formal sector. Regarding gender, males have better chances in the formal sector than females. People from the Maekel/Central Region (townships surrounding Asmara) are more likely to engage in self-employment. Overall, the results reveal that the labor market in Asmara shows varied characteristics.

Background and Empirical Literature

As far as urban labor markets in developing countries are concerned, they are generally classified into a formal and an informal sector (Pradhan and van Soest, 1995). The informal sector includes all jobs in informal sector enterprises. According to the OECD (2009), the informal sector was mainly considered a characteristic of developing countries, and it was assumed that it would disappear as these countries' economies developed. Mazumdar (1989) describes an urban labor market structure in a typical developing country as being subdivided into three main categories: the formal sector, the informal sector, and the unemployed. Similarly, the International Labour Organization (ILO) also categorizes employment in the informal sector as "employment in the informal sector" and "informal employment", with the informal sector being the largest sector in many countries (ILO, 2002).

The question that is central to formal or informal work is whether individuals choose to work in the informal sector or opt to work in that sector as the only alternative at their disposal. The former view considers employment in the informal sector to be supply-led and voluntary (Heckman and Sedlacek, 1985; Maloney, 2004; Packard, 2007), while the latter views informal work as a secondary market where all those without access to the primary formal market find themselves (Fields, 1990). Fields (2005) also expanded the debate by presenting a third characterization of the informal sector as a 'last resort sector', a 'desirable sector', and, with 'internal dualism', a combination of the first two. Heterogeneity in the formal and informal sectors has usually been addressed by distinguishing labor within the formal and informal sectors according to employment type as well as position in the earnings distribution (Arias and Khamis, 2008; Bargain and Kwenda, 2009; Nguyen et al., 2011; Tansel and Kan, 2012; Harati, 2013). As mentioned above, the informal sector is crucial for the functioning of the labor market, since it affects income distribution (inequality) and poverty, with implications for efficiency in terms of the allocation of labor. This explains why the role of the informal sector has recently been analyzed extensively.
What strategies should be in place to enable governments in developing countries to generate new employment and income opportunities and reduce informality and unemployment? In this regard, the need to create employment opportunities in Eritrea is underscored by the fact that the size of the informal market has been growing rapidly due to several economic and social issues. There are various research reports on the determinants of labor market participation and labor market modeling. This work has generally distinguished three strands of the labor market, namely the formal sector, self-employed informal sector participants, and informal wage earners.

Several scholars, using logit, random utility, and ordinary least squares (OLS) models, have conducted research in various countries including Guinea, Kenya, Ghana, Tanzania, Morocco, Cameroon, Burkina Faso, Mexico, and Pakistan. Their findings show that the more educated a person is, the more likely he/she is to be employed in the formal sector (Glick and Sahn, 1997; Mariara, 2003; Rankin et al., 2010; Amin et al., 1995; Irfan, 1983; El Aynaoui, 1997; Traore, 2013; Faridi, 2011; Gong et al., 2000). However, this does not mean that other variables have no influence in determining labor market outcomes. For instance, age and place of residence influence whether a person chooses the formal or informal sector.

The main objective of this study is to investigate the factors that influence the formal and informal labor market in Eritrea. We employed a multinomial logit model, dividing labor market participation into formal employment (private and public) and informal employment. The specific research questions of this study are: What are the factors that influence the labor market in Eritrea? What are the determinants of an individual's occupational choice in the labor market?

Methodology

This study investigates the determinants of the formal and informal labor market based on data collected from the metropolitan area of Asmara, Eritrea. In conducting this study, primary and secondary data were used. Primary data were collected from individual residents of the city using a survey questionnaire. A total of 1200 questionnaires were distributed. Of the distributed questionnaires, we obtained 1080 correctly completed, usable questionnaires, a 90% response rate. Individuals were requested to give information related to their participation in the labor market. Moreover, relevant socioeconomic and demographic characteristics of the individuals were also collected. This is helpful for estimating the probability of being in each employment type and allows the marginal impact of explanatory variables to vary across the employment types. This study provides a practical explanation of the different determinants of employment choices. Thus, the study employs a quantitative approach for the purpose of examining the magnitudes of the effects of various factors. The data collected were analyzed and interpreted using SPSS version 23 and a multinomial logit (MNL) regression model.
Data Analysis

The data gathered were analyzed using a multinomial logit model, which is the most common method of describing how individuals choose between different occupational alternatives. As mentioned above, in this study individuals are sorted into three labor force categories: formal employment (private and public), informal self-employment, and informal wage employment in informal and formal enterprises. The model allows the dependent variable to take three mutually exclusive and exhaustive values, j = 0 (formal employment), 1 (informal self-employment), or 2 (informal wage employment). The explanatory variables used include the individual's socioeconomic and demographic characteristics such as age, gender, marital status, level of education, household size, religion, birth place, ethnicity, and income.

In this paper, we use a latent utility function framework to analyze the characteristics of occupational choice among residents of the Asmara metropolitan area and to look for relationships that the above-mentioned characteristics, such as sex, age, education, income, and others, have with an individual's choice of sector. The results can be used to find, understand, and compare the attractiveness of each choice and to determine the reasons and motives behind each of these choices.

Designing a choice model requires extensive evaluation of the observed data and of the efficiency of the whole model system. In the current study, specific parameters are expected to influence individuals' behavior when they face different choices. These parameters consist of the gender (Gender), educational level (Education), age (Age), household size (HHSize), marital status (Mstatus), religion (Religion), and birth place (BirthPlace) of the respondent (respondents are categorized based on their birth place from one to four: Asmara = 1; Maekel Region = 2; other Eritrean Regions = 3; born outside of Eritrea = 4).
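The study fitted the model in SPSS; as an illustration only, a comparable multinomial logit could be estimated in Python as sketched below. The file name and column names are hypothetical, the outcome coding mirrors the j = 0/1/2 scheme above, and in practice categorical predictors such as religion or birth place would be dummy-coded rather than entered as raw integer codes.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Minimal sketch of the multinomial logit described above (SPSS was used in
# the study). Column names and the data file are hypothetical placeholders.
df = pd.read_csv("asmara_labour_survey.csv")

# Outcome: 0 = formal employment (reference), 1 = informal self-employment,
#          2 = informal wage employment
y = df["employment_type"]

# Categorical predictors would normally be dummy-coded; kept simple here.
X = sm.add_constant(df[["age", "gender", "education", "hh_size",
                        "marital_status", "religion", "birth_place"]])

result = sm.MNLogit(y, X).fit(method="newton", maxiter=100)
print(result.summary())

# Coefficients are log-odds relative to the formal-employment base category;
# exponentiating gives odds ratios (values below 1 indicate lower odds of the
# informal category relative to formal employment).
print(np.exp(result.params))
```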
Descriptive Analysis

The data comprise 1080 respondents, 44.8% of whom are female. Of the total respondents, 545 are formal employees, 291 are informal self-employed, and 244 are in informal wage employment in informal and formal enterprises. The mean age of respondents is 36.7 years, with a minimum of 12 years and a maximum of 75 years. The low minimum age reflects the presence of young and poor respondents who work in the informal self-employment sector as petty retailers. In the analysis, formal employment was used as the base (reference) category, so that the other two choices (self-employment and informal wage employment) were compared to this base. The model summary, as presented in Table 3, shows a likelihood ratio value of 265.683, which is significant at the 0.0001 level. The pseudo R-squared values (Cox and Snell and Nagelkerke) of 0.218 and 0.250 reveal the model is useful in predicting the employment choice of respondents. The regression results, as presented in Table 5, show that determinants such as age (Age), gender (Gender), birth place (BirthPlace), marital status (Mstatus), household size (HHSize), education (Education), and religion (Religion) statistically affect the probability of a respondent's choice of employment sector in the labor market.

The table shows the most acceptable model. Some of the models that were analyzed revealed inadequate statistical goodness-of-fit and/or had counter-intuitive signs, and were therefore invalidated and discarded. As stated previously, the basic idea behind the choice model estimation was to identify factors influencing respondents' choice between formal and informal sector employment. Most of the variables presented have significant parameter estimates.

Of the specific parameters used to predict the choice of employment sector, the demographic variable age (Age) is important for the two informal subsectors of employment (informal self-employment and informal wage employment). The result is significant with a negative coefficient, implying that the likelihood of people being employed in these informal subsectors decreases with age. The odds ratios are also below one, supporting this argument. As age is related to experience, this result concurs with Goldar (2010), in that generally older individuals are preferred in the formal labor market.

The gender (Gender) variable has a negative coefficient and is statistically significant in the informal self-employment and informal wage employment subsectors, and its odds ratios are less than one. This implies that the chances of males being in the informal self-employment and informal wage employment subsectors are low. The result simply shows that the majority of males have a high probability of working in the formal sector compared to females, for the simple fact that males face fewer barriers related to qualification and discrimination. In addition, some women may prefer the informal sector in order to cope with the need to care for children and domestic chores, to the extent that this sector enables them to combine productive and reproductive work.
Regarding the Education variable, its coefficient is negative and significant. The negative effect of education on these two subsectors implies that having better education diminishes the chances of being employed in these two subsectors, or enhances the opportunity of working in the formal sector. This chance increases with the level of education and is therefore higher for university education. The odds ratios are also below one, supporting this argument.

Finally, the Birthplace variable is introduced to explain the labor market choice behavior of the respondents. The result was found to be positive and significant for the Maekel Region category, indicating that the probability of employment in the informal self-employment sector increases for those who are born in this region. Maekel (Central) is one of the six regions of Eritrea, and the city of Asmara is located in this region. Residents of this region have easy access to the city, as they live fifteen to thirty kilometers away from it. In the peak seasons they work on their farms, and during slack seasons they come to the city to engage in informal self-employment activities.

Conclusions and Implications

The main goal of this paper is to assess the variables that determine the choice of employment in the formal/informal sector in the metropolitan area of Asmara. As discussed in the preceding section, four variables (Age, Gender, Education, and Birth Place) are important in determining the sectoral choice. That is, the age variable is statistically significant with a negative coefficient; the gender variable has a negative coefficient and is statistically significant; the coefficient of the education variable is negative and significant; and the result for the birth place variable was found to be positive and significant for the Maekel Region category.

In general, formal sector employment in both the public and private sectors is male-dominated, while women occupy the inferior informal sector (inferior in the sense of low incomes, precarious tenure, and unregulated forms of employment). Regarding the formal employability of workers, improving the quantity and quality of education is important as an enabling instrument. In pursuit of educational achievements, gender imbalance has to be addressed as a way of increasing the professionalism of women and emancipating them from being prey to informal employers.

This study raises many questions for further research while identifying education and employment policy gaps: What specific skills or qualities do employers look for when recruiting new employees? Is the current education system demand- or supply-driven, and does it equip graduates with adequate skills to become self-employed? In addition, a great concern is to identify whether the gender discrepancy is a result of labor market discrimination against women or is justified on the basis of human capital skills. Answers to these questions have broad policy implications for the achievement of gender balance in education, the labor market, and poverty eradication.

Our study is not without limitations. It focuses on supply-side factors only, excluding the demand side. A thorough understanding of demand would be necessary to complement such a study, but relevant data on the demand side are lacking. Future research and survey data collection methodologies should incorporate demand-side information.

Table 1. Summary statistics of respondents (Tables 2 and 3 provide further details).
Table 5. MNL results for labor market choice.
3,003
2018-04-29T00:00:00.000
[ "Sociology", "Economics" ]
Synergy in monoclonal antibody neutralization of HIV-1 pseudoviruses and infectious molecular clones

Background: Early events in HIV infection are still poorly understood; viruses derived from acute infections, the transmitted/founder IMCs, could provide more reliable information as they represent strains that established HIV infection in vivo, and they are therefore investigated to elucidate potentially shared biological features.

Methods: This study examined synergy in neutralization by six monoclonal antibodies targeting different domains in gp120 and gp41, assayed in pairwise combination against 11 HIV-1 clade B strains, either Env pseudoviruses (PV, n = 5) or transmitted/founder infectious molecular clones (T/F IMCs, n = 6). Three of the early-infection env tested as PV were juxtaposed with T/F viruses derived from the same three patients, respectively.

Results: All antibodies reaching an IC50 were assayed pairwise (n = 50). T/F IMCs showed overall lower sensitivity to neutralization by single antibodies than PV, including within the three patient-matched pairs. Remarkably, the combination index (CI) calculated using the Chou and Talalay method indicated synergy (CI < 0.9) in 42 data sets, and synergy occurred in T/F IMCs at a similar proportion (15 of 17 antibody-T/F IMC combinations tested) as in pseudoviruses (27 of 33). CI values indicative of additivity and low-level antagonism were seen in 5 and 3 cases, respectively. Most pairs showed comparable synergistic neutralizing effects on both virus groups, with the 4E10 + PG16 pair achieving the best synergistic effects. Variability in neutralization was mostly observed for pseudovirus isolates, suggesting that factors other than virus isolation technology, such as env conformation, epitope accessibility, and antibody concentration, are likely to affect polyclonal neutralization.

Conclusions: The findings from this study suggest that the inhibitory activity of bNAbs can be further augmented through appropriate combination, even against viruses representing circulating strains, which are likely to exhibit a less sensitive Tier 2 neutralization phenotype. This notion has important implications for the design and development of anti-Env bNAb-inducing vaccines and polyclonal sera for passive immunization.

Electronic supplementary material: The online version of this article (doi:10.1186/s12967-014-0346-3) contains supplementary material, which is available to authorized users.

Introduction

Neutralizing antibodies to HIV-1 do not generally develop at early stages of infection, and thus usually cannot inhibit HIV-1 amplification and the establishment of chronic infection. Selection pressure exerted by host immunity, and the intrinsic ability of HIV-1 to rapidly mutate, result in great variability of HIV strains over time; thus, virus isolates from later stages of infection can differ substantially from the early virus population and, in particular, from the respective transmitted virus strain(s). Recent approaches utilizing single genome amplification (SGA) of viral sequences from acutely infected patients overcame prior limitations in analyzing the genomes of viruses initiating clinical infection, thereby enabling the identification of transmitted/founder (T/F) HIV env as well as proviral sequences with high reliability, and the subsequent generation of infectious molecular clones (IMC) of T/F HIV-1 [1][2][3].
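The combination index (CI) referenced in the Results summary above comes from the Chou and Talalay median-effect framework. The sketch below illustrates how a CI at 50% neutralization might be computed from dose-response curves for two antibodies and their 1:1 mixture; the neutralization values are hypothetical, and the authors' exact curve-fitting procedure and software may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Chou-Talalay median-effect model: fa/fu = (D/Dm)^m (a Hill-type curve).
# Combination index at 50% effect (mutually exclusive case):
#   CI = d1/Dx1 + d2/Dx2
# d1, d2: doses of each antibody within the mixture giving 50% neutralization;
# Dx1, Dx2: single-antibody doses giving 50% neutralization.
# CI < 0.9 is read as synergy, ~0.9-1.1 as additivity, > 1.1 as antagonism.

def median_effect(dose, Dm, m):
    return dose**m / (Dm**m + dose**m)

def fit_ic50(doses, fraction_neutralized):
    (Dm, m), _ = curve_fit(median_effect, doses, fraction_neutralized,
                           p0=[np.median(doses), 1.0], maxfev=10000)
    return Dm  # dose giving 50% effect

# Hypothetical data: antibody concentration (ug/mL) vs fraction neutralized
doses = np.array([0.08, 0.25, 0.74, 2.2, 6.7, 20.0])
ab1_alone = np.array([0.05, 0.12, 0.30, 0.55, 0.80, 0.95])
ab2_alone = np.array([0.08, 0.18, 0.40, 0.65, 0.88, 0.97])
combo_1to1 = np.array([0.15, 0.35, 0.62, 0.85, 0.96, 0.99])  # total dose of 1:1 mix

Dx1 = fit_ic50(doses, ab1_alone)
Dx2 = fit_ic50(doses, ab2_alone)
Dx_combo = fit_ic50(doses, combo_1to1)
d1 = d2 = Dx_combo / 2  # each antibody contributes half of the 1:1 mixture

ci_50 = d1 / Dx1 + d2 / Dx2
print(f"CI at IC50: {ci_50:.2f} ({'synergy' if ci_50 < 0.9 else 'additivity/antagonism'})")
```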
Biologic characterization of T/F HIV-1 strains from different clades has begun to reveal distinctions between T/F HIV-1 and primary isolates from chronic infection as well as laboratory-adapted reference virus strains. T/F HIV-1 were found to display a higher glycosylation shield, R5-mediated T-lymphocyte tropism and, most importantly, relative resistance to antibody neutralization [1,4,5]. In order to develop an effective vaccine able to prevent HIV-1 transmission, it is highly relevant to understand the sensitivity of primary virus strains, including transmitted/founder strains, to humoral defenses. Certain commonly used laboratory-adapted strains and primary HIV isolates are highly neutralization sensitive (Tier 1 neutralization phenotype) [6] and thus do not adequately reflect the broad spectrum of neutralization observed for primary strains from various clades. The most comprehensive study so far by Montefiori and colleagues [7,8], of 219 Env-pseudotyped viruses assayed in TZM-bl cells with sera from 205 HIV-1-infected individuals, highlighted this notion. We were interested in whether pair-wise combinations of potently neutralizing monoclonal antibodies (NAbs) directed against different gp120 and gp41 epitopes had synergistic inhibitory effects against a selection of early infection and transmitted/founder Clade B strains. We posit that information about synergy of HIV-1 antibodies could ultimately be exploited to select epitope combinations for immunogens that might elicit synergistic bNAbs. We conducted our study employing the widely utilized TZM-bl neutralization assay, which was recently validated [9]. We chose four env strains of TZM-bl Tier 2 phenotype cloned from early/acute infections and included in the original Clade B env Reference Panel [10], plus one Tier 1A control (SF162 env), for testing of pseudovirus neutralization of a single round of infection. We juxtaposed three of these pseudoviruses with analysis of their matched clade B full-length transmitted/founder infectious molecular clones (T/F IMCs), together with three additional (Tier 2) clade B T/F IMCs. These bona fide transmitted/founder genome sequences had been derived from acutely infected subjects [1,2], and replication-competent IMCs representing them had been generated by a novel strategy described previously [1,2]. Both sets of viruses were assayed with a panel of potent human neutralizing antibodies directed against distinct envelope epitopes, individually and in pair-wise combination, in order to assess whether synergistic enhancement of inhibition could be achieved.

Clade B Env-expression plasmids for pseudovirus generation, including pREJO4551 clone 58, AC10.0 clone 29, pCAGGS SF162 gp160 (cat #10463), pRHPA4259 clone 7 and pTHRO4156 clone 18, were obtained through the NIH AIDS Research and Reference Reagent Program (NIH ARRRP), as part of the Clade B env pseudovirus panel. The acute env plasmids were generated by Mascola et al. [11] by cloning the gp160 genes from sexually acquired, acute/early infections, in order to facilitate standardized assessments of neutralizing antibody responses. When co-transfected with the env-deleted backbone plasmid pSG3Δenv (contributed by John C. Kappes and Xiaoyun Wu [12]; cat #11051, included in the Panel) in 293T cells, these plasmids produce env-pseudotyped viruses that are capable of a single round of infection in TZM-bl cells.
The genomic sequences of full-length transmitted/founder (T/F) HIV-1 strains were deduced using a mathematical model of HIV-1 sequence evolution in acute clinical infection and an experimental strategy based on single genome amplification (SGA) of plasma vRNA/cDNA, followed by direct sequencing of uncloned SGAs [1,4]. The derivation of bona fide T/F infectious molecular clones (IMCs), including pCH040.c/2625, pCH058.c/2960, pCH077.t/2627, pRHPA.c/2635, pTHRO.c/2626 and pREJO.c/2864, was described previously by Ochsenbauer et al. [2], and T/F IMCs are also available through the NIH ARRRP, contributed by John C. Kappes and Christina Ochsenbauer. SF162 Env has a Tier 1A phenotype in the TZM-bl PV assay; all other strains are described as Tier 2 when tested as Env-PV (Neutralizing Antibody Resources tools, at www.hiv.lanl.gov).

Generation and titration of virus stocks

293T cell-derived stocks of pseudoviruses and replication-competent IMCs were generated by proviral DNA transfection using FuGENE 6, according to the manufacturer's protocol (Promega, Madison, WI). Viral supernatants were harvested 72 h post-transfection, clarified at 1800 rpm for 20 min, and frozen at −70 °C. The virus stocks were further analyzed for firefly luciferase expression in the TZM-bl cell line. Four replicates of five-fold dilutions of virus were added to 96-well flat-bottomed plates containing 1 × 10^4 TZM-bl cells per well, in 10% D-MEM growth medium with 7.5 µg/mL of DEAE-dextran (Sigma), in a final volume of 200 µL. After 48 h incubation at 37 °C, 100 µL of culture medium were removed from each well and replaced with 100 µL of Bright-Glo luciferase reagent (Promega). After 2 min incubation, 150 µL of the cell lysate was transferred to a 96-well white solid plate and luminescence was measured using a Victor Light 2030 luminometer (Perkin Elmer). Fifty percent infectious dose (ID50) titers were defined as the reciprocal of the virus dilution yielding 50% positive wells (Reed-Muench calculation).

TZM-bl neutralization assays

Six 3-fold serial dilutions of antibody samples (starting from 66 µg/mL) were plated in triplicate (96-well flat-bottom plate) in 10% D-MEM growth medium (100 µL/well). 200 TCID50 of each pseudovirus or 20 TCID50 of each T/F IMC were added to each well in a volume of 100 µL and incubated for 1 h at 37 °C. TZM-bl cells were then added (1 × 10^4/well in a 100 µL volume) in 10% D-MEM growth medium containing DEAE-dextran (Sigma) at a final concentration of 7.5 µg/mL. Assay controls included replicate wells of TZM-bl cells alone (cell control) and TZM-bl cells with virus (virus control). Following a 48 h incubation at 37 °C, 150 µL of culture medium were removed from each well and replaced with 100 µL of Bright-Glo luciferase reagent (Promega). After a 2-min incubation, 150 µL of the cell lysate was transferred to a 96-well black solid plate and luminescence was measured using a Victor Light 2030 luminometer (Perkin Elmer). The 50% inhibitory dose (IC50) was calculated as the concentration of antibody that induced a 50% reduction in relative luminescence units (RLU) compared to the virus control wells, after subtraction of cell control RLU.
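As a rough illustration of how the IC50 is read off the luminescence readings described above, the sketch below normalizes RLU against the virus and cell controls and linearly interpolates the 50% crossing point. The dilution series and RLU values are invented, and the study's actual curve fitting may differ; this is only meant to make the normalization step concrete.

```python
import numpy as np

def percent_neutralization(rlu, virus_ctrl_rlu, cell_ctrl_rlu):
    """Percent reduction in RLU relative to the virus control,
    after subtracting the cell-control background."""
    signal = rlu - cell_ctrl_rlu
    reference = virus_ctrl_rlu - cell_ctrl_rlu
    return 100.0 * (1.0 - signal / reference)

# Hypothetical triplicate-averaged readings for six 3-fold antibody dilutions
conc_ug_ml = np.array([66.0, 22.0, 7.33, 2.44, 0.81, 0.27])   # highest to lowest
rlu        = np.array([900, 1500, 4000, 9000, 14000, 17000])  # invented values
virus_ctrl, cell_ctrl = 18000, 400

neut = percent_neutralization(rlu, virus_ctrl, cell_ctrl)

# Crude linear interpolation of the concentration giving 50% neutralization,
# assuming a monotone dose-response over the tested range.
order = np.argsort(conc_ug_ml)                       # ascending concentration
ic50 = np.interp(50.0, neut[order], conc_ug_ml[order])
print(f"approximate IC50 = {ic50:.2f} ug/mL")
```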
Antibody combinations and synergy calculation

All antibodies that individually had achieved an IC50 against a given virus strain were combined pairwise with each other to test for combination effects in the inhibition of the respective viruses. The ratio of each antibody concentration in the combinations was not kept constant, but instead followed the dilution scheme below: for every mAb pair (A + B), in one column of the 96-well plate we plated six 1:3 dilutions of a given antibody (A), starting from one dilution above its IC50. To the same wells we then added the other antibody (B) at a fixed concentration corresponding to its IC50. The same procedure was repeated reciprocally with six three-fold dilutions of antibody (B), to which antibody (A) was added at the constant concentration representing its IC50. The remainder of the assay was conducted as described above. Each experiment was repeated independently two times. In order to evaluate the possible synergy between the antibodies, the inhibition data for each combination condition were analyzed using the software CompuSyn [13], which is based on a mathematical model of synergy calculation described by Chou [14,15]; Dr. Chou kindly provided his advice on the applicability of the analysis method to our data set and dilution layout. The median-effect principle of Chou's method is based on a linear transformation of the inhibition data. A linear function is then fitted: log(f_a/f_u) = m × log(D/D_m), where f_a = fraction affected (i.e., the normalized proportion of inhibited infection); f_u = fraction unaffected (i.e., 1 − f_a, or the relative residual infectivity); m is a constant determining the slope of the linear curve; D_m is the "median-effect dose", the equivalent of the half-maximal inhibitory concentration; and D is the concentration of inhibitor yielding a degree of inhibition corresponding to f_a. The characteristic values are the following: when CI < 0.9, the two mAbs show synergistic activity; when 0.9 ≤ CI ≤ 1.1, the antibody pair works in additivity; when CI > 1.1, the two antibodies display antagonism. As introduced above, 12 individual CI values were calculated for the 2 × 6 reciprocal dilutions done for each mAb pairwise combination. From these 12 values, average CI values were obtained and the corresponding standard deviations were also calculated.

Results

In order to assess synergistic enhancement of inhibition by a panel of human neutralizing antibodies with different HIV-1 envelope protein epitope specificity, the study examined six human neutralizing mAbs recognizing four different env domains: the 4E10 and 2F5 antibodies bind two contiguous epitopes within the gp41 MPER domain [16,17]; the 2G12 antibody recognizes mannose residues located on different glycosides displayed on the gp120 surface [18]; the b12 antibody specifically interacts with the CD4 binding domain on gp120 [19]; finally, the PG9 and PG16 antibodies recognize conformational epitopes on the gp120 V1/V2 loops, binding to various, non-contiguous mannose residues of the glycosidic moiety [20-22]. All monoclonal antibodies were assayed in the TZM-bl neutralization assay [9] against a virus panel including five Clade B pseudoviruses and six infectious molecular clones (IMC) representing Clade B transmitted/founder HIV-1 strains [2]. In three cases, both the pseudovirus acute/early env strains and the T/F IMCs were derived from the same patient; Table 1 summarizes relevant features of the monoclonal antibodies and viruses used in the study.
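The median-effect transformation and the resulting combination index described in the synergy-calculation section above can be sketched in a few lines of Python. This is a minimal reimplementation for a single combination data point, using the simpler mutually exclusive form of the CI equation; the published analysis used CompuSyn, and all numbers below are invented for illustration.

```python
import numpy as np

def median_effect_fit(doses, fa):
    """Fit the median-effect line log(fa/fu) = m*log(D) - m*log(Dm)
    from single-antibody data; returns slope m and median-effect dose Dm."""
    fa = np.asarray(fa, dtype=float)
    fu = 1.0 - fa
    x = np.log10(np.asarray(doses, dtype=float))
    y = np.log10(fa / fu)
    m, intercept = np.polyfit(x, y, 1)
    dm = 10 ** (-intercept / m)
    return m, dm

def combination_index(d1, d2, fa_combo, fit1, fit2):
    """Chou-Talalay CI for one combination point, mutually exclusive form
    (the extra cross-term for mutually nonexclusive agents is omitted)."""
    ci = 0.0
    for d, (m, dm) in ((d1, fit1), (d2, fit2)):
        # dose of the single agent that alone would give the observed fa_combo
        dx = dm * (fa_combo / (1.0 - fa_combo)) ** (1.0 / m)
        ci += d / dx
    return ci

# Invented single-antibody dose-response data (dose in ug/mL, fa = fraction inhibited)
fit_A = median_effect_fit([0.25, 0.74, 2.2, 6.7, 20.0], [0.10, 0.25, 0.50, 0.75, 0.90])
fit_B = median_effect_fit([0.08, 0.25, 0.74, 2.2, 6.7], [0.12, 0.30, 0.55, 0.78, 0.92])

# One combination well: doses of A and B and the observed fraction inhibited
ci = combination_index(d1=1.0, d2=0.3, fa_combo=0.70, fit1=fit_A, fit2=fit_B)
label = "synergy" if ci < 0.9 else ("additivity" if ci <= 1.1 else "antagonism")
print(f"CI = {ci:.2f} -> {label}")
```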
Single antibody neutralization assays

Prior to conducting combinatorial inhibition assays, all antibodies were first assayed in individual neutralization assays against each virus in the panel, in order to assess their respective neutralization potency (IC50) against each PV and IMC HIV strain. As shown in Figure 1, only b12 achieved 50% neutralization in 10 out of 11 viruses (5/5 pseudoviruses and 5/6 T/F IMCs). The NAbs 2F5 and 4E10 neutralized 7 and 8 viruses, respectively: both neutralized 3/6 T/F IMCs, and 4/5 and 5/5 pseudoviruses, respectively, while other antibodies achieved 50% neutralization in a lower number of isolates. The 2G12 antibody only achieved 50% inhibition in 1/5 and 1/6 virus isolates, respectively. For the three patient-matched pairs (REJO, THRO and RHPA), T/F IMCs generally showed lower sensitivity to neutralization than pseudoviruses with acute/early envs from the same patients, respectively. Among the matched virus pairs, the PG16 antibody neutralized only REJO and RHPA, but sensitivity to neutralization of the PV and patient-matched T/F IMC was very similar, with IC50 values of comparable magnitude (within a 1.5-fold to 2.2-fold range). However, some antibodies failed to achieve 50% neutralization of the T/F IMC counterparts of the tested PV (e.g. b12 against REJO.c, and 2F5 against RHPA.c; Figure 1), or the corresponding IC50 value for the T/F IMC was far higher (e.g. >7-fold for 4E10 and b12 against THRO.c; Figure 1). These findings are intriguing; however, investigation of the underlying mechanism was outside of the scope and purpose of this study. Representative neutralization curves obtained for pseudoviruses (panels A-B-C) and T/F IMCs (panels D-E-F) from the same subjects (REJO, RHPA and THRO) are shown in Figure 2; neutralization curves were smooth and fulfilled standardized assay acceptance criteria.

Paired neutralization assays

Once we had assessed the neutralization activity of individual antibodies against each virus strain, those antibodies which reached IC50 against a respective virus strain, and thus demonstrated potency, were assayed in pairwise combination against this strain, in order to test for potential synergistic or antagonistic activity. Each antibody pair was tested using reciprocal dilution schemes: the concentration of one antibody was kept constant at its IC50 concentration, while the second antibody was used at 6 three-fold dilutions, starting at one dilution above the IC50, and vice versa. This dilution scheme is a valid method to quantify the so-called Combination Index (CI) as a measure of synergistic, additive or antagonistic effects, utilizing the Chou and Talalay equation illustrated in Material and Methods [14,15]. Compared to the more commonly known matrix-style dilution approach, our approach offers the advantage of utilizing significantly less mAb and assay reagents while generating similarly meaningful CI data. CI values for the effect of antibody pairs can range from synergy (CI < 0.9), to additivity (CI ranging 0.9-1.1) and antagonism (CI > 1.1) [14,15]. In all cases in which antibody combinations were tested against PV (n = 33) and T/F IMC (n = 17), at least 50% inhibition of infection was observed (data not shown). Figure 3 illustrates examples of neutralization curves obtained for three antibody pairs tested against REJO PV, which resulted in CI values indicative of synergistic, additive and antagonistic effects, respectively.
The 12 individual CI values generated for each mAb combination against each tested virus strain are illustrated in Figures 4 and 5, and Table 2 summarizes mean CI values and standard deviations for all antibody combinations (n = 50) observed against pseudovirus and T/F IMC strains. CI values indicative of synergy (CI < 0.9) were observed in 42 data sets. Of those, 11 synergistic data sets (n = 8 in the PV group, n = 3 in the T/F IMC group; data obtained from nine different antibody combinations) with mean CI < 0.9 had standard deviations that reached into the range of additivity; in Table 2 they are indicated with hatched light grey shading to distinguish them from the 5 data sets for which CI values indicating additivity were obtained (medium grey shading). CI values indicative of low-level antagonism were seen in 3 cases (dark grey shading). Antibody combinations including either 4E10 or 2F5, with the exception of [4E10 + 2F5], displayed synergistic neutralization of all (4E10: 15/15) or most (2F5: 13/15) tested pseudoviruses and T/F IMCs. Combinations of the 2F5 antibody with b12 or PG16 resulted in synergistic neutralization of 5 out of 6 (with the sixth one showing borderline additivity) and 4 out of 4 tested virus strains, respectively (Table 2). For T/F IMCs, which had shown overall lower sensitivity to neutralization by single mAbs than pseudoviruses, fewer Ab combinations were thus tested (n = 17) than for pseudoviruses (n = 33). Nevertheless, the majority of T/F IMCs (15/17) were neutralized synergistically by the tested Ab combinations; the two exceptions occurred for REJO.c (Table 2 and Figure 4, blue triangles). As illustrated in Figure 4, the mean of all individual CI values obtained with a given Ab combination fell into the synergy range for 11/12 Ab combinations tested against T/F IMCs, and for 11/13 Ab combinations tested against PV. This finding suggests that IMCs were no less susceptible than pseudoviruses to the synergistic activity of antibodies which individually neutralized at least 50%. Most antibody pairs worked in synergy against all strains they were tested for. However, additivity as well as antagonism were observed for neutralization of both REJO PV and T/F IMC, for SF162 PV, and for AC10.0 PV (borderline additivity for 2F5 + b12) (Table 2). From a functional point of view, four pairs of antibodies targeting different domains displayed synergic activity against both pseudoviruses and T/F IMCs (4E10 + b12, 4E10 + PG9, 4E10 + PG16, 2F5 + PG16). Interestingly, as illustrated in Figure 4, all individual CI values, and not only their respective means, derived for the 4E10 + PG16 combination fell in the range of synergic inhibition for all tested strains (3 PV, 1 T/F IMC). This was also the case for b12 + PG16 against the same three PV strains (Figure 4, circles). In contrast, for other antibody combinations tested against PV, or both PV and T/F IMC, not all 12 CI values for a given virus fell within the synergy range, despite their respective mean CI indicating synergy, e.g. CH077.t T/F IMC (orange triangles) with [b12 + PG16] and REJO4551 PV.

Figure 2. Representative example of neutralization curves obtained on TZM-bl cells, using the antibodies listed in Table 1. Pseudoviruses with the three indicated envelope glycoproteins (A, B, C) were compared to T/F IMCs (D, E, F) derived from the same subjects. Values correspond to the mean of three different experiments. Curves for tested antibodies that did not reach 50% virus neutralization at the highest concentration used (66.67 µg/mL) are omitted for clarity.

Of note, combinations of antibodies recognizing the same overall domain or adjacent epitopes, such as PG9 + PG16 on gp120 or 4E10 + 2F5 on gp41, resulted in CI values ranging from synergy to antagonism, depending on virus strain (Figures 3, 4 and Table 2).

Discussion

In natural infection, broadly neutralizing antibodies (bNAbs) are generated too late to halt early infection events, and the effectiveness of humoral immunity is further hampered by virus escape in response to developing immune pressure. However, the generation of bNAbs via preventive vaccination could possibly block HIV acquisition. Thus, much effort is being placed on defining optimal immunogens to elicit effective bNAb responses. As the number of identified T/F env genes continues to grow, a detailed understanding of whether T/F strains may share, within or across clades, certain global features affecting neutralization sensitivity will underpin the discovery of suitable neutralization targets and, thus, the development of a preventive vaccine inducing effective virus neutralization. The question whether combinations of broadly reactive antibodies directed against distinct epitopes may have synergistic, additive or antagonistic effects on neutralization potency has not been adequately addressed. Thus, in this study, a limited scope assessment of such effects on the neutralization of five Env-pseudotyped viruses and six T/F IMCs by six human broadly neutralizing antibodies was performed. All but one of the HIV-1 strains have been ascribed a Tier 2 neutralization phenotype in TZM-bl/PV assays; only SF162 possesses a Tier 1A phenotype (Neutralizing Antibody Resources Tools, at www.hiv.lanl.gov) [23]. To our knowledge, this is the first study to test human bNAbs individually and in pair-wise combinations against a panel of clade B T/F viruses; among them, three T/F virus strains, REJO.c, RHPA.c and THRO.c, were juxtaposed with pseudoviruses with early infection env genes derived from the same patients, respectively. Interestingly, the three pseudoviruses and T/F IMCs sharing nearly identical env sequences, i.e. REJO, RHPA and THRO (with two, two, and one amino acid differences, respectively, between early infection and T/F envs; Additional file 1: Figure S1), displayed overall similar patterns of neutralization by single antibodies; however, IC50 values for IMCs were generally higher, or not reached (i.e. IC50 > 66 µg/mL) (Figure 1). For example, the Env proteins in REJO PV and IMC have no substitutions and a shared insertion (Ile) in the b12 epitope, but differ from one another at aa 255 (Ala vs. Val), 2 positions upstream of the Ser257-Thr258 residues which are part of the b12 epitope (Additional file 1: Figure S1); this variation may contribute to the very different IC50s observed for these viruses (2.58 µg/mL vs >66 µg/mL). Similarly, the Env proteins in RHPA PV and IMC differ from one another immediately following the LTRGD437 portion of the epitope, and resulted in different IC50 values (0.15 vs 1.08 µg/mL). However, Env in THRO PV and IMC were identical to one another in and around the b12 epitope but still displayed different IC50s (0.83 vs 12.87 µg/mL, respectively). Thus, Env sequence alone cannot fully explain the sensitivity of a specific virus to a given antibody, nor differences between pseudoviruses and IMCs sharing the same or highly similar env sequence.
Of note, previous studies also reported different sensitivity (IC50) of IMC and pseudoviruses with identical env genes to single antibody neutralization, regardless of virus clade [24,25]. In both reports, the pseudoviruses were found to be less sensitive than IMCs to specific mAb neutralization [24,25]. However, because of small sample numbers in each study it cannot be ruled out that these results are env-strain specific rather than PV versus IMC specific. In our study including early-infection env PV and T/F IMC, the IMCs generally showed higher IC50 values in single NAb neutralization assays ( Figure 1). However, importantly, antibody pair synergy was observed at a higher proportion in IMC than in pseudovirus assays (Figure 4), suggesting that IMCs were as susceptible to synergistic antibody activity as pseudoviruses. Moreover, substantial similarity between IMCs and PV emerged when the distribution of all individual CI values within each group was compared ( Figure 5), and no significant differences between IMCs and PV were documented. Differences in neutralization sensitivity between IMCs and PV could be due to the two genetically distinct proviral backgrounds since IMCs encompass a complete autologous viral genome from which env is expressed in cis under the control of the autologous LTR [2,4]. In contrast, pseudoviruses are derived by complementing a common env-defective backbone with heterologous env genes expressed in trans [2,4]. Not surprisingly, other studies have reported that different ratios of backbone and env-plasmids transfected in host cells were found to give rise to pseudovirus particles endowed with different envelope features, such as the proportion of env protein cleavage and the level of gp120 surface expression [26]; such changes in envelope features were found to affect pseudoviruses infectivity, and, possibly, antibody reactivity [26]. Indeed, host cells are known to impact biochemical and structural features of virus particles, e.g. in terms of protein processing, folding and glycosylation patterns [25]. Viruses cultured in PBMC or in primary cells were found to be more resistant to antibody neutralization than those obtained from laboratory-adapted cell lines, for example due to a different glycosylation pattern shielding key epitopes and preventing antibody neutralization [24]. However, previous studies investigating structural changes among virus structure or protein composition, failed to associate differences observed in IC50 values or infectivity with any well-defined structural or biochemical feature [26]. In our study, both pseudoviruses and T/F IMCs were produced in 293 T cells, therefore diversity in antibody sensitivity cannot be ascribed here solely to the effect of host cells. We also strove to minimize other possible sources of variability in neutralization result by choosing a standardized, validated method, the TZM-bl assay [9], to perform all assays. Prior to standardization, unsatisfactory assay equivalency among laboratories had been observed even when reagent batches were shared [27]. As was expected, no single antibody neutralized all virus isolates, neither in the pseudovirus nor the T/F IMC group (Figure 1). The b12 antibody, targeting a conserved epitope within the CD4 binding site, achieved 50% neutralization on most T/F strains (5/6), and all PV strains (5/5). 
The 2G12 antibody was poorly reactive against 9 out of 11 viruses, neutralizing only the SF162 pseudovirus and the CH058 T/F IMC; this finding is in concordance with the absence of critical amino acid residues (N295, N332, S334, N339) of the 2G12 epitope, and glycosylation, in the resistant strains, respectively. PG16 was more reactive than PG9, neutralizing nearly half of the virus strains in both panels (Figure 1). Sensitive virus strains in the study do share the N156 and N160 glycosylation sites, which are crucial for PG9/PG16 binding (Additional file 1: Figure S1). Conversely, SF162 PV Env, and the CH058.c and CH040.c IMCs, lacking N160, showed resistance to these mAbs (Figures 1 and 2). THRO PV and IMC were resistant to the PG9/PG16 mAbs despite the presence of both N156 and N160 glycosylation sites, possibly due to a K178R mutation in the epitope (Additional file 1: Figure S1). The MPER antibodies 4E10 and 2F5 each neutralized 3/6 T/F IMCs, and 5/5 (4E10) and 4/5 (2F5) PV, respectively (Figure 2). Of note, IC50 values for both bNAbs differed seven- to ten-fold between patient-matched Env proteins expressed in either the PV or IMC context (IC50 values higher or not reached in IMC) despite identical MPER sequences. The MPER domain in gp41 is usually weakly recognized by neutralizing antibodies in native virus particles [28,29]. Since 2F5 and 4E10 mostly recognize MPER epitopes when gp41 adopts the pre-hairpin intermediate conformation [28-30], the higher IC50 values obtained against the T/F REJO, RHPA and THRO strains may be explained by a more compact and stable conformation of Env expressed in cis from T/F IMCs as compared to their respective pseudovirus counterparts [31], a feature that could increase binding restriction and result in poor accessibility to antibodies [32]. Since humoral responses to pathogens are usually polyclonal, synergy and antagonism between antibodies may naturally occur. Due to their dimensions, neutralizing antibodies do not cluster on one unique Env molecule within the trimer, but are likely to bind distinct monomers within a single trimer spike or within two proximal trimer spikes [30,33]. Electron microscopy and mathematical modeling have not yet determined the spike number required to carry out infection successfully; however, HIV particles are studded with only a low number of spikes (between 4 and 45), sparsely distributed on the envelope membrane [34]. Therefore, synergy and antagonism would result from the interaction of two or more antibodies with a population of molecular targets, where each single virus particle can carry a number of trimer spikes as well as Env dimers or monomers [33-35], and it cannot be readily assumed that two antibodies would have synergistic or antagonistic effects because they were bound to the same Env molecule. Due to the inherent variability of the envelope protein, a relevant question is whether, in the in vivo context, the presence of prolonged NAb activity may play a role in modulating the evolution of the disease. Although it is worth mentioning that NAbs have been associated with control of the disease in Long-Term Non-Progressor subjects, where an equilibrium has been established between virus and host, we cannot exclude that over time mutations occurring within the envelope can affect neutralizing activity, thus resulting in antagonism rather than synergy. This latter situation could occur when the patient's clinical status changes to rapid progression, thus losing the previously established equilibrium.
In this regard, the different density level of envelope spike could play a crucial role as well. The low density of envelope spikes, a distinguishing feature when compared with viruses to which protective neutralizing antibody responses are consistently raised, directly impedes bivalent binding by IgG antibodies. The result is a minimization of avidity, normally used by antibodies to achieve high affinity binding and potent neutralization, thereby expanding the range of mutations that allow HIV to evade antibodies. Understanding limitations to avidity may be essential to establish whether specific antibodies combination can differentially modulate their activity, in particular upon variability on the density of envelope spike during the course of chronic infection. Not all antibody combinations were tested against all PV and T/F IMC strains since not every antibody had reached IC50 individually. In all cases in which antibody combinations were assayed (n = 33 for PV; n = 17 for T/F IMC), inhibition levels of at least 50% were reached. Remarkably, synergy was observed in 42 out of 50 assays. CI values indicative of additivity were seen in 5 cases (one with T/F ICMs; four with pseudovirus assays). Low level antagonism was observed in three assays (one with T/F IMC, two with PV, respectively) and involved antibody combinations 2F5 + 4E10, and PG9 + PG16 which target related epitopes (Figures 2, 3, 4, Table 2). Nearly all antibody pairs achieved synergistic inhibition of both pseudoviruses and T/F IMCs, respectively, with a few and possibly virus-strain specific exceptions (Table 2). Findings from the bNAb pair assays, thus, suggest that synergy usually occurs when antibodies targeting different env domains were involved (e.g. 4E10or 2F5with b12 or with PG16). In other words, association of two suitable antibodies could induce a favourable conformational change, when binding the same monomer in a trimer or even when binding different monomers, therefore creating favourable conditions for synergic activity. From this point of view, synergy between b12 and MPERtargeting antibodies is not surprising, because CD4 binding takes usually place before gp41 exposure and promotes Env refolding into the intermediate, extended conformation [29,36]. Similarly, b12 binding could enhance accessibility of 4E10 (or 2F5) antibodies to MPER domain by inducing suitable conformational changes involving both gp120 and gp41 glycoproteins [30,37,38]. Combinations of PG9 + PG16 and the 2F5 + 4E10 antibodies, with members of each pair targeting overlapping or adjacent epitopes, were tested as controls. Surprisingly, their mean CI values ranged from synergy to antagonism depending on virus isolates (Table 2). In some cases all 12 individual CI values for each bNAb pair/ virus combination (Figure 4) fell into only one category (e.g. for 4E10 + 2F5 vs RHPA PV), while in others the individual CI values differed over a wide range depending on NAb concentrations (e.g. synergy for 4E10 + 2F5 versus REJO PV). While antagonism observed with the PG9 + PG16 pair may be explained by steric hindrance or target competition, since both of them bind V1-V2 loops of gp120 or quaternary structures exposed on the top of the gp120 trimer [39,40], it is noteworthy that antagonism was seen in only one out of four tested virus strains. 
The PG9/PG16 binding site determinants on gp120, and Env trimers, are not fully resolved; the N160 glycosylation site, shared by most HIV isolates, is one unique feature precisely attributed to both binding sites, and its mutations are known to affect PG9-PG16 neutralization [41]. All viruses tested in the study share N160 glycosylation site within their gp120 sequences (see Additional file 1: Figure S1), however they differ in the amino acid positions in the adjacent contact sites and displayed different neutralization sensitivity to PG9 and PG16, possibly because not all gp120 molecules and Env trimers could effectively bind PG16 and PG9. In the case where PG9 and PG16 neutralize individually, but their activity is antagonistically affected in combination it is possible that the PG9 and PG16 pair compete for the same binding site when both present, and thus causing antagonism [34] . The 2F5 and 4E10 antibodies recognize two contiguous, linear epitopes along MPER, which are especiallybut not exclusivelyaccessible in pre-fusion gp41, i.e. the pre-hairpin intermediate. Differently from PG9-PG16, the two MPER epitopes are close, but not overlapping; moreover, 2F5 and 4E10 antibodies target epitopes which are made accessible on different conformations of gp41 [36] therefore, their binding may not be competitive under some conditions [42], and in an Env strain dependent manner. Due to the nature of the epitope conformation and to MPER refolding, the 4E10 epitope may be accessible on native gp41 and throughout gp41 refolding, while the 2F5 epitope is accessible only during early phases of hairpin formation [36]. In addition, mutations involving the CDR-H3 region in 2F5 and 4E10 are known to reduce their interaction with lipids without altering epitope binding, but make these antibodies non-neutralizing [28]. The notion of the better and more prolonged accessibility of the 4E10 epitope versus the 2F5 epitope was supported by studies in which 4E10 showed a broader neutralizing activity than 2F5. Due to misfolding, symmetry within Env trimers may be disturbed, making MPER epitopesas well as any other Env epitopemore easily accessible to antibodies [43]. Furthermore, 2F5 antibodies representing different isotypes (IgA2 and IgG1) displayed synergic neutralizing activity even though they were directed against the same 2F5 epitope , probably by accessing and blocking 2F5 epitopes on distinct gp41 molecules within or between trimers [44]. Hence, the 4E10-2F5 range of synergy-additivityantagonism observed in the study may result from binding to individual monomers in single or multiple trimers as well as from strong membrane interactions, with unexpected effects on virus infectivity [12,17,28,36,[45][46][47]. In future work it will be of interest to explore whether antibodies that individually are poorly inhibitory and fail to reach IC50 could nevertheless be more potent in combination, due to synergic effects. To further validate the findings from our study that bNAb synergy may be a rather ubiquitous occurrence and thus may be harnessed to inhibit HIV-1 infection, it would be ideal to test a larger panel of circulating HIV-1 strains against additional bNAbs from the ever-growing reservoir. The recently described multi-clade Global Panel of 12 Env clones from the Neutralization Serotype Discovery Project (NSDP) was shown to represent the continuum of neutralization phenotypes observed for globally circulating HIV-1 strains [7]. 
Thus, testing for bNAb synergy against the Env Global Panel would be highly relevant and timely to gain a deeper understanding of the prevalence and potential of synergic effects on neutralization. In conclusion, we submit that immune strategies eliciting synergic antibody responses have the potential to augment inhibition of transmission and early virus infection, provided that polyclonal responses are employed and that their synergic potential can be fully exploited. Although many open questions remain regarding bNAb synergy, exploiting synergy between more easily inducible individual broadly neutralizing antibodies with more limited potency holds promise for effective vaccination strategies. Conclusion IMCs of HIV-1 strains which have established clinical infection in vivo afford the opportunity to elucidate relevant biological features of transmitted/founder HIV-1. So far, vaccine approaches have failed to elicit the most potent broadly neutralizing antibodies (bNAbs). In this study, we investigated whether pairwise combination of six bNAbs may result in synergic effects on the neutralization of six T/F IMC strains, and pseudoviruses with five env strains, thus augmenting inhibitory potential of individual bNAbs. Three of the early-infection envs tested as PV were juxtaposed with T/F viruses derived from the same three patients, respectively. Albeit we observed generally higher resistance of T/F IMCs to neutralization as compared to the tested pseudoviruses, a similar degree of synergistic activity of antibody pairs was achieved with both virus groups, irrespective of the presentation of Env on virions following expression in cis or in trans. Immune strategies eliciting antibody responses with epitope specificities that favor synergic activity, thus, hold promise to improve inhibition of transmitted/founder and early infection virus strains. Not unexpectedly, we observed that the nature of epitopes targeted by Nabs in paired assays affected the synergic versus additive or antagonistic effects. In our limited-scope study, the 4E10 and PG16 antibodies, when paired, showed optimal synergic activity on both T/F IMC and early-infection env PV HIV-1. The results from this study suggest that considering the concept of synergy between more easily inducible individual broadly neutralizing antibodies which may have more limited individual potency may be useful for designing vaccines and passive immunization approaches.
8,489.6
2014-12-13T00:00:00.000
[ "Biology", "Medicine" ]
Inferring potential small molecule–miRNA association based on triple layer heterogeneous network Recently, many biological experiments have indicated that microRNAs (miRNAs) are a newly discovered small molecule (SM) drug targets that play an important role in the development and progression of human complex diseases. More and more computational models have been developed to identify potential associations between SMs and target miRNAs, which would be a great help for disease therapy and clinical applications for known drugs in the field of medical research. In this study, we proposed a computational model of triple layer heterogeneous network based small molecule–MiRNA association prediction (TLHNSMMA) to uncover potential SM–miRNA associations by integrating integrated SM similarity, integrated miRNA similarity, integrated disease similarity, experimentally verified SM–miRNA associations and miRNA–disease associations into a heterogeneous graph. To evaluate the performance of TLHNSMMA, we implemented global and two types of local leave-one-out cross validation as well as fivefold cross validation to compare TLHNSMMA with one previous classical computational model (SMiR-NBI). As a result, for Dataset 1, TLHNSMMA obtained the AUCs of 0.9859, 0.9845, 0.7645 and 0.9851 ± 0.0012, respectively; for Dataset 2, the AUCs are in turn 0.8149, 0.8244, 0.6057 and 0.8168 ± 0.0022. As the result of case studies shown, among the top 10, 20 and 50 potential SM-related miRNAs, there were 2, 7 and 14 SM–miRNA associations confirmed by experiments, respectively. Therefore, TLHNSMMA could be effectively applied to the prediction of SM–miRNA associations. Electronic supplementary material The online version of this article (10.1186/s13321-018-0284-9) contains supplementary material, which is available to authorized users. Background MicroRNA (miRNA) is a small non-coding RNA molecule (about 22 nucleotides) discovered in plants, animals, human beings and even some viruses, that functions in RNA silencing and post-transcriptional regulation of gene expression [1,2]. The first miRNA was discovered in the early 1990s [3,4]. However, miRNAs were not recognized as a distinct class of biological regulators until the early 2000s [5,6]. MiRNA research revealed multiple roles for miRNAs in many important biological processes [7][8][9][10][11]. MiRNAs function via base-pairing with complementary sequences within mRNA molecules, which results in these mRNA molecules silenced [12,13]. Furthermore, aberrant miRNA expressions are implicated in various disease states [14][15][16], and miRNAbased therapies are under investigation [17]. Many studies have been conducted for the detection or regulation of miRNAs with bio-medical implications [18][19][20]. Regulation of miRNAs by synthesized oligonucleotides or small molecules is an efficient means to modulate endogenous miRNA function and treat miRNA-related diseases. They are being considered as a novel type of bio-markers or potential therapeutic targets for various diseases [21]. In molecular biology and pharmacology, a small molecule is a low molecular weight (< 900 Daltons) organic compound that may help regulate a biological process, with a size on the order of 1 nm [22]. Most drugs are small molecules. Small molecule regulators can modulate the regulatory networks of target miRNAs, and have potential use as probes to identify unknown components of miRNA pathways [23]. 
Regulation of oncogenic or tumor-suppressive miRNAs by small molecules can induce cancer cell apoptosis [24]. Several small molecules with different regulatory activities on miRNAs have been identified, including inhibitors of miR-21 and inhibitors and activators of miR-122 [25]. MiR-21 is a well-known oncogenic miRNA whose expression is extremely high in ovarian, breast, and lung cancers [26]. Regulation of miR-21 using small molecules may be a novel approach to cancer treatment. Streptomycin was identified as a specific inhibitor of miR-21 [27]. Thermal melting results indicated that the inhibitory activity of streptomycin was derived from its direct interaction with pre-miR-21 [27]. MiR-122 expression is decreased in liver cancer, and over-expression of miR-122 in liver cancer cells can induce cancer cell apoptosis [28]. In addition, miR-122 could also promote the replication of the hepatitis C virus (HCV) [29]. Using a dual-luciferase reporter gene assay, where the Renilla luciferase gene is regulated by miR-122, two small molecules were identified as specific inhibitors of miR-122, while another compound was a specific activator. They all targeted miR-122 transcription [30]. MiR-34a is a tumor-suppressive miRNA that is down-regulated in most cancers and targets several anti-apoptotic genes [31-33]. Up-regulation of miR-34a can cause cellular apoptosis and inhibit cellular differentiation [34]. MiR-34a mimics with the ability to restore the expression of miR-34a have been examined in clinical trials [34]. Using a hepatocellular carcinoma cell line, a small molecule was identified from a natural product library as a specific activator of miR-34a [35]. qRT-PCR analysis showed that both mature and primary miR-34a were upregulated by this compound, indicating that it activated miR-34a at the transcriptional level [35]. Currently, a large number of studies have been devoted to developing high-throughput methods to screen small molecule modifiers of miRNAs, which may provide a new direction for miRNA-targeting therapies [36]. MiRNA regulation by small molecules could result from interference with miRNA biogenesis at three levels: before, during and after transcription [37]. Small molecules increase or decrease miRNA expression indirectly, by altering miRNA promoter regions or binding to transcription factors [37]. They can also disrupt the maturation of miRNAs by binding with essential RNA-endonucleases [38]. In summary, investigating the relationships between small molecules and miRNAs is important for disease therapy and clinical applications for known drugs [36,37]. However, it is time-consuming to identify the regulatory relationships between small molecules and miRNAs by experimental approaches owing to the high complexity of biological systems. Therefore, there is an urgent need to develop new computational approaches or models to decipher the relationships between small molecules and miRNAs to speed up pharmacogenomic studies. Some computational methods have been established to comprehensively identify the potential associations between SMs and miRNAs depending on the assumption that similar SMs are more likely to have associations with similar miRNAs. For example, Li et al.
[39] proposed a miRNA pharmacogenomic framework of small molecule-MiRNA network-based inference (SMiR-NBI) model, in which they constructed a heterogeneous network connecting drugs, miRNAs as well as genes and implemented network based inference (NBI) on the network to identify the underlying mechanisms of anticancer drug responses mediated by miRNAs. The model with high prediction accuracy and low computation cost only takes advantage of the network topology information from the built network as input. Lv et al. [40] constructed an heterogeneous molecular network to successfully identify novel SM-related miRNA targets based on the integration of SM side effect similarity, SM chemical structure similarity, gene functional consistency-based similarity for SMs and miRNAs, disease phenotype-based similarity for miRNAs and SMs, known miRNA-SM associations using a similarity-based random walk with restart. Furthermore, Jiang et al. [41] introduced a novel computational method to discover potential miRNA-SM associations in 23 different cancers on the basic of differential expression of miRNA target genes and gene signatures that are extracted from the gene expression profiles following drug treatment of the 23 cancers. As a result, they built the small molecule-miRNA network (SMirN) for 17 different cancers and identified miRNA modules and SM modules in each of the cancer specific SMirNs. Using the constructed network and identified modules, they predicted new miRNAs for drug target and drug candidates for cancer therapy. Wang et al. [42] presented a novel model to successfully predict potential miRNA-SM associations based on miRNA and SM functional similarity network, in which they calculated functional similarity for each pair of SM and miRNA based on Gene Ontology (GO) annotations of miRNA perturbed gene expression profiles and SM perturbed gene expression profiles. It is worth noting that potential drugs-diseases associations could be predicted at the same time through combining known miRNA-SM associations with experimentally validated miRNA-disease associations, which would be helpful for drug repositioning. Recently, Meng et al. [43] built a bioactive Small molecule and miRNA association network in Alzheimer's Disease (SmiRN-AD) to predict novel miRNA-SM associations based on the gene expression signatures of bioactive SM perturbation and miRNA regulation. Furthermore, the topological characteristics and functional properties of miRNAs and SMs were comprehensively analyzed in SmiRN-AD. Lastly, they constructed a database for SmiRN-AD and differential expression patterns of AD-associated miRNA targets can also be provided. Thus, the method and its application may be help for providing a new view with respect to the treatment of AD. Currently, in large-scale studies, high-performance or high-precision computing approaches are still required to comprehensively identify the potential miRNA-SM associations. In this study, we developed an effective computational method of triple layer heterogeneous network based small molecule-MiRNA association prediction (TLHNS-MMA) by combining integrated SM similarity, integrated miRNA similarity, integrated disease similarity, experimentally verified miRNA-SM associations and miRNAdisease associations into a triple layer heterogeneous network. An iterative updating algorithm that propagates information across the constructed heterogeneous network is then developed to predict novel associations between SMs and miRNAs. 
Moreover, new miRNA-disease associations can be automatically established at the same time. In this model, the known miRNA-SM associations were downloaded from the SM2miR v1.0 database [44]. We constructed two groups of datasets based on the known miRNA-SM associations and employed TLHNSMMA to predict new miRNA-SM associations based on the two datasets, respectively. In Dataset 1, only a part of the SMs and miRNAs were involved in the known miRNA-SM associations. In Dataset 2, all the SMs and miRNAs are implicated in the known miRNA-SM associations. To evaluate the effectiveness of TLHNSMMA, global and local leave-one-out cross validation (LOOCV) as well as fivefold cross validation were implemented. In short, the AUCs of global LOOCV are 0.9859 and 0.8149 for Dataset 1 and Dataset 2, respectively; the AUCs of local LOOCV by fixing each miRNA to predict miRNA-associated SMs are respectively 0.9845 and 0.8244 for the two datasets; the AUCs of local LOOCV by fixing each SM to predict SM-associated miRNAs are respectively 0.7645 and 0.6057 for the two datasets. For fivefold cross validation, the average AUCs and standard deviations are 0.9851 ± 0.0012 and 0.8168 ± 0.0022 for the two datasets, respectively. In case studies, 2 out of the top 10 and 14 out of the top 50 predicted miRNA-SM associations were confirmed by published references. These results demonstrate that TLHNSMMA is effective in predicting potential associations between miRNAs and SMs.

Performance evaluation

We used global and local LOOCV as well as fivefold cross validation based on the known SM-miRNA associations in the SM2miR v1.0 database to evaluate the performance of TLHNSMMA. Meanwhile, TLHNSMMA was compared with one previous classical computational method, SMiR-NBI [39], in cross validation. SMiR-NBI relies only on known miRNA-SM associations [39]. The known miRNA-SM association dataset used for this comparison was the same as that in our study, i.e., the 664 known associations between 831 SMs and 541 miRNAs (Dataset 1) and the 664 known associations between 39 SMs and 286 miRNAs (Dataset 2). The SMiR-NBI model was constructed based on the state-of-the-art network-based inference (NBI) algorithm [45,46]. For a given SM, initial resources are located in its regulated miRNAs. Each miRNA then distributes its resources equally to all neighboring SMs, which in turn redistribute their obtained resources to every adjacent miRNA. The final resource score for each miRNA represents its potential to be regulated by the SM of interest [46]. In LOOCV, each known miRNA-SM association in the dataset was alternately used as the test sample in turn, while the other known miRNA-SM associations were considered as training samples. The miRNA-SM pairs without known associations were regarded as candidate samples. After TLHNSMMA was implemented, we obtained the prediction score of each miRNA-SM pair. In global LOOCV evaluation, the score of the test sample was compared with the scores of all the candidate samples. In contrast, in the SM-fixed local LOOCV, the test sample was ranked against the scores of the candidate samples composed of all the miRNAs not associated with the fixed SM. In the miRNA-fixed local LOOCV, the test sample was ranked against the scores of the candidate samples composed of all the SMs without any known associations with the fixed miRNA. In fivefold cross validation, all the experimentally verified miRNA-SM associations were randomly divided into five equal groups.
Each time, four groups were selected as training samples in turn and the remaining group was used as the test sample. Similarly, the miRNA-SM pairs with no known associations were regarded as candidate samples. Then, the score of each test sample was compared with those of all the candidate samples. The fivefold cross validation procedure was repeated 100 times in this model. Finally, we plotted receiver operating characteristic (ROC) curves using the true positive rate (TPR, sensitivity) against the false positive rate (FPR, 1 − specificity) at different thresholds. Sensitivity denotes the percentage of positive miRNA-SM pairs that are correctly identified among all positive miRNA-SM pairs. Meanwhile, specificity refers to the percentage of negative miRNA-SM pairs that are correctly predicted among all negative miRNA-SM pairs. The area under the ROC curve (AUC) was calculated as an evaluation index for the model: the higher the AUC value, the better the prediction ability. When the model has perfect prediction ability, the value of AUC is 1; if the model only possesses random prediction ability, the value of AUC is 0.5. As a result, in global LOOCV, TLHNSMMA and SMiR-NBI obtained AUCs of 0.9859 and 0.8843 based on Dataset 1, respectively. TLHNSMMA and SMiR-NBI obtained AUCs of 0.8149 and 0.726 based on Dataset 2, respectively (see Fig. 1). In the framework of miRNA-fixed local LOOCV, the AUCs of TLHNSMMA and SMiR-NBI based on Dataset 1 are 0.9845 and 0.8837, respectively. In addition, the AUCs of TLHNSMMA and SMiR-NBI based on Dataset 2 are 0.8244 and 0.7846, respectively (see Fig. 2). In the framework of SM-fixed local LOOCV, the AUCs of TLHNSMMA and SMiR-NBI based on Dataset 1 are 0.7645 and 0.7497, respectively. Furthermore, the AUCs of TLHNSMMA and SMiR-NBI based on Dataset 2 are 0.6057 and 0.6100, respectively (see Fig. 3). In fivefold cross validation, TLHNSMMA and SMiR-NBI obtained AUCs of 0.9851 ± 0.0012 and 0.8554 ± 0.0063 based on Dataset 1; meanwhile, TLHNSMMA and SMiR-NBI obtained AUCs of 0.8168 ± 0.0022 and 0.7104 ± 0.0087 based on Dataset 2. Finally, in order to give a clear overview of the prediction performance of TLHNSMMA compared with SMiR-NBI in our study, we listed the evaluation results of TLHNSMMA and SMiR-NBI in global LOOCV, SM-fixed local LOOCV, miRNA-fixed local LOOCV and fivefold cross validation (see Table 1). In general, TLHNSMMA turns out to be more reliable and effective in predicting potential miRNA-SM associations compared with SMiR-NBI. In addition, in order to assess the baseline performance of TLHNSMMA and to see whether the dataset of known miRNA-SM associations contains false positives, we removed all known miRNA-SM associations from the dataset and randomly selected 664 miRNA-SM pairs from all miRNA-SM pairs as "known" associations. Then we implemented TLHNSMMA on the new randomly created adjacency matrix to calculate the AUC value for global LOOCV, SM-fixed local LOOCV and miRNA-fixed local LOOCV based on Dataset 1 and Dataset 2, respectively. We repeated this process 100 times. More importantly, if some false positives exist in the dataset of known miRNA-SM associations, the output of TLHNSMMA will be better than random prediction. On the other hand, if almost no false positives exist in the dataset of known miRNA-SM associations, the performance of TLHNSMMA will be similar to random prediction. Therefore, we performed hypothesis testing on the six LOOCV results; the calculated p values are all higher than 0.05, indicating that the performance of TLHNSMMA on the randomized data is similar to random prediction and hence that almost no false positives exist in the dataset of known miRNA-SM associations.
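The rank-based reading of the LOOCV evaluation described above can be made concrete with a short sketch: the held-out pair's score is compared against the candidate-pair scores, and the AUC equals the average fraction of candidates it outranks (ties counted as half). The scores below are random placeholders rather than model output, and the sketch ignores the per-fold re-computation of scores that the real procedure performs.

```python
import numpy as np

def loocv_auc(test_scores, candidate_scores):
    """AUC of held-out positive pairs vs. candidate (unlabeled) pairs:
    for each test score, the fraction of candidates it outranks, averaged."""
    cand = np.asarray(candidate_scores, dtype=float)
    aucs = []
    for s in test_scores:
        wins = np.sum(s > cand) + 0.5 * np.sum(s == cand)
        aucs.append(wins / cand.size)
    return float(np.mean(aucs))

rng = np.random.default_rng(0)
# Placeholder scores: pretend held-out known pairs score somewhat higher on average
test_scores = rng.normal(loc=0.7, scale=0.1, size=50)
candidate_scores = rng.normal(loc=0.5, scale=0.1, size=5000)
print(f"global LOOCV-style AUC ~ {loocv_auc(test_scores, candidate_scores):.3f}")
```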
Therefore, we implemented the hypothesis testing that the six results of LOOCV The results shown that the p value calculated are all higher than 0.05, indicating that the performance of TLHNSMMA will be similar to the random prediction and hence there are almost no false positives exist in the dataset of known miRNA-SM associations. Case studies Based on the published references in PubMed database, we verified the prediction results of TLHNS-MMA. Through the case studies, we can further confirm the effectiveness of the TLHNSMMA. We ulteriorly observed the number of the verified miRNA-SM associations in the top 10, top 20 and top 50 ones predicted by the computational model. As the result shown, among the top 10, 20 and 50 potential small molecule-miRNA associations, there were 2, 7 and 14 associations confirmed by experiments, respectively (see Table 2). For instance, in the top 10 predicted miRNA-SM associations, the association between mir-21 and diethylstilbestrol (DES) was predicted and ranked eighth. DES is a potent synthetic estrogen and the prototypical endocrine disruptor [47]. Based on the analysis of microarray profiling data, Padmanabhan et al. 's study demonstrated that mir-21 was changed more than twofold and significantly upregulated in the samples from DES-exposed compared to control uteri [48]. The progression of the neonatal DES-induced dysplasia/neoplasia phenomenon in the hamster uterus includes a spectrum of miRNA expression alterations that differ during the initiation and promotion stages of the phenomenon [48]. These findings underscore the need for continued efforts to identify and assess both the classical genetic and the more recently recognized epigenetic mechanisms that truly drive this and other endocrine disruption phenomena [48]. What's more, the association between mir-155 and 5-Fluorouracil (5-FU) was predicted and ranked ninth. 5-FU is a widely used chemotherapeutic drug in colorectal cancer. Using translatome profiling, a clinically relevant dose of 5-FU induces a translational reprogramming in colorectal cancer cell lines [49]. 5-FU increased the mRNA translation of HIVEP2, which encodes a transcription factor whose translation in normal condition is known to be inhibited by mir-155 [49]. In response to 5-FU, the expression of mir-155 decreases thus stimulating the translation of HIVEP2 mRNA [49]. These findings indicate that 5-FU promotes miRNA-dependent mechanisms [49]. In the top 20 predicted miRNA-SM associations, we also revealed the potential association between mir-146a and 5-FU ranked thirteenth. This association is demonstrated by Khorrami et al. [50]. In their studies, drug resistance in transfected HT-29 cells was analyzed following treatment with 5-FU [50]. The results showed overexpression of miR-146a enhanced regulatory T cells' frequencies in peripheral blood mononuclear cells [50]. The next prediction is between mir-155 and 17β-Estradiol (E2). In estrogen responsive breast cancer cells, E2 is a key regulator of cell proliferation and survival [51]. Mir-155 is the most significantly up-regulated miRNA in breast cancer [52]. Treatment with E2 in MCF-7 cells increased miR-155 expression, promoting proliferation and decreasing apoptosis of MCF-7 cells [53]. The results demonstrated that E2 promoted breast cancer development and progression possibly through increasing the expression of miR-155 [53]. Besides, the sixteenth predicted association between mir-34a and 5-FU was verified by Li et al. 
Table 1 Performance evaluation comparison between TLHNSMMA and SMiR-NBI in global LOOCV, SM-fixed local LOOCV, miRNA-fixed local LOOCV and fivefold cross validation based on Dataset 1 and Dataset 2. The corresponding AUCs of TLHNSMMA are shown in the third column and compared with the AUCs of SMiR-NBI in the fourth column.
Inhibition of lactate dehydrogenase A by mir-34a resensitizes colon cancer cells to 5-FU [54]. The nineteenth predicted association is between mir-145 and 5-FU. Akao's study confirmed that exposure to 5-FU significantly increased the intracellular level of mir-145 in 5-FU-sensitive human colon cancer DLD-1 cells [55]. In addition, knockdown of mir-221 in 5-FU-resistant esophageal adenocarcinoma cells resulted in reduced cell proliferation, increased apoptosis, restored chemosensitivity, and inactivation of the Wnt/β-catenin pathway mediated by altered DKK2 expression [56]. These results support the association between mir-221 and 5-FU, predicted by TLHNSMMA as the last entry in the top 20. The case studies fully demonstrate the outstanding performance of TLHNSMMA. Therefore, we further released the prediction list of all potential miRNAs associated with all the SMs in Dataset 1, together with their association scores predicted by TLHNSMMA (see Additional file 1: Table S1).

Discussion
MiRNAs play significant roles in the development and progression of multiple human complex diseases and have been discovered to be targeted by SMs. Therefore, more and more attention has focused on the identification of miRNA-SM associations in diseases, which would be helpful for developing novel and effective miRNA-associated therapeutic strategies. In this article, we integrated SM side effect similarity, SM chemical structure similarity, gene functional consistency-based similarity for SMs and miRNAs, disease phenotype-based similarity for miRNAs and SMs, disease semantic similarity, Gaussian interaction profile kernel similarity for diseases, known miRNA-SM associations and known miRNA-disease associations into a triple-layer network. Finally, an iterative updating algorithm based on the triple-layer heterogeneous graph was introduced to obtain new miRNA-SM associations. The reliable results from cross validation on Dataset 1 and Dataset 2 and from the case studies demonstrate that TLHNSMMA is a reliable and effective prediction tool. Its useful performance can be attributed to several factors. Firstly, the known experimentally confirmed miRNA-SM associations from the highly reliable SM2miR v1.0 database [44] and the miRNA-disease associations from the reliable HMDD v2.0 database [57] used in the model ensured its effectiveness. Secondly, several reliable biological datasets were integrated into the heterogeneous graph. Unlike some machine learning-based models, TLHNSMMA requires only positive samples as training data. In general, since the negative samples in machine learning-based models are randomly selected, this inaccurate selection process affects a model's prediction accuracy; therefore, the prediction accuracy of TLHNSMMA is more convincing than that of prediction models that require negative samples for training. Finally, global network information was used to predict potential associations between miRNAs and SMs.
Compared with local network information, the advantages of global network information have been confirmed in previous research on identifying new disease-associated genes, new disease-associated miRNAs [58,59], new disease-associated lncRNAs [60] and potential drug-target interactions [61]. Furthermore, TLHNSMMA took full advantage of global network information by establishing an iterative process that propagates information across the heterogeneous network, which promotes effective prediction. Of course, several limitations of TLHNSMMA still need to be overcome in the future. TLHNSMMA cannot predict potential SM-associated miRNAs for SMs without any known related miRNAs, nor potential miRNA-associated SMs for miRNAs without any known related SMs. Besides, there is no powerful approach to obtain the optimal parameters for TLHNSMMA. Finally, the number of experimentally verified miRNA-SM associations is insufficient: there are merely 664 experimentally verified miRNA-SM associations, and more known associations between miRNAs and SMs need to be confirmed in the future. Although TLHNSMMA has significantly improved prediction ability compared with previous methods, the current prediction accuracy is still not satisfactory based on the LOOCV evaluation and the case studies.

Small molecule-miRNA associations
The miRNA-SM association dataset used in this study was acquired from the SM2miR v1.0 database [62]. The dataset contains 664 distinct experimentally confirmed miRNA-SM associations. Dataset 1 in this paper consists of 831 SMs and 541 miRNAs, only some of which are involved in the 664 known associations. Dataset 2 consists of 39 SMs and 286 miRNAs that are all involved in the 664 known associations. An adjacency matrix A is then defined to represent known miRNA-SM associations: if SM s(i) is related to miRNA m(j), the entry A(i, j) is 1, otherwise 0. Furthermore, the variables ns and nm denote the numbers of SMs and miRNAs, respectively.

Human miRNA-disease associations
The human miRNA-disease association dataset used here was downloaded from the HMDD v2.0 database [57]. In this paper, known disease-related miRNAs that do not appear in the dataset of known miRNA-SM associations mentioned above were deleted. As a result, we obtained 6233 known miRNA-disease associations and established an adjacency matrix B to represent them: if miRNA m(i) is related to disease d(j), the entry B(i, j) is 1, otherwise 0. Similarly, the variable nd denotes the number of diseases in the dataset.

SM side effect similarity
We obtained SM (drug) side effects from SIDER [63]. Here N(i) denotes the set of side effects related to SM s(i). The similarity is based on the idea that the more side effects two SMs share, the more similar the two SMs are; if two SMs have no common side effects, their side effect similarity is 0. The entry S^s_S(i, j) is used here to denote the side effect similarity of SM i and SM j, and the Jaccard score [64] is used to calculate it, where the notation |X| denotes the cardinality of set X. SIMCOMP (http://www.genome.jp/tools/simcomp/) was originally developed as a graph-based method for comparing chemical structures and is one type of chemical structure similarity search service.
In this work, SIMCOMP [65] was used to calculate SM chemical structure similarities, which were collected from the DRUG and COMPOUND sections of the KEGG LIGAND database [66]. SIMCOMP is a graph-based approach that searches for a maximal common subgraph isomorphism by finding the maximal cliques in an association graph, and it reflects a global similarity score. The approach considers different environmental factors of the same atom and has been widely applied to the identification of drug-target interactions. Similarly, S^C_S(i, j) denotes the chemical structure similarity between SM i and SM j.

Disease phenotype-based similarity for miRNAs and SMs
miRNA-related diseases were extracted from the HMDD v2.0 [57], miR2Disease [67] and PhenomiR [68] databases. Disease phenotype-based similarity for miRNAs was defined using the Jaccard equation (Eq. (1)) according to the assumption that the more diseases two miRNAs share, the more similar the miRNAs are. Here N(i) denotes the set of diseases related to miRNA m(i), and the entry S^D_M(i, j) indicates the disease phenotype-based similarity between miRNA i and miRNA j. Similarly, SM-related diseases were extracted from the Comparative Toxicogenomics Database (CTD) [69], DrugBank [70] and the Therapeutic Targets Database (TTD) [71]. The entry S^D_S(i, j), defined here using the Jaccard score, indicates the disease phenotype-based similarity between SM i and SM j.

Gene functional consistency-based similarity for SMs and miRNAs
We obtained the target genes of each miRNA from TargetScan [72]. The similarity is based on the assumption that if the targets of two miRNAs have functional consistency, the similarity between the two miRNAs is greater. The Gene Set Functional Similarity (GSFS) method [73] was used in this paper to reflect the functional consistency similarity between two miRNAs by calculating the functional consistency of their target gene sets [73]. The entry S^T_M(i, j) indicates the gene functional consistency-based similarity between miRNA i and miRNA j. The target genes of the SMs were obtained from DrugBank and TTD, and the entry S^T_S(i, j) indicates the gene functional consistency-based similarity between SM i and SM j.

Integrated SM similarity
In this study, we constructed the integrated SM similarity from the SM side effect similarity [74], the gene functional consistency-based similarity for SMs [75], the SM chemical structure similarity [76] and the disease phenotype-based similarity for SMs [74]. In order to reduce the deviation of each similarity and balance the four similarities, a weighted combination strategy was developed to integrate them, as shown in Eq. (2), which defines the integrated SM similarity S_S. Here, the default value β_j = 1 indicates that each individual similarity has the same weight.

Integrated miRNA similarity
The integrated miRNA similarity was established in this model by combining the gene functional consistency-based similarity for miRNAs and the disease phenotype-based similarity for miRNAs [74,75]. Similarly, a weighted combination strategy was used to integrate the similarities, defining the integrated miRNA similarity S_M. Here, the default value α_i = 1 means that each individual similarity has the same weight.

Disease semantic similarity model 1
Disease semantic similarity was obtained by combining two models on the basis of the disease directed acyclic graph (DAG) [77].
As illustrated in the literature [78], the semantic information of disease d(i) is described by a DAG in which d(i) and its ancestor diseases are the nodes. The DAGs were retrieved from the U.S. National Library of Medicine MeSH descriptors at https://www.nlm.nih.gov/mesh/. DAG(D) = (D, T(D), E(D)) represents disease D, where T(D) is the node set consisting of node D itself and its ancestor nodes, and E(D) contains the edges between child and parent nodes. The contribution of a disease d to the semantics of disease D is defined so that the contribution of disease D to its own semantic value is 1, while the contribution of other diseases to the semantic value of disease D decreases as the distance between those diseases and disease D increases; the rate of this decrease is governed by the semantic contribution factor. The semantic value of disease D is then obtained by summing the contributions of all diseases in its DAG. According to the assumption that two diseases with larger semantic similarity share a larger part of their DAGs, the semantic similarity between diseases d(i) and d(j) in disease semantic similarity model 1 is computed from the contributions of the diseases shared by their two DAGs, normalized by the semantic values DV1(d(i)) and DV1(d(j)).

Disease semantic similarity model 2
Different disease terms in the same layer of DAG(D) may appear in different numbers of disease DAGs. For example, if a first and a second disease appear in the same layer of DAG(D) and the first disease appears in fewer disease DAGs than the second, we can conclude that the first disease is more specific than the second. Therefore, the contribution of the first disease to the semantic value of disease D should be higher than that of the second. The contribution of a disease d in the DAG to the semantic value of disease D is accordingly defined from the ratio of the number of DAGs including d to the total number of diseases (the rarer a disease is across DAGs, the larger its contribution), and the semantic similarity between diseases d(i) and d(j) in model 2 is calculated from these contributions in the same way as in model 1.

Gaussian interaction profile kernel similarity for diseases
Based on the idea that similar diseases are more likely to be related to miRNAs with similar functions, we calculate the Gaussian interaction profile kernel similarity for diseases by building a binary vector IP(d(u)) to represent the interaction profile of disease d(u) with each miRNA, i.e., the corresponding column of the adjacency matrix B. The Gaussian interaction profile kernel similarity between diseases d(u) and d(v) is then defined from the squared Euclidean distance between IP(d(u)) and IP(d(v)), where the parameter γ_d controls the kernel bandwidth; γ_d is obtained by normalizing a raw bandwidth γ'_d by the average number of related miRNAs per disease.

Integrated disease similarity
We introduced a directed acyclic graph (DAG) to describe each disease based on the MeSH descriptors. The semantic similarity score was calculated based on the assumption that two diseases whose DAGs share a larger area have a greater similarity score. In fact, DAGs could not be obtained for all diseases; in other words, for a specific disease without a DAG, the semantic similarity score between that disease and other diseases cannot be calculated. Therefore, for disease pairs with a semantic similarity score, the semantic similarity score was used to denote the disease similarity; for the others, the Gaussian interaction profile kernel similarity score was used.
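As a concrete illustration of the Gaussian interaction profile kernel similarity just described, the following is a minimal sketch, assuming the miRNA-disease adjacency matrix B (miRNAs in rows, diseases in columns) is available as a NumPy array; the function name and the default value of γ' are illustrative, not taken from the original implementation.

```python
import numpy as np

def gaussian_disease_similarity(B, gamma_prime=1.0):
    """Gaussian interaction profile kernel similarity between diseases.

    B: binary miRNA-disease adjacency matrix of shape (nm, nd), so the
       interaction profile IP(d(u)) is the u-th column of B.
    """
    IP = B.T.astype(float)                         # one interaction profile per disease
    # bandwidth: gamma' normalized by the average squared profile norm per disease
    gamma_d = gamma_prime / np.mean(np.sum(IP ** 2, axis=1))
    # squared Euclidean distances between all pairs of profiles
    sq_norms = np.sum(IP ** 2, axis=1)
    sq_dist = sq_norms[:, None] + sq_norms[None, :] - 2.0 * IP @ IP.T
    return np.exp(-gamma_d * np.clip(sq_dist, 0.0, None))
```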
Accordingly, the integrated disease similarity matrix S_D was constructed by integrating disease semantic similarity model 1, disease semantic similarity model 2 and the Gaussian interaction profile kernel similarity for diseases.

TLHNSMMA
Based on the guilt-by-association principle [79,80], potential miRNA-SM associations can be predicted by constructing a two-layer heterogeneous network from the integrated miRNA similarity, the integrated SM similarity and the known miRNA-SM associations. Likewise, novel miRNA-disease associations can be inferred by constructing a two-layer heterogeneous network from the integrated miRNA similarity, the integrated disease similarity and the known miRNA-disease associations. Therefore, here we infer potential miRNA-SM associations in the newly developed three-layer model by integrating the known miRNA-SM associations, the known miRNA-disease associations, and the integrated similarities for SMs, miRNAs and diseases using an information flow-based method (see Fig. 4). To establish new associations between SMs and diseases that originally have no associations, we calculate a new matrix W_sd^new (Eq. (12)), which incorporates the integrated miRNA similarity S_M, the miRNA-SM associations W_sm and the miRNA-disease associations W_md. Based on the SM-disease associations established above, new associations between SMs and miRNAs, W_sm^new, can be constructed using the SM-disease associations W_sd, the integrated disease similarity S_D and the miRNA-disease associations W_md (Eq. (13)); this equation infers new associations between miRNAs and SMs by taking diseases into consideration. Moreover, new associations between miRNAs and diseases, W_md^new, can be obtained simultaneously by incorporating SM information (Eq. (14)), where S_D represents the integrated disease similarity and the superscript T indicates the transpose of the corresponding matrix. We treat W_sd^new as a temporary value and substitute it into the right-hand sides of Eqs. (13) and (14), which can then be rewritten as Eqs. (15) and (16). In view of the above, Eqs. (15) and (16) incorporate, respectively, all diseases related to miRNAs and SMs together with their similarities, and all SMs related to miRNAs and diseases together with their similarities. More importantly, Eq. (15) is potentially more powerful in predicting unobserved miRNA-SM associations because it takes disease information into consideration. The new associations between SMs and diseases, a by-product of the model, can be predicted using Eq. (12); the same holds for Eq. (16). Once the new miRNA-SM associations W_sm^new and the new miRNA-disease associations W_md^new are obtained, we can build an iterative updating procedure. To integrate the initial miRNA-SM associations and the initial miRNA-disease associations into the predictions, the final model is built as Eqs. (17) and (18), where α is a decay factor in the range (0, 1); A is the adjacency matrix of known miRNA-SM associations acquired from the SM2miR v1.0 database, with A(i, j) = 1 if SM s(i) is linked with miRNA m(j) and 0 otherwise; and B is the adjacency matrix of known miRNA-disease associations downloaded from HMDD v2.0, with B(i, j) = 1 if miRNA m(i) is linked with disease d(j) and 0 otherwise. In each iteration, the known miRNA-SM association matrix A and the known miRNA-disease association matrix B contribute to the newly constructed interaction matrices W_sm^(k+1) and W_md^(k+1).
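To make the overall flow concrete, the sketch below gives one plausible instantiation of a triple-layer propagation of this kind; it is a schematic reading of the description above, not the paper's exact Eqs. (12)-(18). It assumes NumPy association matrices A (SM x miRNA) and B (miRNA x disease), integrated similarity matrices S_S, S_M and S_D, a decay factor alpha, and a simple row normalization standing in for the degree-based edge-weight normalization; all names and the precise propagation operator are illustrative.

```python
import numpy as np

def row_normalize(M):
    """Simple stand-in for degree-based normalization of edge weights."""
    s = M.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0
    return M / s

def propagate(A, B, S_S, S_M, S_D, alpha=0.4, tol=1e-6, max_iter=1000):
    """Schematic triple-layer information flow with decay blending.

    A (ns x nm): known SM-miRNA associations; B (nm x nd): known miRNA-disease
    associations; S_S, S_M, S_D: integrated SM, miRNA and disease similarities.
    Each iteration routes scores through the SM-disease bridge layer and blends
    the propagated scores with the original associations, weighted by (1 - alpha).
    """
    W_sm, W_md = A.astype(float).copy(), B.astype(float).copy()
    for _ in range(max_iter):
        W_sd = W_sm @ S_M @ W_md                         # SM-disease scores via shared miRNAs
        W_sm_new = alpha * row_normalize(W_sd @ S_D @ W_md.T) + (1 - alpha) * A
        W_md_new = alpha * row_normalize(W_sm.T @ S_S @ W_sd) + (1 - alpha) * B
        converged = np.abs(W_sm_new - W_sm).sum() < tol  # L1-norm stopping rule
        W_sm, W_md = W_sm_new, W_md_new
        if converged:
            break
    return W_sm, W_md
```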
The contribution is controlled by the scale factor 1 − α, where α is the decay factor. We chose the same decay factor α = 0.4 as in [81], which used the same triple-layer heterogeneous network, so the original known associations carry slightly more weight. By iteratively applying Eq. (17), the association score between a miRNA and an SM eventually accounts for all possible paths connecting them in the constructed triple-layer heterogeneous network; the same holds for the new miRNA-disease associations obtained with Eq. (18). These two iterative update equations can be viewed as simulating a process in which each node with prior information propagates the information obtained in the previous iteration to its neighbors. To account for the relation between the end-points of an edge and the probability of observing an edge between the same end-points in a random network with the same node degrees, the weight of each edge was normalized according to the degrees of its end-points. With proper normalization, the two iterative update equations converge, which is summarized as a theorem [82]: they become stable after a number of steps, and the final probability scores of the potential miRNA-SM associations and miRNA-disease associations are obtained (iteration stops when the change between W_sm^(k+1) and W_sm^k, measured by the L1 norm, is less than a given cutoff, here set to 10^-6). W_sm^k and W_md^k defined in Eqs. (17) and (18) converge after proper normalization (the proof can be found in Additional file 2).

Authors' contributions
JQ implemented the experiments, analyzed the results, and wrote the paper. XC conceived the project, developed the prediction method, designed the experiments, analyzed the results, revised the paper, and supervised the project. JQL, YZS and ZM analyzed the results and revised the paper. All authors read and approved the final manuscript.
8,415.4
2018-06-26T00:00:00.000
[ "Computer Science", "Medicine", "Biology" ]
Optimization of Fixture Number in Large Thin-Walled Parts Assembly Based on IPSO
There is a large body of research on fixture layout optimization for large thin-walled parts. Current research focuses on the positioning problem, i…

Introduction
Large thin-walled parts are commonly used components in aircraft manufacturing, ship building, and other industrial fields. They are usually assembled to construct outer shells, such as ship hulls and fuselages, which provide the necessary space for passengers and cargo. This kind of part has large in-plane dimensions, with lengths and widths of 3 m-10 m, while the thickness is very small (1 mm-10 mm), which leads to low out-of-plane stiffness. Due to this low out-of-plane stiffness, deformation under the action of gravity occurs easily. The deformation affects the final assembly quality, reducing both the surface quality and the service life of products. To suppress part deformation in the assembly process, the "N-2-1" locating principle came into being [1]. According to this principle, N (N > 3) fixtures are placed on the main datum plane to reduce the deformation (shown in Figure 1). Research shows that this principle not only restricts the degrees of freedom of large thin-walled parts, but also reduces their deformation. However, when arranging fixtures on the basis of the "N-2-1" locating principle, how to arrange the N fixtures reasonably has become a key problem for engineers. Therefore, much research on the optimal design of fixture layouts has been done.

Most research on fixture layout optimization is carried out under the assumption that the number of fixtures is known. Researchers combine finite element models with optimization algorithms to optimize the fixture layout. Cai et al. [1] divided the plate into meshes with the finite element method (FEM); the deformation at the grid nodes was used to express the deformation of the plate, and a nonlinear programming algorithm was applied to obtain the optimal fixture layout that reduces the overall deformation. Ahmad et al. [2-4] proposed using the overall strain energy to represent the deformation of the part; they used the FEM to calculate the strain energy and then applied different optimization methods. Bi et al. [5] used the partial least squares regression method for the optimization. Wu et al. [6] combined a genetic algorithm (GA) with FEM to find an appropriate layout of the auxiliary supports of a blade in order to suppress its deformation. ANSYS and ABAQUS are commonly used finite element analysis (FEA) software packages. Liao and Wang [7] combined ANSYS and MATLAB to build the finite element model and then used a mode-pursuing sampling method to search for an appropriate fixture layout. Hajimiri et al. [8] used ABAQUS for FEA and calculated the part deformation; they optimized the fixture layout and clamping sequence with a GA. Xiong et al. [9] and Yang et al. [10] used Python to modify the parameters of the finite element model so as to calculate the part deformation under different fixture layouts, and different heuristic algorithms were utilized for the optimization.

In addition to calculating the deformation directly in the FEA software, some researchers also chose to derive the stiffness matrix and modify it to calculate the deformation. Such an approach can be summarized as building a finite element solver. Du et al.
[11] adopted the direct stiffness method (DSM) to obtain the deformation of each node. They modified the stiffness matrix according to the modification rules proposed by Wu et al. [12] and then obtained the deformation according to Hooke's law. Liu and Hu [13] adopted the method of influence coefficients (MIC) to deal with the stiffness matrix. Camuz et al. [14] used MIC to obtain the plastic distribution of all nodes on the sheet metal; the method they proposed can efficiently improve the accuracy of deformation prediction.

The above studies are carried out after the number of fixtures has been determined based on engineering experience. In most assembly processes for large thin-walled parts, the N fixtures are uniformly fixed on the X-Y plane (illustrated in Figure 1), while the heights of the N locators in the Z direction are adjusted by workers according to the profile of each part to be assembled. Because of the large in-plane dimensions, the current dense and uniform fixture layout makes the value of N very large, which brings long fixture setup times, extra assembly workload, and high cost in the assembly process. Therefore, to reduce extra cost and improve assembly efficiency while meeting the assembly accuracy requirements, the number of fixtures needs to be optimized.

Li and Melkote [15] employed the sequential quadratic programming technique to optimize the number of fixtures; they designed an iterative synthesis algorithm, and simulation results showed that their approach improved the workpiece location accuracy significantly. Wang et al. [16] proposed an approach that combined FEM with nonlinear programming algorithms to obtain appropriate fixture positions and numbers. To optimize the fixtures' position and number, Liao [17] proposed a method based on GA; the deformation due to the gravity effect was also minimized, and the method was applied to an industrial case that demonstrated its practicability. Yang et al. [18] constructed a grey model to link the maximum deformation of the parts to the number of fixtures, and the number of fixtures was found under the allowable maximum deformation. Khodabandeh et al. [19] came up with a novel idea that combined the FEM and a multi-objective ant colony algorithm, optimizing both the number and the positions of the clamps; this method was proved effective by the fixture layout optimization of an automotive side reinforcement. Aderiani et al. [20] utilized evolutionary optimization algorithms and compliant variation simulation of the assembly, intending to optimize several fixture layout parameters simultaneously; two cases from the automotive industry were studied to prove that the presented method was effective.
Although researchers have done some work on the optimization of the fixture number, these studies still have limitations. First, most studies only consider the fixture layout of one part, while the assembly process involves at least two parts; it is therefore necessary to consider the fixture layout optimization of two parts at the same time. Second, most of the studies aim to reduce the overall part deformation. However, in large thin-walled part assembly, in addition to the part deformation itself, there is also the assembly gap between the two parts. As many thin-walled parts are assembled by seam welding, the size of the assembly gap has a great impact on the welding quality, so controlling the assembly gap size is of great significance for improving the assembly quality. Thirdly, the optimization of the number of fixtures was mostly carried out in a trial-and-error fashion in the past: researchers increased or reduced the number of fixtures one by one at each stage, looked for the optimal fixture layout at each step, and finally obtained the smallest number of fixtures. This optimization method has cumbersome steps and low efficiency. Therefore, an improved particle swarm optimization (IPSO) algorithm is proposed in this paper to optimize the number of fixtures for large thin-walled parts.

The optimization model is constructed in this paper. The DSM is used to calculate the part deformation and lays the foundation for the deformation and assembly gap control. Then, with the IPSO algorithm, the number of fixtures is optimized. Finally, taking ship curved panel assembly as a case, the feasibility of our method is proved. The arrangement of this paper is as follows: Section 2 introduces the construction of the optimization model, and Section 3 introduces the optimization algorithm. The case study of ship curved panel assembly is discussed in Section 4. Finally, the conclusions are summarized in Section 5.

Formulation of Fixture Layout Optimization for Large Thin-Walled Parts
According to the "N-2-1" locating principle, the layout scheme of the N fixtures plays a critical role in reducing part deformation in the assembly process. However, each additional fixture or adjustment of a fixture produces additional cost. To reduce waste and improve profits, it is essential to design an optimization model that minimizes the number of fixtures while meeting the assembly requirements. This section describes the optimization problem, the decision variables, the optimization objective and the constraints, and constructs the optimization model of the fixture layout.

Decision Variables and Optimization Objectives
In the "N-2-1" principle, the N fixtures placed on the main datum plane are aimed at reducing part deformation, two fixtures are located on the second datum plane, and one fixture is placed on the third plane. Among them, the fixtures on the main datum plane have the greatest influence on the part deformation; therefore, this paper mainly considers the layout optimization of the N fixtures on the main datum plane. According to engineering practice, fixtures cannot be located on sharp curves, so these areas should be subtracted from the fixture design space. If the fixture positions were represented by coordinates, it would be difficult to design constraints when the part shape is irregular. To simplify the problem, the fixture positions are expressed by the discrete variables X = [x1, x2, x3, ..., xN], where X is a vector representing the fixture layout and xi (i = 1, 2, 3,
..., N) denotes the position of each fixture, represented by the index of a finite element mesh node; N denotes the fixture number. The specific method is as follows. Using the finite element analysis software, the two large thin-walled parts are divided into multiple grids, and it is assumed that the fixtures are arranged at the grid nodes. There are n1 nodes and n2 nodes for part 1 and part 2, respectively. Considering that a fixture cannot be arranged on a sharp edge, the nodes at the edges of the parts are excluded from the optional range in this paper. NF1 and NF2 represent the numbers of nodes that cannot be selected on the two parts, respectively; in other words, there are (n1 + n2 − NF1 − NF2) nodes on the two parts where fixtures can be located. Cs represents the set of optional nodes, so that xi ∈ Cs, i = 1, 2, 3, ..., N. The optimization objective of this paper, given in Eq. (1), is to minimize N(X), where N(X) is the number of fixtures when the fixture layout is X.

Constraints
Large thin-walled parts have low out-of-plane stiffness, which makes them easy to deform. Excessive part deformation also causes an excessive assembly gap between the two parts, resulting in a decline in assembly quality. Therefore, while optimizing the number of fixtures, constraints are needed to make sure that the part deformation and the assembly gap meet the requirements. The DSM is used to help calculate the part deformation and the assembly gap.

Firstly, the overall part deformation is considered. The large thin-walled part is divided into multiple grids through finite element analysis, and the nodal deformation is used to characterize the overall deformation. To calculate the deformation, this paper assumes that the deformation at each node is linear elastic. Based on Hooke's law, the overall stiffness equation can be written as Eq. (2), F = KU, where F ∈ R^n represents the force vector, K ∈ R^(n×n) is the global stiffness matrix, U ∈ R^n is the deformation vector, and n is the number of nodal displacements. The deformation at each node of the part can be calculated by expanding Eq. (2). To improve the calculation efficiency, this paper uses the DSM [11, 12] to calculate U. According to the DSM, U is obtained by adjusting the stiffness matrix and the force vector. Suppose the αth nodal displacement uα is known; the remaining equations (i = 1, 2, 3, ..., n, i ≠ α) are adjusted accordingly. In the fixture layout process, when a fixture is set on node α, it means that uα = 0. According to the direct stiffness method, the displacement boundary conditions imposed by the fixture can thus be expressed easily: for example, if α = 3, i.e., u3 = 0, Eq. (3) is modified by replacing the corresponding row and column of the stiffness equation. Through this method, the original Eq.
(2) can be rewritten in the adjusted form of Eq. (5), where F'1(X) and K'1(X) are the adjusted force vector and stiffness matrix of part 1, respectively, F'2(X) and K'2(X) denote the adjusted force vector and stiffness matrix of part 2, respectively, and U1(X) and U2(X) are the nodal deformation vectors of part 1 and part 2, respectively. During assembly, the surface profile tolerance of the large thin-walled part must meet the requirement. In this paper, the surface profile is expressed by the nodal deformation, so the requirement for the surface profile tolerance is reflected by the constraints of Eq. (6), where u1,ix(X), u1,iy(X), u1,iz(X), u2,jx(X), u2,jy(X), and u2,jz(X) are the linear displacements of the ith node of part 1 and the jth node of part 2 in the X, Y and Z directions, respectively, and ε is the surface profile tolerance requirement.

Besides the surface profile, the influence of the assembly gap between the two parts on the assembly quality is also considered in this paper. In fact, if the assembly gap between the two plates is too large, it greatly affects the quality of the welding. The assembly gap is evaluated at each node along the assembly edge, where ψk(X) denotes the assembly gap at node k under fixture layout X and m0 is the number of nodes along the assembly gap. H(X) is the maximum assembly gap, which needs to be less than σ; this yields the constraint of Eq. (8). In general, the optimization model therefore contains three inequality constraints. To simplify the model, the number of inequality constraints should be reduced, so the penalty function δ(X) is introduced, where δ0 is a positive parameter much greater than σ. The three inequality constraints of the original problem can then be written as the single constraint H'(X) = H(X) + δ(X) ≤ σ in Eq. (11). It can be proved that Eq. (11) is equivalent to the three inequality constraints in Eq. (9). The proof proceeds as follows: (1) when X violates either deformation constraint, δ(X) is at least δ0, which is much greater than σ, so H'(X) is greater than σ; (2) when H(X) is greater than σ, because δ(X) is greater than or equal to 0, H'(X) must be greater than σ; (3) when X satisfies all three inequality constraints, δ(X) equals 0 and H(X) is less than or equal to σ, so H'(X) ≤ σ holds. The optimization model is thus simplified to minimizing N(X) subject to H'(X) ≤ σ.

Method for Fixture Layout Optimization for Large Thin-Walled Parts
After constructing the optimization model, we need to apply an optimization algorithm to calculate the optimal number of fixtures. As a classical heuristic algorithm, the particle swarm optimization (PSO) algorithm has a simple structure and fast search speed, and it is often used to solve fixture layout optimization problems [21-24]. To further improve the search ability for the fixture layout optimization, the IPSO algorithm is proposed. This section introduces the IPSO algorithm and the specific optimization process.

Improved Particle Swarm Optimization Algorithm
Kennedy and Eberhart [25] first introduced the PSO algorithm. PSO simulates the behavior of birds gathering towards the same target in a multidimensional space. According to the actual requirements, we improve the PSO algorithm. Sections 3.1.1, 3.1.2 and 3.1.3 respectively introduce the IPSO algorithm from three aspects: mapping between fixture layouts and particles, updating particle velocity and position, and selection of optimal fixture layouts.
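Before turning to the IPSO components, the constraint evaluation described in Section 2 can be summarized in code. The following is a minimal sketch, assuming the stiffness matrix, force vector and node pairing along the weld seam are available as NumPy data; the function names, the data layout, and the way the gap ψk(X) is formed from paired nodal displacements are illustrative assumptions, while the penalty form H'(X) = H(X) + δ(X) follows the description above.

```python
import numpy as np

def solve_deformation(K, F, fixed_dofs):
    """Direct stiffness method: impose zero displacement at fixture DOFs, then solve K U = F."""
    K, F = K.astype(float).copy(), F.astype(float).copy()
    for a in fixed_dofs:            # DOFs constrained by fixtures (u_a = 0)
        K[a, :] = 0.0
        K[:, a] = 0.0
        K[a, a] = 1.0               # row/column replaced so that u_a = 0
        F[a] = 0.0
    return np.linalg.solve(K, F)

def penalized_max_gap(U1, U2, seam_pairs, eps, delta0):
    """H'(X) = H(X) + delta(X): maximum assembly gap plus a penalty that fires
    when any nodal displacement violates the surface-profile tolerance eps."""
    gaps = np.array([abs(U1[i] - U2[j]) for i, j in seam_pairs])  # psi_k(X) along the weld seam
    H = gaps.max()
    delta = 0.0 if (np.max(np.abs(U1)) <= eps and np.max(np.abs(U2)) <= eps) else delta0
    return H + delta
```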
Mapping Between Fixture Layouts and Particles
There are NP particles in the PSO algorithm, and every particle represents a solution of the problem. For the optimization model in this paper, each particle represents a fixture layout with N* dimensions, where N* is the maximum total number of fixtures for the two parts. The mth particle can be expressed as X_m = [x_m,1, x_m,2, ..., x_m,N*], where x_m,l (l = 1, 2, 3, ..., N*) indicates the index of the node where the lth fixture is located. A fixture can be placed on any node of the part except the nodes on the edges. However, considering the large number of optional nodes and the large solution space, it becomes difficult to search for feasible solutions. To improve the efficiency, the optional position range of each fixture is constrained: the original set Cs is divided into N* subsets that have no intersection with each other and whose union constitutes Cs. For the lth fixture, its position satisfies x_m,l ∈ C_s^l, where C_s^l represents the set of optional nodes for the lth fixture.

As the assembly process involves two parts and the fixtures are scattered over them, a particle is also composed of two parts: one part represents the fixture layout on part 1 and the other represents the fixture layout on part 2. As shown in Figure 2, the first n*1-dimensional variables represent the positions of the n*1 fixtures for part 1, while the last n*2-dimensional variables represent the positions of the n*2 fixtures for part 2, where n*1 and n*2 are the maximum numbers of fixtures for part 1 and part 2, respectively, and their sum equals N*. In the general PSO algorithm, each variable has a range, and once a variable exceeds the range it is pulled back. However, the goal of this paper is to optimize the number of fixtures. Therefore, when a variable is out of range, it is not pulled back into the range but is instead regarded as the position of a virtual fixture; the number of fixtures is reduced in this way. For example, suppose that after iteration t, X_m(t) = [x_m,1, x_m,2, x_m,3, ..., x_m,p, ..., x_m,q, ..., x_m,N*], where x_m,p is outside the optional range C_s^p and x_m,q is outside the optional range C_s^q. Then x_m,p and x_m,q are regarded as positions of virtual fixtures, the number of real fixtures becomes (N* − 2), and the fitness value of the mth particle N(X_m) equals (N* − 2). In this way, the number of fixtures is optimized.

Updating Particle by Integrating Shrinkage Factor and Adaptive Inertia Weight
During the iteration, the velocity V_m(t), the current position X_m(t), the best position the particle reached in previous iterations X_m^pbest(t), and the best position of all particles X^gbest(t) determine the new position of particle m. Every dimension of the velocity V_m(t + 1) and the position X_m(t + 1) is updated with the standard PSO velocity and position update rule, where the subscript d denotes the dth element of V_m(t + 1), V_m(t), X_m^pbest(t), X_m(t), X^gbest(t) and X_m(t + 1), respectively; r1 and r2 are random real numbers in the range [0, 1], c1 and c2 are the learning factors, and Vmax and Vmin are the maximum and minimum values of the particle velocity.
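A minimal sketch of this update, together with the shrinkage factor and adaptive inertia weight discussed in the following paragraphs, is given below. The constriction-factor form and the fitness-based interpolation of ω between ω_min and ω_max are standard choices assumed here for illustration; they are not taken verbatim from the paper's Eqs. (13)-(14), and all names and default values are illustrative.

```python
import numpy as np

def update_particle(x, v, pbest, gbest, w, c1=2.05, c2=2.05, v_min=-5.0, v_max=5.0):
    """One velocity/position update with a constriction (shrinkage) factor."""
    phi = c1 + c2                                        # must exceed 4 for the constriction form
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))
    r1, r2 = np.random.rand(len(x)), np.random.rand(len(x))
    v_new = chi * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    v_new = np.clip(v_new, v_min, v_max)                 # keep velocity in [Vmin, Vmax]
    return x + v_new, v_new

def adaptive_inertia(fit_m, fit_min, fit_avg, w_min=0.4, w_max=0.9):
    """Smaller weight (more local search) for particles whose fitness is close to the best."""
    if fit_m <= fit_avg and fit_avg > fit_min:
        return w_min + (w_max - w_min) * (fit_m - fit_min) / (fit_avg - fit_min)
    return w_max
```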
Learning factors c1 and c2 reflect the information exchange between particles. Setting a larger c1 causes too many particles to search within a local range, while a larger c2 makes the particles more likely to become trapped in local optima. To control the velocity of the particles and let the algorithm balance local search and global search, the shrinkage factor ϕ is introduced [26]; this adjustment can ensure the convergence of the PSO algorithm [27]. The improved velocity and position update equations follow accordingly (Eq. (13)).

In the PSO algorithm, the inertia weight coefficient ω is a very important parameter. It indicates the ability of a particle to maintain its previous motion state and also helps the algorithm balance local search and global search: the larger ω is, the stronger the global search ability, while a small ω strengthens the local search ability. For the optimization algorithm, we hope that it not only has good global search ability to find better solutions, but can also accurately search the local space around better solutions. Therefore, many researchers have been interested in improving the inertia weight coefficient ω [28-31]. This paper introduces an adaptive inertia weight [32]. The inertia weight ω of the mth particle is computed from ω_min and ω_max, the preset minimum and maximum inertia weights; N(X(t)), the average fitness of all particles in the tth iteration; min(N(X(t))), the minimum fitness in the tth iteration; and N(X_m(t)), the fitness of the mth particle in the tth iteration. The meaning of this equation is as follows: the smaller the fitness value is, the closer the particle is to the optimal solution, which means a greater need for local search; conversely, global search is more needed.

Selection of Optimal Fixture Layouts
Figure 3 illustrates the specific flow of the IPSO algorithm.
As for the m th particle, the best position it reached in previous t iterations is recorded as X pbest m (t) .After the (t + 1) th iteration, the resulting fixture layout is recorded as X m (t + 1) .The process of updating X pbest m (t + 1) is shown in Algorithm 1.The process of updating X pbest m (t + 1) is actually a process of selecting a better solution between X pbest m (t) and X m (t + 1) .H ′ and N are two important criteria for evaluating which solution is better.There are many situations when comparing H ′ and N , so we listed how to make choices in various situations in Algorithm 1. After updating X pbest m (t + 1) , X gbest (t + 1) is waiting to be updated.X gbest (t + 1) is recorded as the swarm's best position after (t + 1) iterations.It is selected from is chosen from two solutions, while X gbest (t + 1) is cho- sen from NP solutions, the update procedures are differ- ent.The new swarm's best position X gbest (t + 1) is updated according to Algorithm 2. H ′ and N are both considered when updating X gbest (t + 1). Process of Fixture Layout Optimization By combining the FEM with the IPSO algorithm, this paper optimizes the number of fixtures for large thin-walled parts.The specific optimization steps are shown in Figure 4. Step 1. Build a three-dimensional model according to the shape and size of the part.After modeling, export the data interaction file of the model. Step 2. Meshed the model by the finite element meshing tool, and derive the data interaction file of the model.The number of finite element mesh nodes of two parts and the coordinates of each node are extracted from the data interaction file. Step 3. Read the data interaction file using the FEA software.Set the material parameters.According to the actual situation of the assembly and splicing of the two thin plates, reasonably simplify the model, set that there is no interaction between the two parts and load gravity.After the parameter setting is completed, the stiffness matrixes and force vectors of the two parts are derived and recorded as K 1 , K 2 , F 1 and F 2 respectively. Step 4. Use MATLAB to read the derived stiffness matrixes and force vectors.Set the positions of the three fixtures placed on the 2nd and 3rd datum planes according to the "N-2-1" positioning principle, and modify K 1 , K 2 , F 1 and F 2 . Step 5. Apply the IPSO algorithm to iteratively find the number and position of fixtures that meet all Select the optimal solution according to H ′ and N. Step 6. Validate and visualize the optimization results.According to the optimal fixture layout, we found out that the FEA software is used to design constraints at the corresponding grid nodes of the finite element model.Carry out finite element simulation, obtain the deformation nephogram of the parts, and complete the result verification and visualization. 
Case Study
A case of ship curved panel assembly is studied to demonstrate the effectiveness of our method. The curved panels used in ship assembly have the characteristics of large size and low out-of-plane stiffness and deform easily under the effect of force; therefore, it is essential to design the fixture layout reasonably to reduce the deformation. Section 4.1 introduces the background and significance of the problem and constructs the finite element model. The optimization results are shown in Section 4.2. In Section 4.3, the optimization result of this method is compared with that obtained by an iterative trial-and-error procedure based on the simulated annealing (SA) algorithm [11], which shows the advantages of this method.

Problem Description of Ship Curved Panel Assembly
Large curved panels are widely used in ship building. Hundreds of large curved panels with different shapes and sizes are assembled and welded together to form the ship hull. With large in-plane dimensions, small thickness and low out-of-plane stiffness, large curved panels deform very easily under the action of gravity. The deformation of the panels introduces an assembly gap in the assembly process, and a non-compliant assembly gap affects the welding quality and efficiency [33]. Therefore, it is necessary to control the assembly gap dimension in the ship assembly process.

It is a common approach to restrain the part deformation by using reasonably arranged fixtures. At present, a uniformly distributed jig frame is usually used to support the large curved panels and restrain the deformation in the assembly process. In fact, this experience-based fixture layout not only makes it difficult to achieve satisfactory results in restraining deformation, but also causes unnecessary waste due to the excessive number of fixtures. To reduce the assembly cost and effectively restrain the deformation, the fixture number and the corresponding layout need to be optimized.

The research objects of this case are two ship parts located at the bow; part 1 and part 2 are two curved parts. The welding method used in the part assembly process is laser welding or arc welding, and during the welding process the two curved parts are joined by a long continuous welding seam [11]. The lengths of the four sides of part 1 are 5200 mm, 4800 mm, 2800 mm and 4200 mm; the lengths of the four sides of part 2 are 5200 mm, 5100 mm, 1500 mm and 1900 mm. Both parts are 6 mm thick. The density, Poisson's ratio, and Young's modulus of the two parts are 7.85×10^-3 g/mm3, 0.3, and 210000 N/mm2, respectively. HyperMesh 13.0 was used for mesh generation, with a mesh size of 100 mm × 100 mm. Part 1 is divided into 1751 elements with 1828 nodes; part 2 is divided into 826 elements with 892 nodes. Figure 5 shows the finite element model. This case only considers the influence of the gravity of the parts themselves on the deformation during the assembly process; the gravity is evenly distributed on the two parts in the Z direction, and the gravitational acceleration is set as 9.81 m/s2.
After establishing the finite element model, the stiffness matrices K1 and K2 and the force vectors F1 and F2 are derived. Table 1 shows the input parameters, which are set according to the obtained model and the known information in the assembly process. The parameter settings of the IPSO algorithm are shown in Table 2. The number of particles, the number of iterations, and the particle velocity variation range are determined through trial and error; to balance the search time and search scope, these parameters are set to the values in Table 2. The variable dimension is determined according to the maximum number of fixtures on the two parts. The values of the learning factors and the variation range of the inertia weight are set according to Ref. [26].

Results of Optimal Design
After setting the initial parameters, an appropriate fixture layout is found after several iterations. This section introduces the optimized fixture layout and the changes in deformation and assembly gap before and after optimization. After 25 iterations, the optimized number and positions of fixtures are obtained. The iterative process is shown in Figure 6. Table 3 shows the number of fixtures on the two parts and the nodes where the fixtures are located before and after optimization. Before optimization, there are 28 fixtures on part 1 and 14 fixtures on part 2, and the fixtures are evenly distributed on the two parts in the X-Y plane (shown in Figure 7). After optimization, the number of fixtures on part 1 is m1 = 21 and the number of fixtures on part 2 is m2 = 11, so the total number of fixtures is reduced from 42 to 32. Figure 8 shows the X-Y plane projection of the optimized fixture layout.

Although the number of fixtures is reduced, the optimized fixture layout still controls the deformation well through the optimization of the fixture positions. Table 4 shows the deformation of the parts under the different fixture layouts. Comparing the dimensions of the gap between the two parts, the mean gap is 1.32 mm and the maximum gap is 1.86 mm when the fixtures are evenly distributed, while the optimized fixture layout controls the mean gap to 0.37 mm and the maximum gap to 0.69 mm, reductions of 72.0% and 62.9%, respectively. It can also be found that when the number of fixtures is 42, the uniformly distributed fixtures cannot effectively control the part deformation: the maximum nodal deformation of the part is 4.53 mm. With the optimized fixture layout, the maximum deformation of the parts is 2.20 mm, a 51.4% reduction. In terms of average deformation, the mean deformation is 0.78 mm before optimization and 0.36 mm after optimization, a reduction of 53.8%.

To intuitively show the part deformation under different fixture layouts, the deformation diagrams of the parts are drawn using ABAQUS 6.14. Figure 9 shows the deformation of the parts under the uniform layout and the optimized layout. It can be found from the figure that most of the nodal deformation of the two parts is well controlled when the fixtures are evenly distributed; however, the deformation of some nodes at the edge of part 2 is severe. This is because, when the fixtures are evenly distributed, the fixtures at the edge are arranged sparsely due to the size and shape of part 2. After optimization, the deformation of both parts is well controlled because the deformation constraints are set before optimization, and the maximum deformation of part 2 is reduced from 4.53 mm to 2.20 mm.
Figure 9 also shows the assembly gap between the two parts before and after optimization. It can be found from Figure 9 that when the fixtures are uniformly distributed, the size of the assembly gap between the two parts increases along the X direction and the gap is obvious. With the same deformation scale factor in ABAQUS 6.14, the assembly gap between the two parts is significantly reduced after optimization. Figure 10 shows the distribution histograms of the nodal deformation before and after optimization. Figure 10(a) shows the deformation when the uniform layout is adopted: the displacement of most nodes is less than 1 mm and mostly concentrated in 0.5-1 mm, but the displacement of a few nodes is not well controlled and exceeds 3 mm. Figure 10(b) shows the deformation after the fixture layout is optimized: the nodal displacement is concentrated between 0-0.5 mm, and only a small fraction of the nodal displacements exceeds 1 mm. Figure 11 shows the gap dimensions at each node along the assembly edge; compared with those before optimization, the gap dimensions are significantly reduced. Figures 10 and 11 illustrate that, although our objective is to reduce the number of fixtures, the deformation and assembly gap dimensions of the two parts are reduced thanks to the constraints.

Comparison and Discussion
This section demonstrates the reliability and advantages of the proposed method from two aspects. Firstly, we validate the calculation accuracy of the DSM. Secondly, the performance of the IPSO algorithm for fixture number optimization is compared with the traditional trial-and-error procedure.

To prove the accuracy of the calculation results of the DSM, the deformation obtained by the DSM is compared with that obtained by ABAQUS 6.14. Taking the optimized fixture layout as an example, the absolute differences between the deformation results obtained by the two methods are listed in Table 5. The order of magnitude of the absolute differences is 10^-4, so the calculation result of the DSM is credible. This means that using the DSM to calculate the deformation does not require calling the FEA software frequently, and the calculation process is simpler.

Based on the general PSO algorithm, the IPSO algorithm integrates the shrinkage factor and adaptive inertia weight to improve the search ability. To verify the superiority of the IPSO algorithm, we compare the general PSO and IPSO through experiments, running each algorithm 10 times. The parameters of the IPSO algorithm are shown in Table 2, while the PSO algorithm does not have the shrinkage factor and its inertia weight is fixed at 0.9. The results are plotted as a boxplot, as shown in Figure 12. It can be seen from the figure that the overall optimization result obtained by IPSO is better: the median number of fixtures obtained by the IPSO algorithm is 32, while that obtained by the PSO algorithm is 34. In addition, the results obtained by the PSO algorithm are more scattered, which indicates that the search ability of PSO is unstable. The comparison results show that the IPSO algorithm has better and more stable search ability.

It is also essential to compare the method with the traditional iterative trial-and-error process to evaluate its performance. Du et al. [11] used a combination of the DSM and the SA algorithm to optimize the fixture layout; their purpose was to minimize the assembly gap between the two parts, and the number of fixtures was determined before optimization. Based on the method proposed by Du et al., this paper carries out iterative trial-and-error to reduce the number of fixtures, and the trial-and-error procedure is then compared with the IPSO algorithm. The iterative trial-and-error procedure based on the method developed by Du et al.
is as follows: firstly, optimize the fixture layout with a constant number of fixtures until the assembly gap and deformation meet the requirements; then remove a fixture at random and continue to optimize the fixture layout until the requirements are met; iterate until the number of fixtures is reduced to the target. According to the optimization results shown in the previous section, the initial number of fixtures is 42 and the target number is 32. The relevant parameters of the SA algorithm are consistent with those in the research conducted by Du et al. [11], and the data related to the finite element models of the parts are modified according to the case in this paper.

Through the above method, the number of fixtures is reduced. Table 6 shows the comparison between the result found by the iterative trial-and-error process and that of the proposed method. Running the two programs on a laptop with an 11th Gen Intel(R) Core(TM) i5-11400H @ 2.70 GHz processor and 16 GB of RAM, the iterative trial-and-error procedure takes longer to reduce the number of fixtures by the same amount. Besides, the ability of the fixture layout found by the iterative trial-and-error procedure to control deformation and the assembly gap is not as good as that of the layout found by the proposed method: with the fixture layout obtained by the iterative trial-and-error method, the maximum deformation of the two parts is smaller, but the maximum gap, the average gap and the average deformation are not as small as those obtained with the fixture layout found by the IPSO. The results in Table 6 show that the IPSO algorithm has higher efficiency and higher solution accuracy.

Conclusions
(1) An optimization model with the objective of minimizing the number of fixtures is proposed. We designed constraints to ensure that the deformation of the parts and the assembly gap meet the requirements while reducing the number of fixtures. The stiffness matrices and force vectors are derived by the FEA software, and the DSM is used to calculate the deformation. Using the DSM avoids frequently calling the FEA software and makes the process of calculating the deformation easier.
(2) The IPSO algorithm is used for the optimization. Each dimension of a particle represents the position of a fixture. To improve the solution efficiency, we restrict the feasible position of each fixture to a small area, which greatly reduces the number of feasible solutions and avoids multiple fixtures concentrating in the same area. In addition, this paper introduces the adaptive inertia weight and the shrinkage factor to balance local search and global search.
(3) A case of ship curved panel assembly is presented. This case demonstrates that our approach can significantly reduce the number of fixtures while controlling the deformation and assembly gap very well. Comparing the deformation calculated by the DSM with that calculated by FEA proves that the DSM can accurately calculate the deformation. In addition, the IPSO algorithm is compared with the general PSO algorithm, and the result shows that the IPSO algorithm has stronger search ability. Finally, comparing our method with the iterative trial-and-error method proves that our method has higher computational efficiency.
Notation: each particle is a vector whose first n*1 dimensions represent the positions of the fixtures on part 1, while the remaining n*2 dimensions represent the positions of the fixtures on part 2. n*1 and n*2 are the maximum numbers of fixtures for part 1 and part 2, respectively, and their sum equals N*. r1 and r2 are random real numbers in the range [0, 1]; c1 and c2 are the learning factors; Vmax and Vmin are the maximum and minimum particle velocities; ω is the inertia weight coefficient; and the subscript d indicates the d-th dimension of the vectors. Figure and table captions: Figure 2 Sample particle in the IPSO algorithm; Figure 3 Flow chart of the IPSO algorithm; Figure 4 Fixture layout optimization process; Figure 5 Finite element models of the two parts to be assembled; Figure 6 Iterative process of the IPSO algorithm; Figure 7 X-Y plane projection of the fixture layout before optimization; Figure 8 X-Y plane projection of the fixture layout after optimization; Figure 10 Nodal deformation distribution; Figure 11 Comparison of gap dimensions of nodes along the assembly edge before and after optimization; Figure 12 Comparison of IPSO and PSO; Table 1 Setting of input parameters; Table 2 Setting of algorithm parameters; Table 4 Deformation and assembly gap with different fixture layouts (mean gap, max gap, mean deformation, max deformation, in mm); Table 6 Comparison of the two methods.
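The notation above maps directly onto the velocity and position updates used by particle-swarm methods. The Python sketch below illustrates one plausible form of the IPSO update combining a linearly decreasing (adaptive) inertia weight with Clerc's constriction ("shrinkage") factor; the exact adaptive-weight rule and constriction form used in the paper are not reproduced here, so the specific formulas should be read as assumptions.

import numpy as np

def constriction_factor(c1, c2):
    """Clerc's constriction (shrinkage) factor; requires c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))

def ipso_step(x, v, pbest, gbest, it, max_it,
              c1=2.05, c2=2.05, w_max=0.9, w_min=0.4, v_max=1.0):
    """One IPSO iteration for a swarm of particles.

    x, v         -- (n_particles, n_dims) position and velocity arrays;
                    each dimension encodes one fixture position
    pbest, gbest -- personal-best positions and the global-best position
    it, max_it   -- current and maximum iteration (drives the adaptive weight)
    """
    w = w_max - (w_max - w_min) * it / max_it        # adaptive inertia weight (assumed linear)
    chi = constriction_factor(c1, c2)                # shrinkage factor
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v = chi * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    v = np.clip(v, -v_max, v_max)                    # enforce Vmin <= v <= Vmax
    x = x + v
    return x, v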
9,280.6
2024-01-02T00:00:00.000
[ "Engineering" ]
Innovation of Precise and Intelligent Teaching Mode in Vocational Colleges from the Perspective of Big Data : In order to study the precise and intelligent teaching mode of Vocational Colleges from the perspective of big data, promote the effective reform and innovation of teaching mode, and improve the teaching quality and efficiency of vocational colleges, this paper first analyzes the characteristics of the precise and intelligent teaching mode from the perspective of big data, then summarizes the precise and intelligent teaching mode from the perspective of big data, and then analyzes the innovation principles of the precise and intelligent teaching mode from the perspective of big data, Finally, based on the perspective of big data, this paper puts forward several strategies for innovating the precise and intelligent teaching mode of vocational colleges, aiming to meet students' personality and diverse learning needs, promote the high-quality development of the teaching mode of vocational colleges, and provide reference for relevant personnel. Introduction With the rapid development of the Internet, great changes have taken place in the way people live and work. When big data changes people's living environment and thinking perspective, it has become the core engine of innovative Internet technology. Under this situation, vocational colleges have also relied on big data to carry out smart campus construction. Teaching management is gradually developing towards intelligence, accuracy and visualization, which has greatly improved the teaching quality and teaching level. In 2018, the Ministry of education mentioned in the information education plan document that teaching activities should be based on big data mining, collection and analysis, effectively integrated into the Internet, and truly achieve the goal of teaching students according to their aptitude. Therefore, from the perspective of big data, innovating the precise and intelligent teaching mode of vocational colleges is not only the general trend, but also an effective way to promote the innovative development of vocational colleges. Characteristics of Precise and Intelligent Teaching Mode (1) Online and offline integration. The precise and intelligent teaching mode is a combination of online and offline teaching. In terms of online teaching, it is not only a supplement to offline teaching, but also a solid foundation for further offline teaching based on the evaluation and analysis of students' learning situation. At the same time, online teaching is an effective way for students to consolidate, feed back and expand offline teaching content. Therefore, precision wisdom teaching is characterized by the combination of online and offline. (2) Big data aided analysis. Under the precise and intelligent teaching mode, it can help teachers analyze students' learning level, learning status and effect in a more detailed way by analyzing the data of the network platform, accurately grasp students' personality characteristics, so as to build a real-time map of students' knowledge and ability, and provide an auxiliary role for teachers to effectively carry out teaching design and other related work [1]. (3) It has the function of screening information and guiding teaching. In the process of carrying out precise and intelligent teaching, teachers can obtain a large amount of data and information from students' feedback on teaching activities. 
Through effective screening and timely screening, teachers can better understand students' needs, lay a solid foundation for providing a more targeted and more suitable teaching model for students' learning, so as to achieve the goal of accurately promoting learning resources and personalized serving students. (4) It has rich teaching resources. Since the precise and intelligent teaching mode resource library is built on the cloud of the Internet, it contains comprehensive and diverse resources and information, which not only covers video and audio resources of various disciplines, but also has materials and tools such as professional feedback sorting, courseware guidance, exercise consolidation, key and difficult point analysis, which can meet students' different learning needs. Under the background of information interaction and teaching research, both teachers and students can quickly find the necessary resources in precise and intelligent teaching. (5) The interaction between teachers and students is diversified. With the support of intelligent devices and the Internet, the precise and intelligent teaching mode can break the traditional space-time boundaries and barriers, which can not only make the communication between students more three-dimensional and diversified, but also make the communication and feedback between teachers and students more timely, effective and diverse. (6) It has the function of dynamic comprehensive evaluation. Under the precise and intelligent teaching mode, students' evaluation is more comprehensive and scientific, including pre class learning situation analysis, in class real-time monitoring, after-class review and consolidation and other related teaching activities. Then, students' academic situation is comprehensively analyzed according to the data of students' activities and the trajectory of students' activities, so as to achieve the dynamic and comprehensive evaluation goal. Overview of Accurate and Intelligent Teaching Mode Based on Big Data Perspective From the perspective of big data, the precise wisdom teaching mode not only needs to design the common learning progress of all students, but also needs to take into account the development direction of different students' individual abilities. Therefore, when carrying out precise wisdom teaching, we should take students' individual abilities as the teaching core, master students' behaviors and characteristics through data analysis, and lay a foundation for giving full play to the precise wisdom teaching function. In the process of precise and intelligent teaching, teachers should first dynamically and accurately analyze the students' learning conditions such as pre class preview and after class review, and select the appropriate resource base to carry out the teaching work according to the analysis results. If the student groups are different, they can assist the teaching with the help of different teaching resources and learning paths, such as intelligent devices, exercises, class admiration, videos and graphics. After the plan and goal of precise and intelligent teaching are effectively formulated, students can choose learning resources from the main according to their own learning situation and needs, and can also use the tool library pushed by big data to learn and carry out teacher-student cooperation, communication and other activities. 
Under the precise and intelligent teaching mode based on big data perspective, teachers will monitor the whole process and adjust students' teaching and learning methods in real time. At the same time, in case of problems, both teachers and students can timely feed back and evaluate with the help of the big data platform, and effectively improve the quality of teaching while dynamically adjusting the teaching and learning methods [2]. Innovation Principle of Precise and Intelligent Teaching Mode Based on Big Data Perspective Under the background of more and more mature, diversified and information-based education models, teachers' teaching ideas are also gradually changing, and they generally hope to achieve the goal of innovative and diversified teaching methods with the help of big data and modern equipment. However, when teachers apply big data and other modern technologies, they often have shallow use, superficial use and other phenomena, which makes the precise and intelligent teaching model unable to give full play to the functions of communication, intervention, adjustment and preset. Therefore, from the perspective of big data, when innovating the precise and intelligent teaching mode, we should take its operation characteristics as the basis, and carry out the innovation of the precise and intelligent teaching mode on the basis of following the innovative principles of flexible teaching objectives, open and interactive classes, and procedural teaching evaluation. (1) Pay attention to the flexibility of teaching objectives. Different from the traditional teaching mode, precision and wisdom teaching emphasizes exploratory, active and cooperative learning. Therefore, when setting teaching objectives, we should break the traditional thinking, and effectively cultivate students' abilities of practice, communication, reflection and information application on the basis of giving full play to students' standard and initiative, so as to make them become all-round development talents. Therefore, based on the perspective of big data, we should innovate the precise and intelligent teaching mode. We should adjust the teaching objectives in real time according to the results of data analysis and evaluation to ensure that they are flexible. This can not only dynamically analyze the learning situation in different stages of the classroom, but also enhance the intelligence and timeliness of the classroom. (2) Pay attention to the openness and interactivity of the classroom. Compared with the traditional cramming class, the precision wisdom teaching class emphasizes the students' initiative, autonomy and subjectivity in learning. Therefore, based on the big data perspective, the precise and intelligent teaching mode is innovated. Teachers should ensure that students' learning rights are open, so that they can adjust and optimize their learning plans independently to meet the personalized needs of learning context. At the same time, teachers should guide students to carry out group learning, so that they can improve the experience effect and learning effect in open communication and interaction. (3) Pay attention to the process of teaching evaluation. Each student's personality and learning ability are different, and their learning progress and learning effectiveness are also different. 
Therefore, when carrying out accurate and intelligent teaching evaluation based on big data perspective, vocational colleges should change the single evaluation mode in which test scores are the main evaluation indicators, and conduct more accurate and comprehensive evaluation based on the analysis of students' knowledge mastery map and learning curve, so as to enhance students' learning motivation Improve learning effect and provide guarantee [3]. Strategies for Innovating the Precise and Intelligent Teaching Mode of Vocational Colleges from The Perspective of Big Data From the perspective of big data, in order to effectively innovate the accurate and intelligent teaching mode, vocational colleges must change the traditional teaching thinking, reform the existing teaching mode, teaching content and teaching strategy, and carry out teaching based on materials on the basis of paying attention to the characteristics of students' differentiation and personalization. Specifically, vocational colleges can carry out the innovation of precision wisdom teaching mode from six aspects: collecting learning behavior related data, mining and analyzing teaching data, formulating precision wisdom teaching objectives, intelligently pushing course content and teaching strategies, carrying out precision wisdom teaching evaluation and diagnosis, and promoting precision wisdom control and intervention, so as to achieve the innovation objectives of precision wisdom teaching mode To improve the teaching quality and level of vocational colleges. Collect Data Related to Learning Behavior Based on the big data perspective, when innovating the precise and intelligent teaching mode, vocational colleges can first collect and analyze learning behavior and other related data with the help of online vocational education and other cloud platforms. Among them, learning behaviors include learning preferences, enthusiasm, interests, habits, etc. relevant data include discussion participation, login platform frequency, homework completion, browsing learning materials, learning duration, etc. By collecting and analyzing these relevant data, teachers can help them grasp students' learning status dynamically and timely. By recording the collected and analyzed data, they can lay a solid foundation for subsequent data mining. Mining and Analyzing Teaching Data From the perspective of big data, vocational colleges can mine the collected information and data through online vocational education and other cloud platforms, and build a big data teaching center to compare, test and analyze students' behavior, performance and results, so as to accurately and intelligently predict the future performance trend of students. In addition, vocational colleges can use SPSS software to mine and analyze students' motivation, tendency, style, preference, etc. on the basis of the analysis results, they can intelligently diagnose the data, obtain the learning results of each class and student, and push them to each teacher to effectively promote accurate and intelligent teaching. Develop Precise and Intelligent Teaching Objectives On the basis of mining and analyzing the learning situation, potential, trend and other results, vocational colleges should combine these data to quantify the internalized learning behavior into the external precise and intelligent teaching goal, so as to make it measurable and clear. 
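As a purely illustrative sketch of the kind of mining described above (the paper itself mentions SPSS rather than code), the behavioral indicators collected from the platform, such as login frequency, homework completion, study duration, and discussion participation, can be grouped so that each class and student receives a diagnostic label. The feature names and the use of k-means clustering below are assumptions for illustration only.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is one student: [logins per week, homework completion rate,
#                           study hours per week, discussion posts per week]
behavior = np.array([
    [12, 0.95, 9.0, 5],
    [ 3, 0.40, 2.5, 0],
    [ 8, 0.80, 6.0, 2],
    [ 2, 0.30, 1.5, 1],
])

features = StandardScaler().fit_transform(behavior)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Push a simple per-student diagnosis to the teacher's dashboard.
for student, group in enumerate(labels):
    print(f"student {student}: group {group}")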
When formulating precise and intelligent teaching objectives, vocational colleges should fully consider the actual situation such as the characteristics of students, establish sub objectives at each stage on the basis of precise and intelligent decomposition, conduct in-depth analysis and continuous optimization of sub objectives, and finally develop a decision-making database for precise and intelligent teaching objectives, so as to lay a foundation for intelligent promotion of course content. Intelligent Push Course Content and Teaching Strategy After formulating the decision-making database of accurate and intelligent teaching objectives, vocational colleges should design matching course contents and teaching strategies based on the perspective of big data. Vocational colleges can intelligently push course content and teaching strategies by combining big data analysis results and accurate and intelligent teaching goal decision-making database. If some students fail to achieve the goal of accurate and intelligent teaching, vocational colleges should make cyclic adjustments to the course content and teaching strategies, so as to achieve the goal of a virtuous cycle of accurate and intelligent teaching mode [4]. Carry Out Accurate and Intelligent Teaching Evaluation and Diagnosis From the perspective of big data, vocational colleges should accurately and intelligently assess and diagnose the behavior characteristics of students in the whole learning cycle, fairly and objectively assess whether students' enthusiasm, preferences and abilities at all stages of learning have reached the expected precise and intelligent goals, and make multidimensional and authentic diagnosis of learning results. Through the application of cloud platform related data such as vocational education, it can ensure that vocational colleges can more accurately master the effect of learning on the basis of students' personalization and differentiation. While improving the feasibility and accuracy of accurate intelligent teaching evaluation and diagnosis, it can achieve the goal of transforming traditional single evaluation, summary evaluation to comprehensive evaluation and process evaluation. Promote Precise and Intelligent Control and Intervention Based on the big data perspective, in order to effectively innovate the precision wisdom teaching mode, vocational colleges should start from the individual needs of students and take teaching students according to their aptitude as the core to promote the precision wisdom control and intervention. Precise intelligent control and intervention is to use big data to record and diagnose students' behaviors, judge whether they have achieved the sub goals of this stage, and take appropriate control and intervention actions. If the student has achieved the sub goal of this stage, he can start the sub goal of the next stage; If students fail to complete the sub goal of this stage, they will take control and intervention measures for teachers' teaching and students' learning. Generally speaking, the three stages of teaching class, discussion group and students can be controlled accurately and intelligently. By promoting precise wisdom control and intervention, the goal of precise wisdom teaching can be achieved while continuously optimizing teachers' teaching and students' learning methods. 
Conclusion In summary, innovating the precise and intelligent teaching mode in vocational colleges is both an effective way to implement the student-centered education concept and a necessary path toward the modernization of vocational colleges. Vocational colleges should therefore, from the perspective of big data and by making full use of information technology and data-driven methods, innovate teaching and learning methods and achieve personalized, differentiated teaching through collecting data related to learning behavior, mining and analyzing teaching data, formulating precise and intelligent teaching objectives, intelligently pushing course content and teaching strategies, carrying out precise and intelligent teaching evaluation and diagnosis, and promoting precise and intelligent control and intervention. Achieving this innovation of the precise and intelligent teaching mode will provide society with well-rounded, developing, and innovative talent.
3,621.8
2022-06-15T00:00:00.000
[ "Computer Science", "Education" ]
The equation of state of neutron star matter based on the G -matrix and observations G -matrix theory, which has a sound basis both theoretically and phenomenologically in this region. In addition, a phenomenological third-order term of baryon density is introduced to control the stiffness of the EOS in this density region. In the higher-density region, we introduce a parameterized EOS. The adjustable parameters are fixed utilizing the statistical method developed by Steiner et al. [Astro-phys. J. 722 , 33 (2010)] to be consistent with observational data on neutron stars. As a result, we find that an EOS softened by the additional third-order term of baryon density in the density region lower than 3.5–4.0 times the normal density and a stiff EOS in the higher-density region are preferable. The resultant EOS is similar to an EOS with the assumption of the hadron–quark crossover proposed by Masuda et Introduction The average densities of typical neutron stars are about two times the nuclear density (≈ 3.0 × 10 14 g/cm 3 ). Therefore, an explanation of the equation of state (EOS) of neutron stars is one of the fundamental problems in nuclear physics. In fact, many nuclear physicists have attempted to explain the masses of neutron stars with an EOS based on realistic baryon-baryon interactions [1][2][3]. In this context, the hyperon mixing in neutron stars causes too soft an EOS and fails to explain the masses of typical neutron stars with 1.4 solar mass (M ). Further, the recent discovery of the heavy neutron stars PSR J1614+2230 [4] with mass M = 1.97 ± 0.04 M and PSR J0348+0432 [5] with mass M = 2.01 ± 0.04 M has caused great difficulties with this problem. In addition, the appearance of new degrees of freedom softens the EOS and the maximum mass of neutron stars is reduced considerably. Some authors call this difficulty the hyperon puzzle [6]. The hyperon puzzle can be solved in principle by constructing the EOS to give the maximum mass of neutron stars around 2 M . The solution of the hyperon puzzle can be achieved in frameworks based on the model with quarkmeson coupling [7][8][9] and the vector baryon-meson coupling model [10]. The density-dependent relativistic mean-field model involving meson-hadron coupling constants and meson masses also leads to the stiffening of the EOS and a large neutron star mass [11]. In high-density matter, physical effects such as many-body forces, boson condensations, or effects of quark degrees of freedom are expected to be important. Equation of state The EOS of nuclear matter is an important ingredient to determine the M-R relation for neutron stars. Using realistic baryon-baryon interactions, we determine the EOS of high-density β-stable baryonic matter and calculate the M-R relation for neutron stars by solving the TOV equation. We consider three regions in neutron stars as illustrated in Fig. 1. The first region is the crust of neutron stars. Although the mass of the neutron star crust constitutes only ≈ 1% of the neutron star mass, and its thickness is typically less than one-tenth of the star radius, the neutron star crust plays an important role in determining the M-R relation. In this region, we use the Bogomol'nyi-Prasad-Sommerfield (BPS) EOS [16] and its extrapolation up to the transition baryon density ρ crust = ρ 0 /2, where ρ 0 = 0.17 fm −3 is the nuclear saturation baryon density. The second region is the theoretical EOS region defined by the baryon densities from ρ 0 /2 to nρ 0 , where n is a variable. 
This region is assumed to be dominated by baryon-baryon interactions. In this Fig. 1. Pressure p as a function of energy density . In the high-density region, we use a parameterized EOS, which is the general piecewise linear function. We take ρ crust = ρ 0 /2, 2.5 ≤ n ≤ 5, and 0 < ν 1,...,4 ≤ 1. 2/18 Downloaded from https://academic.oup.com/ptep/article-abstract/2016/7/073D02/1752908 by guest on 27 July 2018 region, we assume an energy density given by = theor (ρ) + 3 ρ 3 for ρ crust < ρ < nρ 0 (1) where and ρ are the energy density and the baryon density, respectively. theor is determined by the G-matrix theory with baryon-baryon interactions and the 3 term is introduced as a phenomenological third-order term of baryon density. The 3 term does not directly mean the three-body force effect. The aim of the 3 term is not to construct a consistent model describing both symmetric nuclear matter and neutron star matter, but to control the stiffness of the neutron star matter EOS in the second region. In this work, we assume that the saturation properties of symmetric nuclear matter are not affected by the 3 term. It is well known that, for a given neutron star mass, the neutron star radius depends on how stiff or soft the EOS is. Steiner et al. estimated that the radius of a neutron star with mass 1.4 M is between 10.4 and 12.9 km [17]. Because neutron star radii are mainly decided by the theoretical EOS in the second region, the 3 term plays a role in controlling neutron star radii (see Sect. 3). To determine theor , we perform the G-matrix calculation [18] for β-stable baryonic matter with two models of baryon-baryon interactions, NSC97e [19] and fg2014 [20]. Both models have similar properties for NN and N interactions but give very different predictions for N interactions. The former provides an attractive − -nuclear matter interaction, which causes − mixing in neutron star matter at relatively low densities (around 2 ρ 0 ). The latter provides a repulsive interaction and does not cause − mixing in the second region (n < 5). Recent experimental and theoretical knowledge in hypernuclear physics support the repulsive − -nuclear matter interaction. As a result, NSC97e gives a softer EOS than fg2014 at densities higher than two times the nuclear density. Because NSC97e and fg2014 give relatively soft EOSs, both of them produce neutron stars with small radii. To compare with these two models, we also consider the EOS model K=240 (GM3) [21], which gives larger radii for neutron stars than the NSC97e and fg2014 models. If we employ other models to determine the theoretical EOS in the second region, we may obtain quantitatively different results. This means that it may be possible for us to decide the EOS in this region by using a more accurately observed M-R relation in the future. To determine an adequate theoretical EOS of neutron star matter, we must use the baryon-baryon interaction, which reproduces the symmetry energy at around the saturation density (ρ 0 ). Therefore, we calculate the saturation baryon density (ρ s ), the binding energy (B), the symmetry energy (S v ), and its derivative (L), which are defined by where x = ρ p /ρ denotes the proton fraction (x = 1/2 corresponds to symmetric nuclear matter) and m p (m n ) is the proton (neutron) rest mass. Using the NSC97e and fg2014 models, we obtain (ρ s [fm − where Using NSC97e and fg2014, we obtain (ρ, 0) as shown in Fig. 2 (30.8, 34.6) for NSC97e and fg2014, respectively. S 0 and L 0 are not the same as S v and L defined at x = 1/2. 
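The explicit definitions of the saturation properties quoted above were lost in extraction. For reference, the conventional forms, which are consistent with the quantities used here although the authors' exact expressions may differ, are

B = -\left.\frac{E}{A}\right|_{\rho=\rho_s,\,x=1/2}, \qquad
S_v = \frac{1}{8}\left.\frac{\partial^2 (E/A)}{\partial x^2}\right|_{\rho=\rho_s,\,x=1/2}, \qquad
L = 3\rho_s \left.\frac{\partial S_v(\rho)}{\partial \rho}\right|_{\rho=\rho_s},

where E/A(\rho, x) = \epsilon(\rho, x)/\rho - \left[x\,m_p + (1-x)\,m_n\right] is the energy per baryon and S_v(\rho) is the density-dependent symmetry energy.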
Since neutron star matter is very similar to pure neutron matter, we employ S 0 and L 0 as criteria of the properties at the saturation density. When the contribution from the 3 term is taken into account, we obtain the result shown in Fig. 3. From this figure, to ensure that the symmetry energy does not become too small, we adopt the condition 3 > −0.2 and 3 > −0.1 for NSC97e and fg2014, respectively. From Figs. 2 and 3, we find that NSC97e gives slightly a stiffer EOS than fg2014 around the saturation density. As mentioned above, at higher densities (a few times the saturation density), NSC97e gives a softer EOS than fg2014 because of the effect of hyperon mixing. If we extend these EOSs to high densities, they cannot support a neutron star of 2 M . Various effective approaches have been used to reproduce 2 M neutron stars, such as assuming the hadron-quark crossover [12] and the universal repulsive three-body force effect [13,14]. In this paper, we extend the EOS to higher densities (the third region) using piecewise linear functions. The third region is that with densities higher than n times the saturation density. This region is described by an EOS satisfying the causality condition (dp/d < 1). However, the inner structure of neutron stars is still unknown due to the theoretical and observational uncertainties. Theoretically, the inner cores of neutron stars with very high density are expected to be composed of various exotic particles, such as pions, kaons, hyperons [21], and strange quark states [24,25]. However, the EOSs of dense matter beyond the nuclear density are still quite uncertain in particle and nuclear physics. Therefore, for energy densities above 0 = (nρ 0 ), we assume the pressure p( ) as a parameterized piecewise linear function of the energy density given by where ν i is a constant slope. We use four linear functions (N = 4) with slopes ν 1,...,4 , which make it possible to vary the stiffness of the EOS. In numerical calculations, we have confirmed that N = 4 is robust and the results are unchanged even if additional linear functions are introduced. Note that we parameterize the high-density EOS as a function of the energy density . We choose the transition baryon density nρ 0 = (2.5-5.0) ρ 0 and p 0 is determined by the continuity of p( ) at the transition density 0 = (nρ 0 ). We vary the constant slope parameters over the ranges ν i−1 ≤ ν i ≤ 1. Thus we have a total of six EOS parameters n, 3 , ν 1,...,4 . The parameter is large enough to ensure that the EOS in the high-density region could be parameterized reasonably well. It is convenient to remember that the densities 3-5 ρ 0 correspond to the energy densities 450-800 MeV fm −3 . For ≤ 0.3 fm −4 , it is impossible to parameterize the EOS in the high-density region. For ≥ 0.5 fm −4 , we obtained the result with similar values of ν 3 and ν 4 near the causality limit. Therefore, ν 4 becomes redundant because this is the same as the three-line-segment case. Hence, we choose = 0.4 fm −4 . By extending the EOS to higher densities in this way, we show that it is possible to derive constraints on the EOS and on the M-R relation. Mass-radius relation For a given EOS, masses and radii of neutron stars can be determined as functions of central pressure (or central energy density) by solving the TOV equation [26,27] using the EOS. The TOV equation is given by This is a system of simultaneous first-order ordinary differential equations for the pressure p(r), including the mass m(r) with the EOS p( ). 
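The TOV equations themselves did not survive extraction at this point. In a standard textbook form (geometrized units, G = c = 1) they read

\frac{dp}{dr} = -\,\frac{\left[\epsilon(r)+p(r)\right]\left[m(r)+4\pi r^{3}p(r)\right]}{r\left[r-2m(r)\right]}, \qquad
\frac{dm}{dr} = 4\pi r^{2}\,\epsilon(r).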
We can solve it by integrating from r = 0 with the initial values where c is the central energy density and p c is the central pressure. The neutron star radius R is given by the condition of p(R) = 0, and the neutron star mass M is defined by M = m(R). The piecewise linear function described in the previous section and 3 are used to generate the EOS. Using each EOS, we determine M and R as functions of p c . Plotting M and R for various p c , we obtain an M-R relation for each EOS. If we make the theoretical EOS stiffer in the second region, we obtain larger radii but the maximum mass does not change much, as shown in Fig. 4. This means that accurate determination of neutron star radii is very important to clarify the properties of the EOS around ρ 0 < ρ < nρ 0 (the second region). In Fig. 5, we show the dependence of the M-R relation on the parameterized EOS. By assuming that the parameterized EOS is stiffer, we can obtain a larger maximum mass for neutron stars, but the radii do not change much. Application of statistical methods to constraints on the EOS The mass-radius (M-R) relation is important to determine the EOS of neutron star matter. However, at present, constraints on masses and radii of neutron stars are uncertain. Moreover, the number of neutron stars for which the mass and radius have been measured is very limited. Therefore, we apply a statistical method to obtain the constraints on the EOS of neutron star matter. In this work, we employ the Bayesian statistical method introduced by Steiner et al. [15] for using neutron star observational data to probe suitable EOSs. Based on these analyses, it will be possible to conclude which EOS is the most probable. Bayesian analysis The Bayes theorem shows us how to obtain the quantity of the conditional probability P(M|D) of the model M given the data D. The Bayes theorem [28] is written as follows: In Eq. (11) the masses and radii of neutron stars. Therefore, we need an additional model parameter, e.g., the central pressure p c to uniquely determine the mass and radius for each neutron star. In this treatment, p c for each l is a function of M l and the six EOS parameters. Therefore, M(six EOS parameters, p c for l = 1, . . . , 9) is equivalent to M(six EOS parameters, M 1 , . . . , M 9 ). On the other hand, p c depends strongly on the EOS used and its lower and upper bounds are unclear. Therefore, the p c for l = 1, . . . , 9 are unsuitable as model parameters. This is the reason why we choose to treat the {M l } as model parameters. Substituting in Eq. (12), we have where N = N p + N M = 6 + 9 = 15 is the dimension of our model space, where N p is the total number of EOS parameters and N M is the total number of neutron stars in our data set. Table 1 shows the masses and radii of the 9 observed neutron stars that we use in our paper. In order to apply Eq. (13) to our problem, we assume that P(D|M) is proportional to the product over the probability distributions D l evaluated at the masses that are determined in the model and at the radii that are determined from the model M, i.e., In our calculation, Eq. (13) can be rewritten as where we assume that the prior probability P(M) is uniform under several conditions for model parameters: 2.5 < n < 5, ν i ≤ ν i+1 < 1, 3 > −0.2 for NSC97e (−0.1 for fg2014), and supporting 2 M . For the data D l , we use the probability distributions D l (l = 1, . . . , 9) derived from the 9 neutron star observations listed in Table 1. 
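As a rough illustration of how the posterior in Eqs. (13)-(15) is evaluated, the Python sketch below scores one EOS model against Gaussian mass-radius distributions. The Gaussian form is appropriate only for some of the sources (4U 1608-52 and 4U 1820-30 in the paper); the helper mass_radius_from_eos, the parameter names, and the factorized-Gaussian assumption are placeholders, not the authors' code.

import numpy as np

def log_gaussian(m, r, m0, sig_m, r0, sig_r):
    """log of a factorized Gaussian distribution D_l(M, R)."""
    return (-0.5 * ((m - m0) / sig_m) ** 2 - 0.5 * ((r - r0) / sig_r) ** 2
            - np.log(2 * np.pi * sig_m * sig_r))

def neg_log_posterior(eos_params, masses, observations, mass_radius_from_eos):
    """-log P[M|D] up to a constant, for one point in the 15-dimensional model space.

    eos_params           -- the six EOS parameters (n, eps_3, nu_1..nu_4)
    masses               -- the nine model masses M_1..M_9 (additional model parameters)
    observations         -- list of (m0, sig_m, r0, sig_r) tuples for the nine stars
    mass_radius_from_eos -- placeholder: solves the TOV equation and returns the
                            radius predicted for a given mass under this EOS
    """
    total = 0.0
    for m_l, (m0, sig_m, r0, sig_r) in zip(masses, observations):
        r_l = mass_radius_from_eos(eos_params, m_l)
        total += log_gaussian(m_l, r_l, m0, sig_m, r0, sig_r)
    return -total   # smaller value corresponds to a more probable model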
In this work, we assume that none of the probability distributions D l (M , R) of the 9 neutron stars has probability outside of the ranges M low < M < M high Observed masses and radii of neutron stars Observations of mass and radius information were obtained from astrophysical observations of X-ray bursts and thermal emissions from quiescent low-mass X-ray binaries (LMXBs). When neutron stars pull material away from companion stars, they can become much brighter. Using observation of Xrays at different wavelengths, combined with theoretical models of neutron star atmospheres, we can estimate the relationship between the radius and mass of neutron stars. This work has been performed by Heinke [32], Webb and Barret, as explained in Ref. [33], and Guillot in Ref. [34]. All of these observations were done for neutron star binaries in globular clusters. Because of thermonuclear explosions on their surfaces, the atmospheres of neutron stars expand. If observers catch one of these bursts, they can calculate its surface area based on the cooling process. After that, when this calculation is combined with an independent estimate of the distance to the neutron star, the mass and radius of this star can be estimated. Ozel and Guver have applied this technique in Refs. [29][30][31]35]. The papers referred to above provide information about the neutron star's M-R relation and we use this information to construct probability distributions D l . In detail, the probability distributions D l (M , R) for 4U 1608-52 (l = 1) and 4U 1820-30 (l = 3) are described as the Gaussian distribution with the values given in Table 2. For EXO 1745-248 (l = 2), we use where the values of the parameters are given in Table 3. The values of A l in Eqs. (16) and (17) we estimated the probability distributions shown in Figs. 6, 7, and 8. Our probability distributions are similar to those given by Steiner et al. [15] but not the same. Results from the statistical analysis In Table 4, we summarize the results of the conditional probability − log P[M|D] and the EOS parameters. A smaller − log P[M|D] corresponds to a better fit, i.e., the EOS used is more probable. We can see that, for all models of the EOS, good fits are obtained with small 3 and large slopes ν 3 , ν 4 . This means a soft EOS in the theoretical region, stiff in the high-density region. For each model of the theoretical EOS, corresponding to the best cases of − log P[M|D], we draw the M-R relation to compare with probability distributions of observed neutron stars in Figs. 9, 10, and 11. Our results support the suggestion that neutron stars with mass 2.0 M have small radii 9-10 km. Figures 12 and 13 show the EOS bands consisting of EOSs that fulfill the following three conditions: (1) the EOS supports neutron stars with masses larger than 2.0 M . In the region = 150-600 MeV/fm 3 , the upper limit of the EOS band is determined by condition (2) and the lower limit by condition (3). Condition (1) provides a stiff EOS in the region ≥ 650 MeV/fm 3 . Because of bad results, we discard and do not draw the EOS band for the K=240 model. 10 The reason why the K=240 model has bad results is because most of the neutron star observations used in this work have small radii (see Figs. 9, 10, and 11). Because the K=240 model gives a stiff EOS in the second region, this model supports 1.4 M neutron stars with large radii (∼ 13 km). On the other hand, most of the neutron star observations that we use have small radii (∼ 10 km). 
In the future, if the radii of neutron stars precisely determined in observations are larger than those used here, this result will surely change. We also discard all the cases with n ≥ 4.5 because they cannot support neutron stars with M ≥ 2.0 M unless large values of 3 (≥ 1), which give bad − log P[M|D], are used. When looking at the contours of the distribution probabilities D l (M , R), most of them support neutron stars with small radii, except X7 in 47 Tuc. Because the differences between the radii of X7 and the others are nearly 5 km, it is impossible to obtain an M-R relation that fits all 9 distribution probabilities. Also for this reason, the − log P [M|D] values are somewhat bad (the best value that we found is ∼ 20). Moreover, the widths of the contours are large and two separate peaks for KS 1731-260, U24, EXO 1745-248 are found. These uncertainties allow many cases of EOSs with similar − log P[M|D]. 12/18 Downloaded from https://academic.oup.com/ptep/article-abstract/2016/7/073D02/1752908 by guest on 27 July 2018 Finally, we consider the case of n = 4. We can see that, with negative values of 3 , it is impossible to support neutron stars with mass 2 M though a very stiff parameterized EOS is used. To compare with observational data, the M-R relations are shown in Figs. 9, 10, and 11. These M-R curves suggest that a relatively small radius (9-10 km) is consistent with our above remarks on the soft theoretical EOS. The EOS, which is stiff in both the theoretical and the parameterized EOS regions, may not be denied. However, this type of EOS cannot give high probability (− log P[M|D] > 25) because of large radii and is disfavored by the present 9 observational pieces of data used in this work. More neutron star observations with mass and radius constraints would enable us to improve our results. The most probable EOS bands are shown in Figs. 12 and 13. In these figures, it is remarkable that, for both NSC97e and fg2014, the EOS bands rapidly become stiff at the energy density ∼ 600-650 MeV/fm 3 (around 3.5-4.0 times saturation density). Various effective approaches have been used to reproduce 2 M neutron stars, such as assuming the hadron-quark crossover [12] and the universal repulsive three-body force effect [13,14]. If the G-matrix EOS is adopted around the normal nuclear density, a stiff EOS is preferable in the high-density region with 2 ρ 0 so as to account for the existence of 2 M neutron stars. As well as making the EOS stiffer at high 15/18 Downloaded from https://academic.oup.com/ptep/article-abstract/2016/7/073D02/1752908 by guest on 27 July 2018 densities, many phase transitions have been considered, such as superfluid transitions [36], kaon or pion condensates [37,38], hyperon matter [39], etc. Eventually, phase transitions from nuclear matter to quark matter [40,41] are expected at very high densities. In general, the EOS becomes soft after the phase transition. The behavior of our EOS is in contrast to the general phase transitions but is similar to that assuming the hadron-quark crossover [12], which leads to a stiffening of the EOS. However, we note that our results are obtained based on Bayesian analysis and do not depend on specific physical assumptions about neutron star matter. Therefore, this behavior of our EOS must be confirmed in the future. Below the transition density, our most probable EOS bands are softer than the EOS bands suggested by Steiner et al. [15] but, above the transition density, our EOS bands are stiffer. 
Our EOS bands support neutron stars with radii 9-10 km whereas Steiner et al.'s support neutron stars with radii 11-12 km [15]. This difference comes from the differences between our input probability distributions and those of Steiner et al. While Steiner et al. performed their calculations with six observational pieces of data, we add data on NGC2808, U24, and KS 1731-260 to our calculation. Our probability distributions of 4U 1608-52 and 4U 1820-30 are constructed based on central masses, central radii, and their uncertainties, which were determined by Guver et al. [29,30]. On the other hand, using the Monte Carlo method, Steiner et al. [15] constructed them based on their own calculations of Eddington fluxes, angular areas assuming different parameters from Guver et al. For this reason, in the work by Steiner et al., the 4U 1608-52 and 4U 1820-30 probability distributions consist of 1 or 2 peaks (depending on the photospheric radius assumption) while, in our work, they consist of only one peak. It is important to note that, if the probability distributions as input change, the results would change accordingly. In Fig. 14, we show calculated profiles of neutron stars with masses M = 1.48 M and 2.0 M . For heavy neutron stars with mass 2.0 M , we find that a small radius (R < 10 km) implies very high energy densities in the central region, strongly depending on the EOS in the second region (ρ 0 < ρ < nρ 0 ). In the case of the canonical Summary By combining nuclear matter theory and astrophysical observations, we have constructed the EOS of neutron star matter. By using realistic baryon-baryon interactions, we determined the EOS of highdensity β-stable baryonic matter and calculated the M-R relation of neutron stars by solving the TOV equation. We have introduced three regions in neutron stars. The first region is the crust of neutron stars. The second region is dominated by baryon-baryon interactions with densities lower than n times the saturation density, where n is a variable. In this region, we introduce a phenomenological thirdorder term of baryon density. The third is the region with densities higher than n times the saturation density. This region is described by a parameterized EOS satisfying the causality condition. The adjustable parameters are fixed utilizing the statistical method developed by Steiner et al. [15] to be consistent with observational data on neutron stars. The recent discovery of neutron stars with 2 M poses many challenges to theoretical physics. In general, the EOS based on the G-matrix theory is thought to be too soft to account for even 1.4 M neutron stars. However, we have indicated that neutron stars with mass 2 M can be reproduced by stiffening of the EOS at high densities. By applying the Bayesian statistical method, we have found that an EOS softened by the additional third-order term of baryon density in the second region and a stiff EOS in the third region are preferable. These EOSs lead to a constraint on the symmetry energy of S 0 ≤ 32.08 MeV for the NSC97e model (31.47 MeV for fg2014) and its density derivative L 0 ≤ 44.08 MeV for the NSC97e model (39.39 MeV for fg2014). In addition, based on the most probable EOS bands, we predict a rapid change of stiffness around 3.5-4.0 times the saturation density. The behavior of our EOS is in contrast to the general phase transitions but is similar to that assuming the hadron-quark crossover [12], which leads to a stiffening of the EOS.
6,060.6
2016-07-01T00:00:00.000
[ "Physics" ]
Assessment of voltage stability based on power transfer stability index using computational intelligence models Received May 21, 2020 Revised Oct 15, 2020 Accepted Dec 19, 2020 In this paper, the importance of voltage stability is explained, which is a great problem in the EPS. The estimation of VS is made a priority so as to make the power system stable and prevent it from reaching voltage collapse. The power transfer stability index (PTSI) is used as a predictor utilized in a PSN to detect the instability of voltages on weakened buses. A PSI is used to obtain a voltage assessment of the PSNs. Two hybrid algorithms are developed. The (CA-NN) and the (PSO-NN). After developing algorithms, they are compared with the actual values of PTSI NR method. The algorithms installed on the 24 bus Iraqi PS. The actual values of PTSI are the targets needed. They are obtained from the NR algorithm when the input data is Vi, δi, Pd, Qd for the algorithm. The results indicate that a weak bus that approaches voltage collapse and all results were approximately the same. There is a slight difference with the actual results and demonstrated classical methods are slower and less accurate than the hybrid algorithms. It also demonstrates the validation and effectiveness of algorithms (CA-NN, and PSO-NN) for assessing voltage-prioritizing algorithms (CA-NN). The MATLAB utilized to obtain most of the results. INTRODUCTION Arrange of situations where system operators are unable to keep the voltage profile across a system within adequate operational limits constitutes the voltage stability problem in electric power systems. The imbalance between expansion of the system and growth in demand constitutes its long-term causes. Stability may be lost if a minor emergency arises in a system that is already stressed. This will cause a voltage collapse, which is the most serious result of voltage instability. The system will, after a voltage collapse, be dismantled due to the operations of protective devices in the system. It can be said that a stable power system is the spine of industrial and scientific development in every sector in today's world. The importance of frequent, thorough, power system stability studies is underscored by the frequent blackouts in many countries across the globe. The use of new technologies and controls has recently precipitated a significant growth of power systems across the globe. The need for dynamic security assessment of power systems has increased because of the rise in operations that may drive power systems into high stress conditions. A new method of evaluating the transient stability of power systems is proposed in [1][2][3]. It utilizes a probabilistic neural network (PNN). It explains how the PNN is utilized to evaluate transient stability. To A thorough analysis of the stability of the digital single-loop voltage control with linear P or R regulators is presented in [4]. An analysis of the effect of modulation delays and digital computation on system stability is done for the single-loop P or R voltage control schemes. In each case, the critical frequency, above which the nonnegative phase margin (PM) can be preserved, is gotten. Taking into account the effect of the discretization methods of the R controllers, the stability region of the single-loop R voltage is determined. A new procedure for assessing and monitoring voltage stability margins based on ANNs with reduced input data set is developed in [5]. 
The establishment of the minimal input data set required for the monitoring and assessment of voltage stability is the ultimate purpose of this research. A method that considers the cascading failure of power systems to analysis their angle stability is proposed in [6]. The transfer probability between the elements in the set is calculated by applying the discrete Markov theory to define the cascading failure process thus establishing the system's operating condition set taking into account stochastic events based on the flow transfer theory. The fast processing characteristics of the MLP architecture and the richness provided by the dynamic simulation technique, with the aim of using it in reallife applications, is presented in [7]. The ANN field has, in the past few years, undergone rapid development. It promises potential advantages in efficient computation and the ease of acquiring knowledge. To make multilayer perception networks capable of indicating if and when a voltage collapse might happen in the future, a dynamic model of a power system was used to train it to acquire such capabilities. A sequence is proposed in [8] and a fast method of computing the minimum singular value of a Jacobian matrix is presented in [9]. A study of the application of static voltage stability indices on power systems is presented in this paper. The power system's steady-state voltage limit operating point was also examined. To extract nondominated solutions the improvement of the particle swarm optimization algorithm and its implementation is proposed as shown in [10]. An external store is employed to save all non-dominated solutions during the process of evolution. A vague decision-making method is thereafter utilized to categorize these solutions according to their importance. A high temperature superconducting fault current limiter (HTS-FCL) capable of improving system stability and reducing short circuit currents is shown in [11]. Voltage magnitudes and phase values are used as inputs of ANN in [12]. It considerably enhanced the accuracy of the load active power margin estimation for the New England 29 bus system. Phasor measurement units (PMUs) can provide phase angles and voltage magnitudes for real time applications. Three methods are developed in this paper. Two are hybrid algorithms (PSO-NN and CA-NN). The third method is used to evaluate the PTSI's real values. A comparison of the three algorithms was then made. The MATLAB software was used to obtain all the results. The results validate the use of the new algorithm to estimate the voltage stability assessment. The algorithms were tested on the 24 bus Iraqi power system network. POWER TRANSFER STABILITY INDEX (PTSI) There are many indicators through which it is possible to determine the voltage stability of the electrical power system, but the PTSI indicator is easy to apply and quickly obtain results compared to other indicators. It is useful, in voltage stability analysis, to assess the power systems' voltage stability by utilizing a PTSI and scalar magnitudes that can be observed as the system parameters changes. These indices can be employed by operators to intuitively determine when the system is close to voltage collapse. This knowledge will enable them to react in a timely manner. As shown in Figure 1, the PTSI is derived from the consideration of a simple two bus Thevenin equivalent system with a slack bus linked to a load bus by a one branch [13]. The proposed PTSI index can be defined as (1). 
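Equation (1) did not survive extraction. A form commonly quoted for the PTSI in the literature, consistent with the symbols defined immediately below (the exact expression in the paper may differ slightly), is

\mathrm{PTSI} = \frac{2\,S_L\,Z_{Thev}\,\bigl[1+\cos(\beta-\alpha)\bigr]}{E_{Thev}^{2}}.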
where, α is the phase angle of the load impedance, β is the phase angle of the Thevenin impedance, EThev is the Thevenin voltage, SL is the apparent power and ZThev is the Thevenin impedance. In (1) the PTSI at every bus is evaluated utilizing the impedance and load impedance phase angles, load voltage and voltage Thevenin. The PTSI value ranges from 0 to 1 (voltage collapse). The PTSI index value should be kept at less than one to maintain a secure condition [14]. In this part, the voltage stability of power system is studied by gradually increasing the loads until the voltage collapse point is reached, the load is increased regarding as loading factor ( ) which leads to voltage collapse point of power systems. where, is the initial active power at any load bus, is the initial reactive power at any load bus and is the loading factor. MATERIALS AND METHODS The aim in this paper is to study the 400 kV network and its transmission lines and bus bars of Iraqi electrical networks. This network has 30 transmission lines with a total length of 3664.6 Km and 24 bus bars. Its configuration is shown in [15,16]. The following materials and methods are used to simulate the results. Artificial neural network (ANN) Artificial neural network (ANN) serves the objective providing a model which has the ability to relate very complex input [Vi, δi, Pd, Qd] and output [PTSI] datasets. Network training means finding optimal values for the various network weights and biases. Typically, different types of techniques are used to find appropriate values for ANN weights and biases [17,18]. An ANN is a network of neurons interconnected through weights and biases [19]. A typical ANN model is shown in Figure 2. Hybrid algorithm cultural-neural network (CA-NN) for PTSI As shown in Figure 3, the CA is comprised of three major components: communication protocol, population space and belief space [20,21]. In this technique, the desired solution can be achieved by applying basic cultural factors such as creating a population space and a belief space, accepting and updating the belief space, creating progeny vectors by mutation and selecting the best vectors. The method presented here generates the original vectors uniformly distributed within limits on the number of iterations. Compared to other meta-heuristic algorithm, CA is a robust method. It has a better and fast convergence rate; it is more computationally efficient and it is a well-suited optimization method for many multi-objective optimization problems [22]. The cultural approach is used in this paper to find the PTSI in power system networks to make an assessment of the voltage stability. To select the best weights for a neural network, the proposed method expands cultural-neural network CA-NN. The first CA proposed is a process of social development in which behavioral traits are learned. This is presented in [23]. The CA is a high-level searching method which passes acquired knowledge from one generation to the other making succeeding generations more knowledgeable and more equipped to survive. The basic idea of using CA with neural network is to influence the assessment operator so that the current knowledge stored in the search space can be properly exploited. The CA is used to find the best weights for neural networks; then they are both used to assess the voltage stability of power systems. The CA-NN is employed to improve the search process to increase its precision and make it faster. 
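To make the hybrid CA-NN idea concrete, the sketch below shows the piece that either metaheuristic (CA or PSO) actually evaluates: a small feed-forward network whose weights and biases are packed into one candidate vector and scored by the MSE between predicted and actual PTSI. The layer sizes, the sigmoid activation, and the packing scheme are illustrative assumptions, and the sketch is in Python although the paper's networks were built in MATLAB.

import numpy as np

def unpack(theta, n_in=4, n_hidden=10):
    """Split a flat candidate vector into the weights and biases of a 4-H-1 network."""
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden)
    i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]
    i += n_hidden
    W2 = theta[i:i + n_hidden].reshape(n_hidden, 1)
    i += n_hidden
    b2 = theta[i:i + 1]
    return W1, b1, W2, b2

def predict_ptsi(theta, X):
    """Forward pass: inputs [Vi, delta_i, Pd, Qd] -> predicted PTSI per bus."""
    W1, b1, W2, b2 = unpack(theta, n_in=X.shape[1])
    hidden = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # sigmoid hidden layer
    return (hidden @ W2 + b2).ravel()

def mse_objective(theta, X, ptsi_actual):
    """Objective minimized by CA-NN / PSO-NN: mean squared error against NR targets."""
    err = predict_ptsi(theta, X) - ptsi_actual
    return float(np.mean(err ** 2))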
From the power flow, obtain [Vi, δi, Pd, Qd] as the input data, and the PSTI (actual) values are the targets. The CA mathematical model is derived from (3)(4)(5)(6)(7)(8). where, lj t and uj t represent the lower and upper limits of parameter j in general t. The higher limit's recounting and its values are similar to the former one. It can be deducted by analogy. and +1 represent the culture normative at iteration k and k+1 respectively. The settings of CA-NN: The population size is 30, the maximum number of iterations is 70, the acceptance ration is 0.35, and alpha is 0.3. Hybrid algorithm particle swarm optimization-neural network (PSO-NN) for PTSI Inspired by insect swarms, the PSO is derived from the simulation of social rather than from natural evolution as is the case in the evolutionary (genetic) algorithms. The algorithm is very simple. Being a population-based algorithm, it has proven to be a great tool for solving optimization problems. The PSO model is made up of several particles (each representing a possible solution to a numerical problem) that moves about searching for space. Every particle has a velocity vector (Vi) and a position vector (Xi). Every particle, connected to the best solution (fitness) it has achieved so far, follow its coordinates in the problem space. This value is called pbest. The overall best value is another best value traced by the global version of the particle swarm optimizer. Its position is gotten by any particle in the population. The position is referred to as gbest. At each time, the PSO model involves varying the velocity of each particle towards its gbest and pbest. Weighted numbers are utilized to weight the acceleration. In (9) presents the velocity vector [24,25]. The position of every particle is in every iteration, updated utilizing this velocity vector as depicted in (10). The setting of PSO-NN: The population size is 30, the maximum number of iterations is 70, inertia weight w=1, the inertia weight damping ratio is 0.99, the global learning coefficient is c2=2.0 and the personnel learning coefficient is c1=1.5. Objective function and proposed algorithms In this paper the objective function can be defined in (11). The proposed CA-NN algorithm and PSO-NN algorithm of calculated for detected voltage collapse point are implemented as [26,27]: Step 1: collect the input data [Vi, δi, Pd, Qd] and output data (i.e. target) [PTSI]. Step 2: Normalization of data, initialized weights and biases were randomly, initialization of (CA or PSO). Step 3: Create network feed forward ANN, function activated and initial PTSI. Step 4: Determine the objective function value. Step 5: Cheek max iteration is reached (Yes or No), if Yes go to step 6, while if No go to step 2. Step 6: Create network of backpropagation ANN and new weight Step 7: Cheek the objective function (MSE) is min (Yes or No), if Yes go to step 8, while if No go to step 2. SIMULATION RESULTS AND DISCUSSION This paper shows the effectiveness of the proposed algorithms (CA-NN and PSO-NN) in finding the PTSI and testing on the 24 bus Iraqi power system [28,29] to sect proof of the resilience of the proposed methods as demonstrated in Table 1. All the results are obtained using MATLAB programming. When the methods are compared, it is evident that the MSE between them is very small. The CA-NN has less MSE than PSO-NN. The input data and target data, in both cases, are the Hybrid intelligent algorithms' input data. The [Vi, δi, Pd, Qd] from the power flow and the actual values of PSTSI are targets. 
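A minimal PSO-NN training loop corresponding to the velocity update (9), the position update (10), and the parameter settings listed above might look like the following Python sketch. The objective function is passed in (for example, the mse_objective sketched earlier), and the stopping rule is simplified to a fixed number of iterations.

import numpy as np

def pso_nn(objective, dim, n_particles=30, max_it=70,
           w=1.0, w_damp=0.99, c1=1.5, c2=2.0):
    """Train NN weights with PSO using the settings quoted in the paper."""
    pos = np.random.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(max_it):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)   # Eq. (9)
        pos = pos + vel                                                      # Eq. (10)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
        w *= w_damp      # inertia weight damping ratio of 0.99
    return gbest, float(pbest_val.min())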
A MATLAB code was developed to calculate the targets' PTSI. If a bus has a PTSI value equal to or close to 1, it is the most vulnerable in the system. The index can be used to find areas of weakness in the system which need attention [30,31]. The hybrid CA-NN mentioned above is a very good algorithm that can be used to solve constrained optimization problems. It can support general tools for constrained optimization problems as well as express, store, and integrate constraint knowledge and field knowledge. The 24-bus Iraqi network demonstrates that voltage collapse can be reached, but under this condition the voltage takes unacceptable values of less than 0.9 p.u. As shown in Figure 4, after 60 iterations the CA-NN algorithm's best solution on the 24-bus system is about 4.4×10⁻⁴, while the PSO-NN algorithm's best solution after 60 iterations is about 1.04×10⁻³. The regression rate is R = 0.99736 for CA-NN and R = 0.99634 for PSO-NN, as shown in Figures 5 and 6, respectively. With errors of 4.0323×10⁻⁴ and 8.2547×10⁻⁴, respectively, as shown in Table 1, the PTSI values are almost the same. CONCLUSION Voltage instability problems are related to increasing loading, weak networks, and long transmission lines. In this paper, MATLAB code for the hybrid intelligent algorithms (CA-NN and PSO-NN) and the classical NR method is implemented to compute the PTSI as a voltage assessment method. First, the actual PTSI values calculated from the NR method are used as the targets, and the input data to the hybrid intelligent algorithms are [Vi, δi, Pd, Qd]. The 24-bus Iraqi power system was the system on which the methods were tested. For every method utilized, a ranking of the power transfer stability index was performed. The accuracy and speed of the hybrid methods utilized are the most important factors in their function as voltage assessment tools for power systems. After comparing the methods, it was found that the PTSI is an effective and efficient indicator of voltage stability. The results obtained demonstrate that there is a match between the input and output data (targets). It is proven that the hybrid methods are effective. It is also shown that the CA-NN and PSO-NN methods are fast and efficient analyzers, with the PSO-NN behaving like the CA-NN to within a small error.
3,743.6
2021-08-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Investigation of Potential of Solar Photovoltaic System as an Alternative Electric Supply on the Tropical Island of Mantanani Sabah Malaysia This article reports on the potential use of a photovoltaic solar system on Mantanani Island. This island has its attractions in terms of flora and fauna as well as the uniqueness of its local community. The electricity supply status of the island is minimal, and the local electricity provider provides only two electrical generator units that supply energy only from 18:00 to 06:00. This study is motivated by the hypothesis that if the target residents can obtain a better electricity supply, they can generate higher income and improve their standard of living. This study aims to identify the status of solar energy sources, estimate the basic electrical load, and conduct a techno-economic analysis of homestay enterprises of residents. Geostationary satellite data on solar energy resources were gathered and analyzed using Solargis. The electricity load was calculated based on the daily routine activities of the residents and the usage of primary electrical appliances. The techno-economic analysis was done by determining the key parameters to calculate the return on investment and payback period. The results showed that Mantanani Island had great potential for implementing a photovoltaic system, with estimated values of the total annual solar energy and peak sun hours of 1.447 MWh/m²/y and 4.05 h, respectively. The variation in total monthly solar energy was minimal, with a range of only 61.3 Wh/m². The calculated electrical load was 7.454 kWh/d. The technoeconomic assessment showed that the return on investment was MYR 3600 per year. However, the value of the payback period varies according to the value of the cost of capital spent. Regarding the cost of capital of this study, the shortest and longest payback periods achievable were 2.78 and 13.89 years, respectively. This calculation is in line with a photovoltaic system with a capacity of 2.2 kWp. Introduction Solar energy is a renewable energy that is easy to obtain. This energy can be divided into three categories, namely electricity (photovoltaic "PV"), thermal, and photovoltaic-thermal (PVT) [1]. The PV system is used to supply electricity and is divided into grid-connected, off-grid (standalone), and hybrid. The thermal system utilizes the heat from solar radiation for purposes such as drying [2,3]. A study by Evangelisti et al. found various types of collectors in the thermal solar system [4]. The PVT system is more complex, involving both outputs (electricity and heat), and is widely used in research to obtain the best balance [5]. The standalone solar system has become the top choice for remote places far from the grid network, such as islands. Despite requiring a relatively high initial cost compared to other energy resources, easily accessible solar resources with simple maintenance are the main factors in selecting this energy resource. A preliminary study on solar energy the specific tragedy in Bucharest that has crippled the Romanian government [17]. This incident has directly harmed the country's energy sector. The implementation of solar energy can be seen as more than just an environmentally friendly alternative energy source. There is potential for solar energy to be a tool for alternative development by rejuvenating former mining areas that are now included in world heritage sites in Romania and rejuvenating former poor or less-favored regions in China [18][19][20].
A study using the smart energy system concept conducted by Cabrera et al. on Lanzarote Island in Spain aimed to determine the contribution of the solar system combined with wind towards the existing energy system [21]. This study reported that the solar-wind system could contribute between 5.14 and 24.6% of total energy generation, which is equivalent to 35% of electricity demand in 2018. Mialhe et al. found that the use of SARAH-E data with measuring stations between 2011 and 2015 showed a difference of around 15% for diurnal-seasonal variation [22]. The difference between the coastal and mountainous areas was 100 W/m², and the island area had 20% lower solar resources than the value of the nearby seas. The study was conducted on Reunion Island in France by exploring solar energy sources using satellite-derived Solar Surface Radiation Heliosat-East (SARAH-E) data. Another study conducted by Kumar et al. in the Andaman and Nicobar Islands of India evaluated the performance of a PV solar system with a 10 kWp capacity, as shown in Figure 1 [23]. The results found that the annual capacity factor and performance ratios were 13.71-14.61% and 64.70-64.93%, respectively. In Bangladesh, Saim and Khan investigated the solar system on Hatiya island and found that 83% was for lighting and 17% for both lighting and telephone chargers [24]. The use of this solar system became complicated due to the frequency of changing the lights and charge controllers. Other studies have been conducted by simulating the use of solar energy in island mode. The study included a control on the inverter that showed output distortion below 5%, as recommended by IEEE 519-1992 standards [25]. Furthermore, Choi et al. showed that a floating photovoltaic simulation integrated with hydroelectric systems minimizes battery consumption and stabilizes and maximizes power output based on demand response [26].
This study evaluated the suitability of solar energy on Mantanani Island, Malaysia, which is an island with great potential to become a tourist attraction. This 203.7-hectare island is located in the state of Sabah at coordinates 06°42′23′′, 116°21′28′′. The island group is divided into three, namely Mantanani Besar, Mantanani Kecil, and Lungisan. Figure 2 shows the position and cluster of the Mantanani islands. The number of residents of this island is estimated at 1200 people, most of whom are Ubian. Their primary sources of income are from working as fishermen, working at resorts on the island, and working as homestay operators. There are 17 resident-owned homestays and nine resorts identified, but the global COVID-19 disaster has affected their operations. All infrastructure and residential facilities are only available on Mantanani Besar. Among the facilities available on the island are schools, a security forces camp, a communication tower, and a lighthouse and generator set. Mantanani Island, which has various attractions, is in the state of Sabah. The increase in tourists visiting Sabah was 13.5% over 6 years [27]. The tourists were both foreign and local. Foreign tourists were mainly from China, Brunei, Korea, Chinese Taipei, the United Kingdom, and Ireland. The statistics and projections of foreign and local tourists can be viewed in Table 1. The survey results found that one of the attractive factors is that 88.7% of these foreign tourists came to visit for ecotourism such as forests, beaches, and oceans, including the islands [28].
On Mantanani Island, one study has identified six main aspects of local people's attractions that can become tourist attractions: cuisine, handicrafts and carpentry, traditional games, life skills, dance and music, and celebrations and festivals [27]. These attractions can be the island's unique factor that cannot be found elsewhere. The attraction in terms of marine life can be shown based on studies of green turtles and sea cucumbers [29][30][31]. The geological uniqueness of this island has also attracted researchers studying morphological changes [32] and the relation between coastal changes and the monsoon, namely beach morphology changes during the northeast and southwest monsoons at Mantanani Besar Island, Sabah [33]. The status of the electricity supply on this island is limited. Merely 157 houses have electricity supply facilities from two sets of generators supplied by Sabah Electricity Sendirian Berhad, a local electricity supplier. Each generator set has a capacity of 92 kVA. Figure 3 shows the generator set provided by the local electricity supplier. Electricity is only supplied from 18:00 to 06:00. Resorts around the island have their own generator sets with capacities between 50 and 200 kVA [28]. To diversify and increase their income, some locals have started homestay businesses. These businesses are seen to have good potential based on the very encouraging response. A report showed a high increase in tourists of 13.5% per annum, and the projection for 2025 was about 6.5 million [27,28].
Homestay customers also increased, as some of them stay in homestays owned by the locals. The record of homestay visitors could not be determined accurately due to the absence of a visitor record system. Due to certain constraints, the energy supplier could only provide a 12-h electricity supply. This study proposes that if these residents were provided with a 24-h electricity supply, various beneficial activities would improve their lives and the economy. Understanding the situation has prompted the study of the solar energy potential on the island and marks the beginning of the implementation of solar energy in the future. The use of solar energy in Malaysia is still at a low level, although this energy source is stable at high concentrations [34]. High solar energy prices contribute to this situation compared to other sources, such as diesel. The price of solar energy is high because Malaysia has a high cost of living and low oil prices compared to all its neighboring countries, except Brunei. This disadvantage has led consumers to prefer oil-based fuel over solar [35]. However, the situation changed after the Malaysian government implemented policy and incentives toward using green energy, such as the feed-in tariff [36] and Net Energy Metering Schemes [37]. Solar energy is gaining more attention as solar energy prices become more competitive, driven by oil price volatility. Some reports show that there is an increase in the value of solar power generation. Yet, the solar power generated is small compared to the total mix of power generation [38,39]. Moreover, most of the solar power in the data is contributed by solar farms or on-grid solar generation. Therefore, the authors see this situation as a research gap to study the potential use of solar energy on the island. This potential is described in Section 2. Not many studies have focused on the feasibility of off-grid solar energy on islands. A study concluded that the islands in Sabah, including Mantanani Island, have good potential for solar energy, but they must be hybridized with other energy sources [40]. This study was conducted using Homer software. A study by Lau et al. evaluated several factors such as diesel price and interest rate [35]. The research was conducted to identify the conditions that economically allow diesel generators to be replaced with a solar system. For diesel prices of USD 0.61/L and an interest rate of 0-3%, the optimal approach is a combination of PV and diesel. However, if the diesel price increases (USD 1.22/L or more), the implementation of PV systems is more dominant.
Ashourian et al. reported a feasibility study on using the solar system for an island in Malaysia with two different load conditions from tourists and locals [41]. They showed that a 200-kW solar system and 40 kW of wind energy could accommodate both rated loads. However, this result is more economical if the diesel price reaches MYR 2.10/L and above. Methodology The scenario of limited electricity supply on the island has become an obstacle to various activities that can help improve the economy and the living standards of the local community. In line with the problem statement and motivation of the study, the objective of this study was to evaluate the solar energy resources on the island. In addition, this study will determine the basic electrical load for a house and create a technoeconomic analysis simulation for a local homestay enterprise. A summary of the methodology of this study is shown in Figure 4.
All information on solar energy resources was obtained from Solargis, the same source used by Doorga et al. [10] in their study for validation. Solargis calculates solar resource characteristics using data from geostationary satellites and a meteorological model. It is done by considering the penetration of solar radiation through the atmosphere to the ground surface. The study found that PV solar panels would produce optimal output when the surface is perpendicular to the sun's rays and at optimal temperature [34,[42][43][44]. Understanding the annual sun path and azimuth is essential to determine whether the solar system to be built requires a tracking mechanism. In addition, the need for this mechanism will involve costs that will affect the subsequent cost analysis. Next, the information would be processed and translated into normal distribution values of solar radiation for each hour, day, and year. The results of this analysis will help with the process of sizing the PV solar system. The information that would be observed includes the highest radiation intensity, the difference in radiation distribution per month, the amount of annual solar energy, and the time of the annual peak hour. Electrical loads were determined by simulating the use of essential electrical appliances. The lives of the residents of this island do not depend on the electricity supply. However, with the availability of electricity, their lives will get better.
Questionnaires were distributed among the residents to identify the essential electrical appliances they needed and the timing of their use. A total of 100 questionnaires were distributed and answered with the help of researchers. The primary purpose of this questionnaire is to identify the estimated household energy load. This load is used to determine the daily load profile, daily load value (kWh/day), and peak load (kW). The questions are shown in Table 2, and the target respondents are as follows: • Age: 18 years old and below (20%), 18-30 years old (40%), 30 years old and above (40%); • Gender: male (50%), female (50%); • Mandatory for all homestay operators. The energy load for each electrical appliance and the total energy load per day were determined using Equation (1), L = n × P × t (with P = I × V), where L = load of the electrical appliance (Wh), n = number of electrical appliances, P = power of the electrical appliance (W), t = usage period (h), I = current of the electrical appliance (A), and V = voltage of the electrical appliance (V). From this, the pattern and total daily electrical load required could be estimated. Energy balance was described based on the source of solar energy, the PV output from the solar system, and the electrical load. The solar energy source or annual direct average irradiation (DNI) value was 1.48 MWh/m². The annual DNI value was calculated by adding up the monthly DNI obtained from Solargis. The annual output of the PV system was 3.21 MWh, from a 2.2 kWp solar system with an average efficiency of 15.1%. The monthly efficiency η_month and annual efficiency η_annual were estimated using Equation (2), η_month = Q_month / (DNI_month × A_pv) × 100, and Equation (3), η_annual = (Σ_n η_n,month) / 12, respectively, where Q_month = monthly energy output (Wh), DNI_month = monthly normal irradiance (Wh/m²), A_pv = PV panel area (m²), and η_n,month = monthly PV efficiency (%). The A_pv value was fixed at 14.58 m². Electrical load indicates the energy required based on the power rating and period of electricity consumption of each electrical item per day. This determination is a continuation of Equation (1). The monthly load was estimated using Equation (4), monthly load = L × D, where L = daily load (Wh) and D = days of the month, while the annual load was used in the energy balance analysis. PV output was assessed monthly, with minimum and maximum values of 238.0 kWh and 317.6 kWh, respectively. The annual output was 3.21 MWh, which was matched with the annual consumption. The estimated annual load value was 2.72 MWh. The match between the PV energy supplied and the energy consumption is essential to ensure an adequate energy supply throughout the year. This match is determined by the percentage of energy use calculated using Equation (5), Energy use = (consumption / PV energy supply) × 100. Energy balance analysis can be conceptually referred to in Figure 5 and is described in detail in Section 3.
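The following Python sketch shows how Equations (1)-(5), as reconstructed above, fit together, from the appliance-level daily load through monthly PV efficiency to the percentage of PV energy used, and also derives peak sun hours from the annual irradiation. The appliance list and the monthly DNI/output series are illustrative placeholders; only the 14.58 m² panel area and the roughly 1.48 MWh/m² annual DNI reflect values quoted in the text, and treating Equation (3) as a simple average of the monthly efficiencies is an assumption.

```python
# Sketch of Equations (1)-(5) as reconstructed above.  Appliance data and the
# monthly DNI/output series are illustrative; A_PV = 14.58 m2 and the ~1.48
# MWh/m2 annual DNI are the values quoted in the text.

# Eq. (1): daily load of one appliance type, L = n * P * t  (with P = I * V)
def appliance_load_wh(n, power_w, hours):
    return n * power_w * hours

# hypothetical appliance list: (count, power in W, hours of use per day)
appliances = [
    (6, 15, 5),    # lights
    (3, 60, 10),   # fans
    (1, 100, 24),  # mini-fridge
    (1, 120, 6),   # TV
    (4, 10, 4),    # phone chargers
    (1, 650, 3),   # other appliances, e.g. kettle (placeholder)
]
daily_load_wh = sum(appliance_load_wh(*a) for a in appliances)

A_PV = 14.58  # m2, PV panel area (fixed in the text)

# Eq. (2): monthly PV efficiency in percent
def monthly_efficiency(q_month_wh, dni_month_wh_m2):
    return q_month_wh / (dni_month_wh_m2 * A_PV) * 100

# Eq. (3): annual efficiency taken here as the mean of the monthly values
def annual_efficiency(monthly_effs):
    return sum(monthly_effs) / len(monthly_effs)

# Eq. (4): monthly load from the daily load and the number of days
def monthly_load_wh(daily_wh, days):
    return daily_wh * days

# Eq. (5): percentage of the PV energy supply that is consumed
def energy_use_percent(consumption_wh, pv_supply_wh):
    return consumption_wh / pv_supply_wh * 100

# illustrative monthly DNI (Wh/m2) and PV output (Wh); not the paper's data
dni_month = [110_000, 115_000, 130_000, 128_000, 125_000, 120_000,
             122_000, 124_000, 121_000, 123_000, 118_000, 112_000]
pv_out_month = [238_000, 250_000, 317_500, 300_000, 290_000, 280_000,
                285_000, 288_000, 282_000, 286_000, 270_000, 240_000]
days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

effs = [monthly_efficiency(q, d) for q, d in zip(pv_out_month, dni_month)]
uses = [energy_use_percent(monthly_load_wh(daily_load_wh, n), q)
        for q, n in zip(pv_out_month, days)]

annual_dni_kwh_m2 = sum(dni_month) / 1000
print("daily load :", round(daily_load_wh / 1000, 3), "kWh/d")
print("annual eff.:", round(annual_efficiency(effs), 1), "%")
print("energy use :", [round(u, 1) for u in uses], "%")
# Peak sun hours follow from the annual irradiation at 1000 W/m2:
print("PSH        :", round(annual_dni_kwh_m2 / 365, 2), "h/day")
```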
Figure 5. Energy balance of a 2.2 kWp solar system. The technoeconomic analysis in this study calculated the effect of the solar system usage on the homestay enterprises of the residents of Mantanani Island. First, the enterprises' revenue or return on investment (ROI) of the homestay was determined based on the solar system operating cost, the homestay operating cost, and the annual income. Annual income was determined by the annual number of visitors, the overnight rental rate, and the number of days of stay. Finally, the annual ROI was calculated using Equation (6), ROI = (T × R × D) − (CO_pv + CO_hs), where T = annual number of visitors, R = overnight rental rate, D = number of days of stay, CO_pv = solar system operating cost per year, and CO_hs = homestay operating cost per year. Then, the payback period (PP) was evaluated based on the cost of capital and the annual income, as in Equation (7), PP = CC / ROI, where CC = capital cost. The value of PP was expected to lie in a range because the cost of the solar system varies, as described in Section 3. Results and Discussion Figure 6 shows the average direct value of monthly irradiation for each hour. In general, the shape of the graph for each month is the same, where the reading starts at around 06:00, then rises until it reaches a high value at around 12:00, then decreases and ends around 18:00. The highest and the lowest month graphs were recorded in April and January, respectively. Based on Figure 6, which is the contour diagram of the average value of direct normal irradiation for each hour interval, the highest peak reading, observed in April at the 11:00-12:00 interval, is 691 Wh/m². The lowest peak reading, in January at the 11:00-12:00 interval, is 408 Wh/m². In addition, the sunset is relatively late from February to August, when the sun is up during the 06:00-19:00 interval, compared to September to January, when the sun sets within the 05:00-18:00 interval.
The amount of solar energy or direct average irradiation was 1477 kWh/m²/year, and the peak sun hour value (PSH) was estimated to be 4.05 PSH. A total of 1477 kWh of energy can be obtained from every square meter in a year. The PSH value corresponds to 4.05 h of solar radiation intensity at 1000 W/m². These values are essential as a reference if a solar system, whether PV, thermal, or PVT, is to be developed in this area. Based on the questionnaire findings, all respondents think that an electricity supply is essential in daily life, which indicates a high level of awareness among the islanders. In addition, they also know and have enjoyed an electricity supply. As much as 95% of respondents think that an electricity supply is mandatory, while 5% think the opposite. We found that all respondents in this minority group were among the elderly. They have lived without electricity for a long time before and feel that it is not mandatory. Seventy-eight percent of respondents think that the electricity supply should be upgraded to 24 h instead of 12 h. They argue that if there is a 24-hour electricity supply, many activities can be done. Prolonged periods of electricity supply are required for fans, televisions, and refrigerators. Twelve percent of respondents say the electricity supply is enough at 12 h now and does not need to be supplied for 24 h. Financial factors are the main factors explaining why this group thinks so. They are worried that they will not be able to pay for electricity if it is supplied continuously, and that its consumption will also increase. Another 10% of respondents could not give a solid answer and did not care whether the supply was 12 h or 24 h. None of the respondents knew what solar energy was. The population has not had exposure to solar energy all this time. All respondents did not care whether solar energy was supplied as long as it could be used, just like a conventional power supply. The homestay operators think that their homestay business will prosper if the electricity supply is provided for 24 h to provide comfort to their customers. Questions 12-14 are information query questions translated in Table 3. The residents of this island have been living without electricity for a long time. Even with the limited energy supply from local suppliers through generators, residents can continue their lives without electricity. Therefore, the electrical load calculation in this study was only based on essential electrical appliances and a minimal usage period. This electrical appliance selection focuses on lighting, ventilation, fresh food storage, and communication. After conducting a brief questionnaire on the island, the list of electrical appliances, estimated appliance power, and usage period is shown in Table 3.
This PV system for supplying electricity to loads consists of PV panels, a charge controller, a battery, an inverter, and the load. Solar radiation that falls on the surface of the PV panel is converted into electrical energy. This energy is then used to charge the battery, which serves as an energy storage medium. The charge controller controls the charge and discharge process and protects the system from any damage. Among the errors or accidents that can occur and cause damage are polarity errors, short circuits, overcharge, and over-discharge. The inverter converts DC from the battery to AC to be used by electrical appliances. This standalone system is shown in Figure 8.
A simulation of each electrical appliance's usage was conducted based on the daily activities of the residents. Minimum power consumption was 100 W from 23:00 to 10:00. During this period, the residents were sleeping or working in the morning. The men would be going to sea to fish, the mothers would be doing housework, and the children would be going to school. This minimal energy consumption was contributed by a mini-fridge that operates 24 h. The consumption increased slightly during the period 11:00 to 18:00 and was contributed by the use of fans. The surroundings of the house would be hot and require fan ventilation. Often the fishermen would return home at this time. Consumption continued to increase after 18:00 and reached a maximum load consumption of 372 W from 19:00 to 22:00. Almost all electrical appliances were turned on, including lights, air conditioners, TVs, and telephone chargers. The values of electrical load according to time can be observed graphically in Figure 9. Based on the estimates from this simulation, the minimum, maximum, and total daily electrical load values are 100 W, 372 W, and 7.454 kWh/d, respectively.
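To make the link between the hourly profile just described and a daily energy figure explicit, the sketch below integrates an hourly load curve over 24 h. The hourly wattages only mimic the described shape (a roughly 100 W overnight base from the mini-fridge, an afternoon rise from fans, and a 372 W evening peak); they are placeholders rather than the actual Table 3 / Figure 9 schedule, so the total they give is not the paper's 7.454 kWh/d.

```python
# Integrating an hourly load profile into a daily energy figure.  The hourly
# wattages below only mimic the described shape; they are placeholders, not
# the appliance schedule of Table 3 / Figure 9.

hourly_load_w = (
    [100] * 10          # 00:00-09:00  fridge only
    + [150]             # 10:00        first appliances switched on
    + [220] * 8         # 11:00-18:00  fans during the hot afternoon
    + [372] * 4         # 19:00-22:00  evening peak: lights, TV, chargers
    + [100]             # 23:00        back to base load
)
assert len(hourly_load_w) == 24

daily_energy_kwh = sum(hourly_load_w) / 1000   # 1 h per sample, W -> kWh
peak_w = max(hourly_load_w)
base_w = min(hourly_load_w)

print(f"base {base_w} W, peak {peak_w} W, daily {daily_energy_kwh:.3f} kWh/d")
```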
Energy balance analysis shows that the PV efficiency ranges between 13.2 and 17.2% and the PV energy consumption between 72.1 and 97.1%. The average efficiency of the PV system is 15.1%. This average value is acceptable because it aligns with values from other studies [45][46][47][48][49]. The percentage of energy consumption fluctuates each month. Because the size of the PV system and the electrical load are constant, the change in the percentage value of energy consumption is due to the monthly solar energy intensity. Figure 10 shows the PV efficiency and PV energy consumption (in %) throughout the year. Based on Figure 10, the most critical month is January, when almost all the energy supplied is consumed, at 97.1%. Figure 11 shows the PV energy supply and energy consumption. The PV output is the total value of energy produced by the PV system each month, and the PV energy consumption is the total monthly load. Figure 11. Energy balance of the 2.2 kWp PV system. March had the highest PV energy output and surplus energy levels, with 317.5 kWh and 86.5 kWh, respectively, and the high PV output contributes to this surplus. The balance between energy production and consumption is acceptable. The design and size of the solar system should be optimal and should not have too much excess energy, which can lead to oversizing. However, at the same time, energy consumption should not exceed the limit of energy production, which would result in an energy shortage. Figure 11 shows that the PV system in this study can supply continuous energy throughout the year optimally. Pricing for each parameter involved in the calculation of the annual ROI and PP is essential. This analysis is based on the lowest income, number of visitors, and minimum frequency for a year. A study on the island found that a homestay had at least one room to accommodate two visitors. The rental rate is MYR 80 per person for one night. The operating costs of the PV system, homestay operations, and the PV system capital costs are shown in Table 4, and the PV system cost breakdown is shown in Table 5.
Table 4. Costs assessed in the technoeconomic analysis of the PV system on the study island: (1) capital, MYR 10,000-50,000; (2) PV system operation, MYR 1200; (3) homestay operation, MYR 2880. The PV system corresponds to the rated load estimate for the 2.2-kW system size. Yet the findings showed that installation costs vary depending on the component quality, profit-taking by the contractor, lack of price uniformity by authorities, high transportation costs, and the limited expertise of installers. Therefore, this study set the installation cost or capital in a sensitivity range, that is, every MYR 10,000 starting from MYR 10,000 up to MYR 50,000. Operating costs, including maintenance, were estimated at MYR 100 per month because the PV system was straightforward and required minimal maintenance. Based on Equation (6), the calculated ROI is MYR 3600 per year. Next, the PP was calculated in the form of a capital cost range sensitivity. The lowest and highest cost of capital values were MYR 10,000 and MYR 50,000, and the corresponding PP values were 2.78 and 13.89 years, respectively. These values can be observed in the graph shown in Figure 12. Each intersection with the ROI line determines the PP for each capital cost that has been set. These results show that if the homestay owners on the island can reduce their capital costs to as low as MYR 10,000, they will profit after 2.78 years. This period will increase in proportion to the increase in capital cost.
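As a numerical check on the figures above, the sketch below applies Equations (6) and (7) with the quoted operating costs (PV MYR 1200/yr, homestay MYR 2880/yr) and the MYR 80 per-person nightly rate, sweeping the capital cost from MYR 10,000 to 50,000. The visitor figures (48 visitors staying 2 nights, i.e., 96 rented person-nights per year) are back-calculated placeholders chosen so that the annual income reproduces the reported ROI of MYR 3600; they are not survey data.

```python
# Equations (6) and (7) applied with the costs quoted in the text.  The visitor
# numbers are back-calculated placeholders chosen to reproduce the reported
# ROI of MYR 3600 per year; they are not survey data.

R = 80        # MYR, overnight rental rate per person (from the text)
T = 48        # assumed annual number of visitors (placeholder)
D = 2         # assumed number of nights per stay (placeholder)
CO_PV = 1200  # MYR/year, PV system operating cost (Table 4)
CO_HS = 2880  # MYR/year, homestay operating cost (Table 4)

# Eq. (6): annual return on investment = annual income - operating costs
roi = T * R * D - (CO_PV + CO_HS)

# Eq. (7): payback period = capital cost / annual ROI
for cc in range(10_000, 50_001, 10_000):
    pp = cc / roi
    print(f"capital MYR {cc:>6,} -> payback {pp:5.2f} years")

# With these inputs roi == 3600, so the sweep reproduces the 2.78-13.89 year
# payback range reported for capital costs of MYR 10,000-50,000.
```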
Another scenario was analyzed by considering an increase in rates and in the number of rental days. The cost of PV system installation is based on Table 4, with minimum and maximum values of MYR 37,036 and 56,036, respectively. The increase in homestay revenue is 2% annually. This value is based on reports which stated a 13.5% increase in tourists in 6 years, and 88.7% of these tourists would choose islands as their tourist destination [27,28]. Estimates of profit and ROI can be seen in Figure 13. Both lines represent the installation cost of the solar system at the minimum and maximum values. The gradients of the two lines are the same and represent the annual gains. Figure 13 shows that the PP values for the minimum and maximum installation costs are 10.1 and 15.2 years, respectively. The PP for a solar system is important, as it shows how quickly the overall cash flow recovers the investment [57]. Observations show that the variation in the PP value is due to the profit value of the service or product produced and the number of annual operations. Operating costs do not significantly impact the PP, but the study gives minimum and maximum PP values of 10.1 and 15.2 years, respectively. These PP values are categorized as long, as they are over 5 years. Homestay rental rates are small compared to the overall cost of the solar system. The PP value for this system can be shortened in several ways, such as by increasing the rental rate per night and the number of rental days. The value of PP can also be shortened if the homestay business owner can reduce the overall cost of the solar system. Conclusions This study reported the status of solar energy resources on Mantanani Island and identified the basic electricity load and the technoeconomic analysis of local homestay enterprises. The island's location close to the earth's equator gives an advantage specifically in terms of sun path and azimuth. Based on the calculations, the annual solar energy and PSH values were 1477 kWh/m²/year and 4.05 PSH, respectively. The electrical loads gave priority to minimal electrical appliance necessities such as lighting, ventilation, fresh food storage, and telecommunications. The total daily load was found to be 7.454 kWh/d. The technoeconomic analysis was performed based on the minimum income for each sensitivity range determined. This range was based on the cost of capital ranging from MYR 10,000 to 50,000. The results showed that the payback period was directly proportional to the cost of capital. As a projection, profits will be achieved after 2.78 years. However, the period may be longer if the capital is higher.
In line with the Malaysian Government policy, as detailed in the Green Technology Master Plan Malaysia 2017-2030, renewable energies are critical areas that the government needs to address. It is projected that by 2030, Malaysia will achieve 30% of power provided from renewable energy sources. Hence, the proposed method in this paper will be able to facilitate the government's future direction. The contribution of this study has provided a clear picture of the potential use of solar energy on the tropical islands of Malaysia. This potential is refined through solar energy source information, questionnaires, and technoeconomic analysis. The effect of this study proves the need for solar energy and its impact on new economic activities that are gaining popularity, namely homestay businesses. In addition, this report has also given an overview of the incurred cost and the cash flow of the homestay business if the proposed solar system is to be installed. This trend is shown in the results of the discussion of the technoeconomic analysis.
12,654.6
2021-11-11T00:00:00.000
[ "Environmental Science", "Engineering" ]
Photo-Induced Vertical Alignment of Liquid Crystals via In Situ Polymerization Initiated by Polyimide Containing Benzophenone Vertical alignment of liquid crystal (LC) was achieved in an easy and effective way: in situ photopolymerization of dodecyl acrylate (DA) monomers initiated by polyimide based on 3,3′,4,4′-benzophenonetetracarboxylic dianhydride and 3,3′-dimethyl-4,4′-diaminodiphenyl methane (BTDA-DMMDA PI). The alignment behavior and alignment stabilities were characterized by a polarizing optical microscope (POM), which showed a stable vertical alignment after 12 h of thermal treatment. The chemical structures, morphology, and water contact angles of alignment films peeled from LC cells with and without DA monomers were analyzed by means of a Fourier transform infrared spectrometer (FTIR), a scanning electron microscope (SEM), and a contact angle tester, separately. The results confirmed that the DA monomers underwent self-polymerization and grafting polymerization initiated by the BTDA-DMMDA PI under ultraviolet irradiation, which aggregated on the surfaces of PI films. The water contact angles of the alignment films were about 15° higher, indicating a relative lower surface energy. In conclusion, the vertical alignment of LC was introduced by the low surface free energy of PI films grafted with DA polymer and intermolecular interactions between LC and DA polymers. Introduction A liquid crystal (LC) alignment layer is a crucial component of liquid crystal displays (LCDs), which has a great influence on the LCDs' optical and electrical performance in terms of view angles, response time, and voltage holding ratio, among others. Pretilt angle and anchoring energy governed by the chemical and topological structures of alignment layers are two critical factors in adjusting the characters of LCDs. The vertical alignment mode (VA mode), which needs a pretilt angle above 88 • has many advantages, including a high on-axis contrast ratio, a wide viewing angle, satisfactory cost, and simultaneous applicability of reflective and transmissive mode over other alignment modes such as twist nematic mode and in-plane switching mode [1,2]. Therefore, VA mode has received much research attention and has been adopted into many types of LCDs, ranging from minor-sized cell phones to large-sized televisions and other devices [3]. In order to achieve perfect vertical alignment, many methods have been taken to control the pretilt angle, such as rubbing vertical alignment [4][5][6][7][8][9], polymer-sustained vertical alignment (PSVA) [10][11][12], and photo-induced vertical alignment [13][14][15][16][17]. Among these alignment methods mentioned above, the PSVA technology showing strength through its fast response, high transmittance, and simple manufacturing process [18] has been widely investigated. Generally, this technology is conducted as follows: the LC cell containing LC and UV-curable monomers is UV-irradiated under a voltage larger than the Freedericksz transition voltage, and the pretilt angle is fixed by the polymer networks formed during UV irradiation. Improved electro-optical properties and image quality, with a higher light transmittance, a lower rising time, and a lower operating voltage, were reported in PSVA LCDs. Many kinds of UV-curable monomers were used to realize PSVA, including reactive mesogen [1,[19][20][21], long alky monoene, and polyene [22,23]. In the meantime, photoinitiators were added to obtain a fast reaction rate. 
However, the photoinitiators became impurity ions that caused image sticking when they remained after photopolymerization [24]. UV-curable monomers that polymerize without additional initiators, such as 4,4′-diacryloyloxybiphenyl [12] and phenanthrene-carrying monomers [25], have been used to solve this problem, as reported previously. Furthermore, Kang et al. [26] discovered that the pretilt angle of a homogeneous-alignment polyimide (PI) film could be controlled using a photocurable monomer (NOA65) without a photoinitiator. Inspired by these results, we proposed using a photosensitive PI as a photoinitiator to initiate a long-alkyl monoene and obtain a uniform and stable vertical alignment. Benzophenone (BP) is an efficient photoinitiator and has been adopted for surface grafting modification through hydrogen abstraction [27][28][29]. Further, Yu et al. [30] found that a PI containing BP groups could induce homogeneous alignment of LC after polarized UV irradiation via intermolecular crosslinking initiated by BP. Therefore, a PI containing BP groups can be targeted as a polymer initiator. In addition, in order to obtain a higher reaction rate, the diamine 3,3′-dimethyl-4,4′-diaminodiphenyl methane (DMMDA) was used as a hydrogen donor for photoinitiation. In this work, the PI (BTDA-DMMDA PI) synthesized through the polycondensation of 3,3′,4,4′-benzophenonetetracarboxylic dianhydride (BTDA) and DMMDA served as a polymer photoinitiator, and dodecyl acrylate (DA) was grafted onto the PI film to generate a uniform vertical alignment. Furthermore, the chemical structure and morphology of the PI films peeled from LC cells with and without DA monomers, as well as the alignment behavior and its thermal stability, were characterized and analyzed. This provides a vertical alignment method with a simple procedure that is free of additional micromolecular initiators.

Synthesis of Poly(Amic Acid)
Polyimide (PI) was prepared via a typical two-step method: synthesis of poly(amic acid) (PAA) and subsequent thermal imidization. Specifically, 1.00 mmol DMMDA was charged into a 50 mL three-necked flask, and 4.94 g of NMP was added to dissolve the DMMDA under magnetic stirring. Exactly 1.00 mmol BTDA was added after the DMMDA dissolved completely. The reaction was conducted under N2 atmosphere for 4 h at room temperature to obtain the viscous PAA solution. Subsequently, another 5.48 g of NMP was added to dilute the PAA solution to 5 wt % to obtain a proper viscosity for spin-casting on ITO glass.

Preparation of Liquid Crystal Cells
The ITO glass was washed with 3 wt % NaOH aqueous solution, detergent, and alcohol successively and dried at 120 °C in an oven for 3 h. The PAA solution was spin-coated onto the ITO glass at a rotation speed of 600 rpm for 9 s and 2500 rpm for 30 s. Then, the coated ITO glass was heated on a plate heater at 80 °C for 30 min, 120 °C for 30 min, 180 °C for 30 min, and 230 °C for 1 h in turn to achieve imidization of the polyimide based on 3,3′,4,4′-benzophenonetetracarboxylic dianhydride and 3,3′-dimethyl-4,4′-diaminodiphenyl methane (BTDA-DMMDA PI) films. Two pieces of coated ITO glass were rubbed with a rubbing machine (TianLi Co. Ltd., Guangdong, China) and assembled in the antiparallel rubbing direction with a cell gap of 40 µm, which was set by an adhesive film spacer. DA was mechanically mixed with LC at weight ratios of 2/98, 1/99, 0.5/99.5, and 0/100 under magnetic stirring for 4 h at room temperature.
The mixtures were charged into the cells by capillary action at 95 °C on a plate heater, and the cells were maintained at 95 °C for another 20 min to eliminate the flow effect. The cells were irradiated with unpolarized UV light (OSRAM 300 W, OSRAM, Munich, Bayern, Germany) for 0.5 h, and the distance between the light and the cells was 10 cm.

Characterization
The alignment performance of the LC was characterized by a polarizing optical microscope (POM) (Shanghai Millimeter Precision Instrument Co. Ltd., Shanghai, China) and a pretilt angle tester (Changchun Institute of Optics, Fine Mechanics and Physics, Changchun, China). The Fourier transform infrared (FTIR) spectra of the alignment layers were recorded with a Nicolet 560 FTIR spectrometer (Thermo Nicolet Corporation, Madison, WI, USA) to determine the chemical structures. Scanning electron microscopy (SEM) photographs were taken with a Quanta 250 scanning electron microscope (FEI, Hillsboro, OR, USA) under an acceleration voltage of 20 kV to characterize the surface morphologies of the alignment layers. The alignment layers were carefully peeled off from the cells by soaking in acetone and deionized water for 30 min each and washing several times to completely remove LC, unreacted monomers, and homopolymers prior to FTIR testing. The alignment layers peeled off from the cells were named DA-0.5, DA-1, and DA-2, separately, while DA-0 referred to the BTDA-DMMDA PI film, as shown in Table 1. The contact angles of the alignment layers were measured with a contact angle meter (DSA100, Kruss, Hamburg, Germany), and the total surface free energy was calculated with method-EOS. For the SEM and water contact angle measurements, the cells were disassembled and washed with acetone, i.e., the alignment layers still adhered to the glass. The contrast ratios of the LC cells were determined by a ZKY-LCDEO-2 liquid crystal electro-optic effect comprehensive tester (Chengdu Century Zhongke Instrument Co., Ltd., Chengdu, China). The cells were UV irradiated and thermally annealed at 120 °C for 30 min before the test. For comparison, a regular PSVA-mode cell supplied by Yantai Xianhua Chem-Tech Co., Ltd. (Yantai, China) was measured using the same method.

Analysis of Chemical Structures of Polyimide
FTIR is useful for comparing the chemical structures of alignment layers before and after UV irradiation. As shown in Figure 1A, spectrum a showed the characteristic absorption peaks of the initial BTDA-DMMDA PI (DA-0) without photoirradiation at 1779, 1727, and 1376 cm−1, ascribed to the symmetric stretching vibration of C=O, the asymmetric stretching vibration of C=O, and the stretching vibration of C-N in imide groups, separately [31]. The peak of C=O in BP, which provides the photoinitiating sites, was located at 1670 cm−1. The breathing vibration of aromatic rings near 1505 cm−1 [32] remained the same before and after UV irradiation, which made it a proper internal standard for measuring the reaction degree of C=O in BP. It is well known that BP initiates free radical polymerization through hydrogen abstraction, as mentioned above. Hydrogen abstraction will break the C=O in BP and generate -OH [30].
Therefore, as shown in Figure 1A, -OH stretches (weak) appeared at 3342 cm−1, and the peak intensity of C=O in BP at 1670 cm−1 decreased after photoirradiation, which provided support for photoinitiating C=O in BP. The peak intensity of C=O at 1721 cm−1 was increased after UV irradiation, due to the grafting of DA. Previous work [33] reported that the peak could be fitted with Lorentzian functions to estimate the reaction degree of C=O in BP. Similarly, the peaks of the FTIR spectra were fitted with Lorentzian functions prior to comparing the amounts of grafting of DA between cells containing different weight ratios of DA monomer, as shown in Figure 1B. The values of S1720/S1505 and S1670/S1505 were displayed in Table 1. The PI in the four LC cells added with different weight ratios of DA showed different intensity increase ratios of the peak centered at 1721 cm−1 and intensity decrease ratios of C=O in BP. Generally, the peak at 1721 cm−1 increased and the peak at 1670 cm−1 decreased with the increase in DA monomer weight ratios. This may have resulted from the increased reaction probabilities of DA monomers grafted onto PI films in LC cells with more weight ratios of DA.
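To illustrate the peak-area bookkeeping behind the S1720/S1505 and S1670/S1505 ratios, the sketch below fits single Lorentzians to a synthetic spectrum and forms an area ratio; the band positions follow the text, while the data, widths, heights, and the single-peak model are illustrative simplifications of the multi-peak fit actually used.

```python
# Sketch of fitting a Lorentzian to an FTIR band and forming an area ratio such as
# S1720/S1505. Data here are synthetic; peak centres follow the text, everything
# else (widths, heights, single-peak model) is an illustrative assumption.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, height, center, hwhm):
    return height * hwhm**2 / ((x - center)**2 + hwhm**2)

def peak_area(height, hwhm):
    # Analytic area of the Lorentzian above: pi * height * hwhm
    return np.pi * height * hwhm

wavenumber = np.linspace(1400, 1800, 2000)
# Synthetic spectrum: C=O band near 1721 cm-1 and aromatic-ring band near 1505 cm-1.
spectrum = (lorentzian(wavenumber, 0.80, 1721, 12)
            + lorentzian(wavenumber, 0.30, 1505, 9)
            + np.random.default_rng(0).normal(0, 0.005, wavenumber.size))

areas = {}
for name, center in (("S1720", 1721), ("S1505", 1505)):
    window = np.abs(wavenumber - center) < 40   # fit a narrow window around each band
    popt, _ = curve_fit(lorentzian, wavenumber[window], spectrum[window],
                        p0=(0.5, center, 10))
    areas[name] = peak_area(popt[0], abs(popt[2]))

print("S1720/S1505 =", round(areas["S1720"] / areas["S1505"], 2))
```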
Alignment Behavior of Liquid Crystals
The orthoscopic and conoscopic (inset) POM graphs of LC cells with different weight ratios of DA are shown in Figure 2. The LC cells with 2%, 1%, and 0.5% DA showed a dark state under orthogonal polarization after photoirradiation. Moreover, a dark cross brush in the conoscopic POM graphs indicated that the LC aligned vertically. The pretilt angles of the cells with DA monomers were 89.7°, which also demonstrated vertical alignment. However, the cell with 2% DA showed some light spots, which may be due to the incomplete phase separation of DA homopolymers in the LC [34]. Compared with the cell with 2% DA, the cells with 1% and 0.5% DA both revealed a much better dark state. This is likely owing to the smaller amount of self-polymerized DA polymer in cells with a lower monomer concentration.

Figure 3A shows that, after thermal annealing at 120 °C for 30 min, the cell with 2% DA exhibited a better dark state, proving that the light spots were caused by incomplete phase separation. In addition, the dark states of the cells with 1% and 0.5% DA also improved. Moreover, the contrast ratios of the LC cells were determined. The results showed that the contrast ratios (Table 1) of the LC cells with 1% and 0.5% DA were above 400:1 and similar to the regular PSVA-mode cell (408:1). The LC cell with 2% DA had a relatively lower contrast ratio, which may be due to scattering by DA polymer protrusions. This was consistent with the POM results.

In order to confirm that the BTDA-DMMDA PI could initiate the photopolymerization of DA, two contrast experiments varying the structures of the diamine and the dianhydride were carried out. The DA monomers in cells with BTDA-ODA PI films and ODPA-ODA PI films scarcely polymerized, as the POM graphs showed no signs of vertical alignment after the same period of photoirradiation, as shown in Figure 4. This was probably due to UV absorption by the PI films and the lack of an initiator. These results further prove that the monomers were initiated by BTDA-DMMDA PI, and the initiating process was consistent with previous reports [28,29], as shown in Figure 5. The reactions in and on BTDA-DMMDA PI were the intermolecular crosslinking and the grafting of DA. Taking the above facts, the POM photographs, and the FTIR spectroscopy results into account, it was concluded that the DA monomers were mainly photoinitiated by BTDA-DMMDA PI and grafted onto it, in concurrence with self-polymerization in minor portions.
Thermal Stability of Alignment
Because the running of LCDs was an exothermic process, the thermal stability of alignment films should be studied. The LC cells were heated on the plate heater at 120 °C for 12 h to check the thermal stability of the vertical alignment, and the POM graphs of the heated cells were recorded and are shown in Figure 6. The POM graphs of all LC cells with different DA weight ratios exhibited a satisfactory dark state without disorders of LC after 12 h of thermal heating, which indicated satisfactory thermal stabilities.
However, the cell with 2 wt % DA monomers (Figure 6A) showed small protrusions, which probably resulted from the complete phase separation and aggregation of copolymerized DA monomers. In comparison to the cell with 2 wt % DA monomers, the cells with 1 wt % and 0.5 wt % DA monomers exhibited a smooth surface. This fact provided further evidence for the speculation that incomplete phase separation was responsible for the light spots in the cells containing 2% DA before thermal annealing (Figure 2A).

Surface Morphology of Alignment Layers
SEM is an effective method for investigating the morphology of alignment films. The alignment films (the bottom ones) were taken from disassembled LC cells, and the LC and DA homopolymers were removed before the test. The SEM graphs of the different cells are depicted in Figure 7. In order to obtain a clear overview of the DA polymers grafted onto the BTDA-DMMDA PI films, 1200× magnification was used for DA-1 (Figure 7C) and DA-0.5 (Figure 7D). The alignment films in the cells without DA monomers showed smooth surfaces; comparatively, the films in cells with DA showed surfaces that were quite rough and covered with many grains [22]. Moreover, the films from cells with different weight ratios of DA monomers showed different polymer particle densities. The polymer particles on alignment layers DA-2 and DA-1 were aggregated and larger; by contrast, the particles on DA-0.5 were dispersed and smaller. Overall, the polymer particles became larger and denser with increasing weight ratio of DA monomers in the cells.
Consequently, the particles were probably DA-grafted polymers, which were inclined to aggregate on the PI films.

Contact Angles of Polyimide Alignment Layers
The surface wettability, shown in Figure 8, was determined with a contact angle tester to give a preliminary explanation of the mechanism of the vertical alignment. The contact angle of the film without photoirradiation was 79.6°, and this film showed homogeneous alignment after rubbing. In contrast, the BTDA-DMMDA PI films from cells with DA monomers exhibited relatively high contact angles; the specific values for DA-2, DA-1, and DA-0.5 were 96.4°, 97.5°, and 94.8°, separately. The high contact angles were induced by the DA polymers grafted onto the PI film, which made the surface quite rough and hydrophobic.
Furthermore, the total surface free energies calculated with method-EOS exhibited that the intact PI film took on the highest total surface free energy of 35.73 mN/m. This result revealed that the DA monomers grafted onto the PI film caused an elevation of contact angles and a reduction of total surface free energy, contributing to the vertical alignment of LC [26,33]. In addition, the LC molecules with a rod-like shape bear alkyl groups at one end and benzene rings at the other end. Therefore, the DA polymers with long alkyl groups were able to interact with the alkyl groups of the LC molecules, which was another important factor leading to vertical alignment.
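As an illustration of how a surface free energy can be extracted from a water contact angle with an equation-of-state approach, the sketch below solves the Neumann equation of state for the measured angles; the paper does not specify which EOS variant was used, and the water surface tension and β constant are standard literature values rather than values taken from the study.

```python
# Illustrative estimate of solid surface free energy from a water contact angle
# using Neumann's equation of state. The paper's "method-EOS" may differ; the
# water surface tension and beta constant below are standard literature values,
# not values from the study.
import math
from scipy.optimize import brentq

GAMMA_WATER = 72.8      # mJ/m^2, surface tension of water near room temperature (assumed)
BETA = 0.0001247        # (m^2/mJ)^2, Neumann EOS constant (assumed)

def neumann_residual(gamma_sv, theta_deg, gamma_lv=GAMMA_WATER, beta=BETA):
    lhs = math.cos(math.radians(theta_deg))
    rhs = -1.0 + 2.0 * math.sqrt(gamma_sv / gamma_lv) * math.exp(-beta * (gamma_lv - gamma_sv) ** 2)
    return lhs - rhs

def surface_energy(theta_deg):
    # Solve for gamma_sv inside a physically reasonable bracket (1 .. gamma_lv).
    return brentq(neumann_residual, 1.0, GAMMA_WATER - 1e-6, args=(theta_deg,))

for label, theta in (("DA-0", 79.6), ("DA-0.5", 94.8), ("DA-2", 96.4), ("DA-1", 97.5)):
    print(label, round(surface_energy(theta), 1), "mJ/m^2")
```

With these assumed constants, the 79.6° angle of the intact PI film gives roughly 36 mJ/m2, close to the 35.73 mN/m reported above, while the DA-grafted films come out markedly lower.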
Conclusions
Vertical alignment of LC was easily achieved through in situ photopolymerization of dodecyl acrylate (DA) monomers initiated by BTDA-DMMDA PI. The dark state under orthogonal polarizers and the dark cross brush under the conoscopic microscope were clearly observed, indicating vertical alignment. In addition, the vertical alignment remained stable after 12 h of thermal treatment. The morphology and chemical structures of PI films peeled from cells with and without DA monomers revealed grafting and self-polymerization of DA monomers. In addition, the contrast experiments with different dianhydrides and diamines also provided support for the grafting of DA monomers onto BTDA-DMMDA PI. The vertical alignment induced by the DA-grafted PI films with low surface free energy was easily obtained and thermally stable, providing a novel method for vertical-alignment-mode LCDs without extra small-molecule photoinitiators.

Conflicts of Interest: The authors declare no conflict of interest.
7,158.2
2017-06-01T00:00:00.000
[ "Materials Science" ]
THE SIMULATION OF GRANULAR PARTICLE ON DRY AND MOISTURIZED POROUS HORIZONTAL SURFACES Simulations were carried out to visualize the ratio of granular attachment to porous surfaces. This simulation uses a uFlex three-dimensional simulation using three sizes of porous surface systems in the condition of the smallest human pores and the most extensive human pores and the condition of wet skin and dry skin. Each system was tested using five granular particle sizes according to the range of the makeup granules’ size to determine the optimal adhesive. The results show that the number of cosmetic granular particles entering the porous surface system is directly proportional to the porous surface volume and moisture and inversely proportional to the granular cosmetic size. The larger the cosmetic granular used, the less granular enters the pore. INTRODUCTION The development of simple particle geometry simulations has been carried out in analyzing granular material [1]. Granular particle simulation is done by neglecting the effect of microscale geometric configuration on the macroscopic scale response. The mechanical behavior of granular particles is generally studied by considering the contact properties of the particles that occur [2]. Preliminary research has been carried out, including numerical simulation techniques using the discrete element method (DEM) to analyze the undrained shear behavior of sand containing dissociated hydrate gas [3], characteristics of the internal flow structure of microscopic movements of coarse particles in the pipe [4], and simulate the dispersion process of active pharmaceutical ingredients (API) after collision with powder inhalers used for healing lungs [5]. Granular computing is inspired by structured thinking, structured problem solving, and structured information processing [6]. The wide distribution of data, simulations, and complexity makes granular computing seen as an interdisciplinary study of computing. The granular computing system will produce valuable identification into the underlying macroscopic structure of the granular system [7]. This research simulated a granular system on the application of cosmetics to porous surfaces. A porous surface is intended as a simple form of visualization of pores in humans. Pores can be enlarged in size and amount based on exposure to sunlight and lifestyle, which is an open problem for the skin [8]. The size of cosmetics in the market is very diverse and is taken as samples in the diameter range of 1 to 3 μm [9][10][11]. Recently there has also been an interest in the presence of nanoparticles in products such as cosmetics, which are under 100 nm [10], [12]. Some techniques only report the center point and distribution. Others provide greater detail throughout the detected upper and lower particle sizes. The distribution of particle size can be calculated based on several models: most often as a number or volume/ mass distribution [10]. The sample pores are pores in the human face area (scalp not included) with comparable densities (i.e., 200-300/cm 2 ), have different sizes, and have a diameter of about 5-10 µm [8,13]. Besides, this study also simulates the use of cosmetics on dry and moist skin types. Moist skin means healthy skin, where the balance between sebum and lipids is balanced. The skin will have a natural system for storing water in or on the skin [14]. The primary purpose of developing moisturizers is to restore lipids to the surface of the skin after cleaning. 
The granular attachment simulation will be analyzed using sphere packing with a random close packing system. Sphere packing is the determination of the densest arrangement of particles that do not go out of the space and reach a maximum density φ. Random packing of equal spheres generally has a packing fraction φ ≈ 0.64 [15]. Granular attachment on a porous surface follows the same rule as packing spheres into a larger sphere; however, in this simulation, we use spheres inside a half prolate spheroid (the pore shape). The packing of squares into a larger square began to be studied long ago [16], as did packing spheres into a cylinder [17], but in this research we discuss small spheres in a larger sphere [18]. In this research, the simulation is carried out to find the effect of attaching the granular system to dry and moisturized surfaces. These results are expected to be a reference for subsequent studies.

MATERIALS AND METHODS
Every particle in the system acts as a rigid body and will not break, reflecting the properties of the indestructible cosmetic component. The particle size distribution curve is shown in FIGURE 1, with a grain diameter of 1-5 µm. The granular used is categorized as coarse to fine granular for cosmetic purposes. The coefficient of friction between particles in this simulation was set to 0.5. The coefficient of restitution, assumed for the porous surface and for each particle, was equal to 0.1. The numerical samples of each granular grain had various diameters and contained 32768 particles.

DEM Model
DEM simulation has been used to investigate cosmetic behavior in the process of shipping natural antioxidants from manufacturing to the skin [12], to simulate skin aging and wrinkles as a cosmetic reference [19], to simulate makeup by manipulating the apparent appearance using appropriate cosmetics [20], and to gain a deeper understanding of exposure to airborne particles and droplets during the application of cosmetic products [21]. Xu & Song studied the undrained sliding behavior of sand containing dissociated gas hydrate [3], Ting & Xinzhou examined the internal flow structure characteristics of the microscopic movements of coarse particles in pipes [4], and Ariane & Sommerfeld simulated the collision process of inhaler powders used for lung healing [5]. The falling granular volume of powder is identified based on Equation 1, where each granule is treated as a perfect sphere, and the overall cosmetic volume V_Cosmetic is the product of the granule volume V_Granular and the total number of granules N_G:

V_Cosmetic = N_G × V_Granular.  (1)

Based on this equation, we can identify the granular volume V_Granular and the cosmetic volume V_Cosmetic covered by the particles. The problem that has been a source of attraction for mathematicians and scientists for centuries is the determination of the densest arrangement of particles that do not go out of the space and reach a maximum density φ [22]. Packing fractions φ for sphere packings can be reviewed in TABLE 2. Based on TABLE 2, for equal spheres in three dimensions, the densest packing uses approximately 74% of the volume. For equal spheres, it has only recently been proved that the rhombohedral lattice has the highest possible packing fraction [22,24]. Random packing of equal spheres generally has a packing fraction φ ≈ 0.64 [25].
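A minimal sketch of the volume bookkeeping in Equation 1 follows, assuming each granule is a perfect sphere of the diameters used in the simulation; the output simply tabulates V_Granular and V_Cosmetic for the stated sample size of 32768 particles.

```python
# Sketch of the volume bookkeeping in Equation 1: each granule is treated as a
# perfect sphere, and the total cosmetic volume is the granule volume times the
# number of granules. Diameters follow the 1-5 um range used in the simulation.
import math

N_GRANULAR = 32768  # particles per numerical sample, as stated in the text

def granule_volume(diameter_um):
    return (4.0 / 3.0) * math.pi * (diameter_um / 2.0) ** 3  # um^3

for d in (1, 2, 3, 4, 5):
    v_granular = granule_volume(d)
    v_cosmetic = N_GRANULAR * v_granular          # Equation 1
    print(f"d = {d} um: V_Granular = {v_granular:8.3f} um^3, "
          f"V_Cosmetic = {v_cosmetic:12.1f} um^3")
```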
Based on this theory, a simulation using random sphere packing was carried out with packing fraction φ ≈ 0.64 and 32768 granules. Furthermore, the granular attachment in each pore will be analyzed in the simulation and compared with the results of the calculation.

Skin Pores as Horizontal Surfaces
Pores are the openings of the sebaceous and sweat glands that widen on the surface of the skin. Sebaceous glands have lobular structures, and their size varies from one region to another [25]. Skin appearance, when viewed perpendicularly, consists of several features, namely wrinkles, pores, moles, and spots covering it [25]. The porous surface system is made to match the state of the pores of human skin. The human pore is illustrated in FIGURE 2. In this simulation, three porous cross-sections are made, which are assumed to be pores on human skin. The diameter of each porous hole is an interpretation of the minimum, middle, and maximum diameter of a human pore. The 3D volume of the pores can be determined by identifying the flat-plane equation of an ellipse, which is then rotated about the y-axis. The equation was plotted using DESMOS software, and the prolate ellipsoid formed is shown in FIGURE 3. The prolate ellipsoid was then simulated using GeoGebra software. Furthermore, the expected pore volume was obtained by cutting the ellipsoid into two equal parts, giving a halved ellipsoid as in FIGURE 4. In each simulated pore hole, the diameter is taken in the range of 5 µm to 10 µm, and the pore depth is 10 µm, as illustrated in FIGURE 4. The volume of the truncated prolate spheroid rotated about the y-axis can be found from the volume equation of the plane [29].

Skin Moisturizers
Moisturizing the skin is the most fundamental aspect of human hygiene, which affects health and skin disease [30]. A functioning stratum corneum (SC) is essential for healthy skin [31]. To maintain integrity, the SC uses a number of natural systems to store water in the skin [14]. The primary purpose of developing moisturizers is to restore lipids to the surface of the skin after cleaning. In recent developments, moisturizers have also been developed to distribute active cosmetic ingredients [32][33][34]. Moisturizers must meet four essential consumer needs: make skin smooth and soft, increase skin hydration, improve appearance, and possibly deliver ingredients to the surface of the skin [30]. Dry skin or xeroderma is a widespread problem that can be caused by complex interactions between environmental and individual factors [35]. Xerosis, xeroderma, asteatosis, and "winter itch" have all been used as synonyms for dry skin. Epidermal hydration is determined by the non-invasive use of electronic devices, a multitester that measures resistance based on the well-known fact that hydrated skin has less resistance to current flow than dehydrated skin. The level of hydration of the stratum corneum is assessed by measuring changes in skin resistance and is referred to as galvanic skin response or electric skin resistance. The reported skin resistance in ohms with electrodes [1 cm² in size] was measured 30 minutes and 6 hours after application of the formulation [continued for up to 3 weeks] at 1000 kHz, 10 mA, AC current [36]. Skin characteristics such as moisture, mechanical flexibility, and skin feel can be improved with several personal care products.
These products contain moisturizers, occlusives, lubricants, and emollients to improve texture, reduce rubbing, and increase softness, and fragrances to increase consumer acceptance of the product. Moisturizers as a class include humectants and occlusives. Humectants are substances that, when absorbed, help the skin retain moisture, thus making the skin more supple and softer. Common humectants include glycerin, propylene glycol, pyrrolidone carboxylic acid, sodium lactate, urea, and certain natural lipid mixtures [37]. In this study, the moisturizer category used is lotion. Lotions are emulsions containing hydrophilic and hydrophobic ingredients. Oil-in-water (O/W) emulsions are the most popular for moisturizer use; however, emulsifiers are responsible for many of the problems associated with moisturizers as they can also solubilize intercellular lipids [30]. Moisturizers can make the skin feel smoother, a property known as emolliation. Cracks and crevices between desquamated corneocytes are filled with moisturizers, thereby reducing the roughness of the skin. Moisturizers also reduce skin friction and increase lubrication [38,39]. The right time and method for using moisturizers play an important role in optimal efficacy [40]. The vehicles compare as follows: lotions suit the face and body and normal skin, are less greasy, and spread more easily; creams form a thicker film as oil-in-water emulsions with a higher viscosity than lotions and are harder to spread; ointments form the thickest film, contain no water, and suit the hands and feet and barrier-disrupted, diseased skin.

RESULTS AND DISCUSSION
A large collection of non-overlapping solid particles in d-dimensional Euclidean space R^d is called a packing. The packing density φ is defined as the fraction of the space R^d covered by the particles. The problem that has been a source of attraction for mathematicians and scientists for centuries is the determination of the densest arrangement of particles that do not go out of the space and reach a maximum density φ [22]. A lattice in three-dimensional space R^3 is an infinite set of points generated by a set of discrete translation operations (defined by linear integer combinations of basis vectors of R^3). Lattice packing is a packing in which the centers of mass of the non-overlapping particles are located at the lattice points, each oriented in the same direction. The three-dimensional space R^3 can then be geometrically divided into identical regions F called fundamental cells (the granules are treated as fundamental cells in this study), each of which contains the center of mass of only one particle. Thus, the number of granules in the container is given by Equations 12 and 13 [22,41], where V_Oblate is the volume of a particle, V_Particle is the volume of a fundamental cell, and the packing density is φ = 0.64 (close random packing), as shown in TABLE 5. Based on the theoretical calculations, simulations were performed using granules on porous surfaces. The simulation demonstrates granular attachment to dry and moist porous systems. The simulated falling granules have five different sizes: 1 μm, 2 μm, 3 μm, 4 μm, and 5 μm, whereas the simulated pores have three different diameters: 5 μm, 7.5 μm, and 10 μm (TABLE 5). In the next step, the skin and the granules were simulated using Unity3D. FIGURE 5 shows the final condition of 1 μm granular attachment on a porous surface. The data in FIGURE 14 show that the number of granular attachments for all three pores is zero.
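Before interpreting this result, it helps to have the theoretical sphere-packing benchmark the next paragraph compares against. The sketch below estimates how many granules of each size could fit in each pore at φ = 0.64, assuming the pore is half a prolate spheroid with equatorial radius equal to half the pore diameter and a 10 µm depth; that half-spheroid reading of the truncated prolate volume is an assumption, not a formula quoted from the paper.

```python
# Theoretical upper bound on how many granules fit in one pore, assuming the pore
# is half a prolate spheroid (equatorial radius = pore diameter / 2, depth 10 um;
# this half-spheroid reading of the "truncated prolate" is an assumption) and a
# random-close-packing fraction of 0.64.
import math

PHI = 0.64          # random close packing fraction
PORE_DEPTH = 10.0   # um

def half_prolate_volume(diameter_um, depth_um=PORE_DEPTH):
    a = diameter_um / 2.0                       # equatorial semi-axis
    return (2.0 / 3.0) * math.pi * a * a * depth_um

def sphere_volume(diameter_um):
    return (4.0 / 3.0) * math.pi * (diameter_um / 2.0) ** 3

for pore_d in (5.0, 7.5, 10.0):
    counts = [int(PHI * half_prolate_volume(pore_d) / sphere_volume(g))
              for g in (1, 2, 3, 4, 5)]
    print(f"pore {pore_d} um, granule counts for d = 1..5 um:", counts)
```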
The simulation in FIGURE 13 produces zero attachment because of the large coefficient of restitution: when a granule enters the pore, it is reflected back out. This situation is caused by the skin being too dry when cosmetics are applied; to handle this, a moisturizer is needed to make the coefficient of restitution as small as possible so that the surface can accept the sticking of cosmetics. The comparison between the theoretical sphere-packing calculation and the simulation results is given in TABLE 11, and the corresponding graph is shown in FIGURE 15. These results are in line with preliminary research showing that the pore diameter and the granular restitution coefficient influence the number of particles entering the porous surface system [42]. Based on the comparative analysis with theory, the simulation performed on dry skin is still far from the minimum sphere-packing value, because the simulated granular attachment still experiences excessive restitution on the dry skin surface. A further simulation therefore adds a moisturizer that minimizes restitution. With the moisturizer, the simulation gives better cosmetic attachment than on dry skin. Cosmetic attachment is easier with a moisturizer because it reduces the coefficient of restitution and reduces friction. However, using a moisturizer alone is not enough.
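The restitution argument above can be caricatured in one dimension: a granule dropped into a pore rebounds to e² times its drop height, and if that rebound clears the pore depth the granule is taken to escape. The sketch below is only an illustration of this mechanism with an assumed drop height; it is not the uFlex/Unity3D DEM model used in the study.

```python
# Toy 1-D illustration of the restitution argument: a granule falling from height h
# rebounds to e^2 * h; if the rebound clears the pore depth, assume it escapes.
def captured_in_pore(drop_height_um, pore_depth_um, restitution):
    rebound = restitution ** 2 * drop_height_um   # first rebound height
    return rebound < pore_depth_um                # stays only if it cannot clear the rim

PORE_DEPTH = 10.0    # um, pore depth used in the simulated pores
DROP_HEIGHT = 50.0   # um, illustrative drop height (assumption)

for e in (0.1, 0.5, 0.9):
    status = "captured" if captured_in_pore(DROP_HEIGHT, PORE_DEPTH, e) else "escapes"
    print(f"e = {e}: granule {status}")
```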
3,423
2021-04-29T00:00:00.000
[ "Engineering" ]
The β-cell GHSR and downstream cAMP/TRPM2 signaling account for insulinostatic and glycemic effects of ghrelin Gastric hormone ghrelin regulates insulin secretion, as well as growth hormone release, feeding behavior and adiposity. Ghrelin is known to exert its biological actions by interacting with the growth hormone secretagogue-receptor (GHSR) coupled to Gq/11-protein signaling. By contrast, ghrelin acts on pancreatic islet β-cells via Gi-protein-mediated signaling. These observations raise a question whether the ghrelin action on islet β-cells involves atypical GHSR and/or distinct signal transduction. Furthermore, the role of the β-cell GHSR in the systemic glycemic effect of ghrelin still remains to be defined. To address these issues, the present study employed the global GHSR-null mice and those re-expressing GHSR selectively in β-cells. We here report that ghrelin attenuates glucose-induced insulin release via direct interaction with ordinary GHSR that is uniquely coupled to novel cAMP/TRPM2 signaling in β-cells, and that this β-cell GHSR with unique insulinostatic signaling largely accounts for the systemic effects of ghrelin on circulating glucose and insulin levels. The novel β-cell specific GHSR-cAMP/TRPM2 signaling provides a potential therapeutic target for the treatment of type 2 diabetes. β -cells and insulin release from islets 13,14 . However, the molecular identity of the receptor that is coupled to G i for insulinostatic ghrelin action in β -cells remains to be defined. Presence of unidentified ghrelin receptor has been suggested by the observation that ghrelin exerts some effects in the cells and tissues that do not express GHSR [15][16][17] . In vivo analysis revealed that administration of ghrelin attenuates insulin release and impairs glucose tolerance in rodents and humans 7,11,18,19 . Ghrelin transgenic mice with increased circulating ghrelin exhibited deteriorated glucose tolerance without change in blood glucose levels during insulin tolerance tests (ITT) 20 . Conversely, administration of ghrelin antagonists 7,21,22 and inhibitor 23 of ghrelin O-acyltransferase (GOAT), the enzyme that acylates the third serine residue of ghrelin 24,25 , enhances insulin secretory responses and lowers blood glucose concentrations during glucose tolerance tests (GTT). In consistent with the pharmacological studies, mice lacking gene of ghrelin 21 and those of GOAT 26 showed improved glucose tolerance and enhanced plasma insulin release under normal chow condition. Furthermore, in these ghrelin-knockout (KO) mice and GOAT-KO mice, high-fat diet-induced glucose intolerance was prevented due to enhanced insulin secretory response to glucose. Ablation of ghrelin also improved glucose tolerance and enhances insulin secretion in leptin-deficient ob/ob mice 27 . These findings suggest that the insulinostatic function of ghrelin would affect blood glucose levels, and manipulation of the β -cell ghrelin action could provide a novel tool to optimize insulin release for achieving normoglycemia. Thus, KO studies on the ligand ghrelin have been substantially conducted. In contrast, GHSR-KO mice have been little analyzed for their insulin releasing ability and glucose metabolism, although they reportedly show improved insulin sensitivity when fed a high-fat diet 28,29 . In a gain-of-function study, ghrelin was reported to induce peripheral insulin resistance in humans 30 and glucose output from porcine hepatocytes 31 . 
Thus, whether GHSR, particularly that expressed in β-cells, is implicated in ghrelin's glycemic effect is not fully understood. In this study, we aimed to clarify whether the insulinostatic ghrelin action is mediated by the ordinary GHSR in islet β-cells by using GHSR-null mice. Furthermore, the study explored the role of the β-cell GHSR in the glycemic effects of ghrelin administration and of endogenous ghrelin, as assessed by the effect of ghrelin antagonists.

Results

Ghrelin attenuates glucose-induced insulin release in a GHSR-dependent manner in mouse islets. In wild-type mice, insulin release from isolated islets under static batch incubation was stimulated by 8.3 mM glucose compared to 2.8 mM glucose (P < 0.01, Fig. 1A). The glucose (8.3 mM)-induced insulin release was inhibited by exogenous ghrelin (Fig. 1A), as reported 7. In isolated islets from GHSR-null mice, in contrast, ghrelin (10 nM) failed to attenuate the glucose (8.3 mM)-induced insulin release (Fig. 1A). Furthermore, the glucose (8.3 mM)-induced insulin release per se was significantly larger in GHSR-null mice than in wild-type mice, while basal levels of insulin release at 2.8 mM glucose were not different (Fig. 1A). Insulin content per islet and islet size were identical between wild-type and GHSR-null mice (Fig. 1B,C), suggesting equal β-cell masses. These data indicate that both exogenous ghrelin and endogenous islet-derived ghrelin attenuate glucose-induced insulin release in a GHSR-dependent manner.

Ghrelin inhibits glucose-induced cAMP production via GHSR in mouse islets. In the presence of the phosphodiesterase (PDE) inhibitor IBMX (500 μM), 8.3 mM glucose stimulated cAMP production in islets under static incubation, compared to 2.8 mM glucose (P < 0.05) (Fig. 2). The glucose (8.3 mM)-induced cAMP increase was significantly inhibited by application of ghrelin (10 nM) (Fig. 2). In islets isolated from GHSR-null mice, ghrelin (10 nM) failed to suppress the 8.3 mM glucose-induced cAMP production, demonstrating that ghrelin inhibits cAMP production via interaction with GHSR. In contrast, noradrenaline (1 μM) strongly suppressed cAMP production in islets of both groups (Fig. 2), indicating that α2-adrenergic receptor-mediated Gi/o signaling is intact in the islets of GHSR-null mice.

Ghrelin attenuates glucose-induced TRPM2 activation in a GHSR-dependent manner. The non-selective cation channel (NSCC) currents in mouse β-cells under amphotericin B-perforated whole-cell clamp were measured in the presence of 100 μM tolbutamide to inhibit the KATP channel and thereby exclude involvement of this channel in the currents. At a holding potential of −70 mV, an increase in external glucose concentration from 2.8 mM to 8.3 mM increased the NSCC current in wild-type β-cells in a reversible manner (Fig. 4A). Ghrelin markedly decreased the 8.3 mM glucose-elicited current densities to −0.89 ± 0.33 pA/pF from −2.45 ± 0.47 pA/pF (P < 0.05, n = 5) (Fig. 4B,C). In β-cells from GHSR-null mice, the NSCC currents were increased by 8.3 mM glucose (Fig. 4D) to a level similar to that in wild-type β-cells (Fig. 4C,F) but, notably, the 8.3 mM glucose-induced currents were not altered by ghrelin (Fig. 4E,F).
A previous study 32 demonstrated that this glucose-induced NSCC current is inhibited by 2-APB, a blocker of the transient receptor potential melastatin 2 (TRPM2) channel, and is not elicited in TRPM2-deficient β-cells, indicating that the current passes through TRPM2 channels. As confirmed in Fig. 4G, glucose (8.3 mM) failed to induce an NSCC current in β-cells from TRPM2-KO mice. We examined whether the ghrelin-induced attenuation of TRPM2 currents is involved in the ghrelin action to suppress insulin release. The glucose (8.3 mM)-induced insulin release was significantly lower in isolated islets from TRPM2-KO mice than in those from wild-type mice, while basal insulin release at 2.8 mM glucose was unchanged (Fig. 4H). Moreover, ghrelin failed to attenuate glucose (8.3 mM)-induced insulin release in the TRPM2-KO islets (Fig. 4H). In contrast, noradrenaline (1 μM) inhibited glucose-induced insulin release in islets from TRPM2-KO, as well as wild-type, mice (Fig. 4H). These results suggest that the action of ghrelin to inhibit TRPM2 channel conductance is causally implicated in its insulinostatic action.

Specific re-expression of GHSR in β-cells of GHSR-null/Ins-Cre mice. In order to specifically re-express GHSR in β-cells of GHSR-null mice, GHSR-null mice were bred with rat insulin promoter-driven Cre recombinase (Ins-Cre) mice to obtain GHSR-null/Ins-Cre mice. RT-PCR analysis showed that Ghsr mRNA was expressed in isolated islets and pituitary from wild-type mice, but was not detectable in those from GHSR-null mice. As expected, GHSR-null/Ins-Cre mice exhibited Ghsr mRNA expression in islets but not in pituitary, indicating that GHSR expression was specifically restored in islets of the null mice by breeding with Ins-Cre mice (Fig. 5A). Functional re-expression of GHSR was also examined. In isolated islets from GHSR-null/Ins-Cre mice, ghrelin (10 nM) markedly attenuated the glucose (8.3 mM)-induced insulin release (Fig. 5B) and cAMP production (Fig. 5C). In single β-cells from GHSR-null/Ins-Cre mice, ghrelin significantly suppressed glucose (8.3 mM)-induced [Ca2+]i increases (Fig. 5D,E), in which the suppression of response amplitude was identical to that in wild-type β-cells (Fig. 3A,C).
These data demonstrated that re-expression of GHSR renders β -cells responsive to ghrelin and thereby restores its insulinostatic signaling. Endogenous and exogenous ghrelin regulate systemic blood glucose and plasma insulin levels via β-cell GHSR. To assess the role of endogenous ghrelin, the effects of ghrelin antagonist on systemic glucose and insulin levels were studied in mice fasted overnight. In GTT, when a ghrelin antagonist [D-Lys 3 ]-GHRP-6 (1 μ mol/kg) was intraperitoneally (i.p.) injected simultaneously with 2 g/kg glucose into the wild-type mice, increases in blood glucose at 30 and 60 min were significantly attenuated (Fig. 6A) and the plasma insulin response at 15 min was markedly enhanced (Fig. 6B). Administration of another ghrelin antagonist JMV3002 (0.3 μ mol/kg i.p.) also attenuated the blood glucose increase and enhanced the plasma insulin response in GTT (data not shown). These results revealed the physiological functions of endogenous ghrelin to increase blood glucose and to suppress insulin release in GTT. To examine whether these physiological functions of endogenous ghrelin are mediated by GHSR, GHSR-null mice were investigated. In GTT, the GHSR-null mice exhibited smaller blood glucose increases and larger plasma insulin responses than wild-type mice (Fig. 6C,D). Notably, the glucose and insulin curves in GTT in GHSR-null mice were similar to those in wild-type mice receiving ghrelin antagonist (Fig. 6C,D, open circles vs. Figure 6A,B, filled circles). The area under the curve (AUC) of blood glucose increase for 120 min during GTT in GHSR-null mice was significantly lower than that in wild-type mice and comparable to that in wild-type mice receiving ghrelin antagonist (Fig. 6G). The AUC of plasma insulin increase for 30 min in GHSR-null mice was larger than that in wild-type mice and comparable to that in wild-type mice receiving ghrelin antagonist (Fig. 6H). Moreover, a ghrelin antagonist failed to affect blood glucose and plasma insulin responses in GTT in GHSR-null mice (Fig. 6C,D). Notably, in GHSR-null/ Ins-Cre mice, the glucose and insulin curves in GTT were similar to those in wild-type mice (Fig. 6E,F vs. Fig. 6A,B). Furthermore, administration of [D-Lys 3 ]-GHRP-6 became competent to attenuate blood glucose increases and enhance plasma insulin responses in GTT (Fig. 6E,F). Consequently, the AUC of blood glucose rise was significantly decreased and that of plasma insulin response was significantly elevated to the levels indistinguishable from those in wild-type mice (Fig. 6G,H). Thus, replenishment of GHSR only in β -cells reproduced all the phenotypes of GTT and ghrelin effects on blood glucose and plasma insulin of wild-type mice. Effects of exogenous ghrelin on glucose tolerance were next examined. When ghrelin at 10 nmol/ kg was i.p. injected simultaneously with 2 g/kg glucose, the blood glucose levels at 30 and 60 min were higher in comparison to saline-injected control values (Fig. 7A). In GHSR-null mice, i.p. injection of ghrelin failed to affect blood glucose responses during GTT (Fig. 7B). In GHSR-null/Ins-Cre mice, i.p. administration of ghrelin significantly elevated the blood glucose increases in GTT (Fig. 7C), indicative of restoration of the ghrelin effect on glycemia by selective re-expression of GHSR in β -cells of the GHSR-null mice. The profiles of blood glucose levels during ITT and the HOMA-IR index exhibited little differences between GHSR-null, GHSR-null/Ins-Cre and wild-type mice (Fig. 
7D,E), suggesting that insulin sensitivity was not significantly altered. These results suggest that insulinostatic ghrelin function via β -cell GHSR affects systemic blood glucose levels, and β -cell GHSR is required for both endogenous and exogenous ghrelin action to regulate glucose tolerance. Discussion In the present study, we found that ghrelin failed to affect glucose (8.3 mM)-induced insulin release in islets from GHSR-null mice, while ghrelin inhibited it in islets of wild-type mice. Furthermore, glucose-induced insulin release in GHSR-null islets was greater than those in wild-type islets, while insulin content per islet was unaltered in GHSR-null mice. These results in islets of GHSR-null mice are similar to those reported in ghrelin-KO mice 21 . These findings suggest that endogenous islet-derived ghrelin attenuates insulin release via GHSR. The ability of ghrelin to inhibit glucose-stimulated cAMP productions in islets and [Ca 2+ ] i increases in β -cells were also blunted in GHSR-null mice. These results demonstrate that ghrelin attenuates glucose-induced insulin release and cAMP and Ca 2+ signaling via direct interaction with GHSR in β -cells. We further demonstrated that the blood glucose-increasing effects of endogenous and exogenous ghrelin are mediated predominantly by the GHSR in islet β -cells. In the current study, ghrelin impaired and ghrelin antagonist enhanced glucose tolerance in wild-type mice, confirming our previous report 21 . In global GHSR-null mice, neither ghrelin antagonist nor ghrelin affected glucose tolerance, indicating that endogenous and exogenous ghrelin regulate glucose tolerance via GHSR. To explore the role of GHSR in β -cells, we introduced β -cell-specific re-expression of GHSR in global GHSR-null mice. In these GHSR-null/Ins-Cre mice, GHSR mRNA was re-expressed specifically in the islets, and the ability of ghrelin to inhibit insulin release, cAMP production, and [Ca 2+ ] i increases in islet β -cells was fully restored. In these mice in which GHSR has been rescued exclusively in β -cells, administration of [D-Lys 3 ]-GHRP-6 was able to attenuate blood glucose rises and enhance plasma insulin responses in GTT. Exogenous ghrelin administration also became capable of deteriorating glucose tolerance in these mice re-expressing GHSR in β -cells. Remarkably, these in vivo phenotypes recaptured in GHSR-null/Ins-Cre mice were not distinguishable from those in wild-type mice. These data support that GHSR in islet β -cells primarily mediates the glycemic effect of ghrelin, at least under conditions of glucose challenge. However, our result cannot exclude a possibility that the glycemic effect of ghrelin additionally involves GHSR in other tissues implicated in insulin action [28][29][30] . Previous studies using similar Cre-mediated re-expression in the same GHSR-null line reported that the brain GHSR signaling is implicated in counter-regulatory action of ghrelin against the fasting-induced hypoglycemia 33,34 . Hence, GHSR in the brain might contribute to counter-regulatory action of ghrelin under hypoglycemic conditions. In addition, regulation of glucagon secretion by ghrelin via islet α -cell GHSR 35 would be implicated under hypoglycemic conditions. Precise roles of the β -cell, α -cell and brain ghrelin/GHSR signaling in systemic glucose homeostasis remain to be further studied. It has been well known that the GHSR is coupled to the phospholipase C-linked Gα q/11 family of G-proteins and [Ca 2+ ] i increases 2 . 
Our present results together with previous reports 13,14 clearly demonstrate that ghrelin suppresses glucose-induced insulin release via GHSR in islet β -cells coupled to PTX-sensitive Gα i and attenuation of cAMP production. Although the coupling mechanisms by which GHSR activates G i -proteins and suppresses cAMP cascade in β -cells are still unclear, possible direct coupling of Gα i/o to GHSR has been demonstrated in in vitro GTPγ S assays 36,37 . The conformation of purified monomeric GHSR was altered by ghrelin in the presence of Gα i2 β 1 γ 2 , suggesting an interaction between GHSR and G i under stimulation with ghrelin 38 . Alternatively, it has been reported that GHSR is capable of forming heterodimers with other GPCRs 39-43 and thereby transduces distinct signaling. However, further studies are definitely required to elucidate the mechanisms through which the β -cell GHSR is coupled to Gα i/o . Glucose-stimulated insulin secretion in β -cells is initiated by closure of the K ATP channel, followed by plasma membrane depolarization. In this process, opening of background inward current through NSCCs might facilitate depolarization after K ATP channel closure 32 . We previously reported that the TRPM2 channel, a type of NSCC, in β -cells plays an essential role in glucose-induced and incretin-potentiated insulin secretion 44 . Both glucose metabolism and glucagon-like peptide-1 (GLP-1) receptor stimulation increase the activity of TRPM2 channels via the cAMP signaling 32 . The present study further uncovered a novel role of TRPM2 channels in ghrelin signaling in β -cells. Ghrelin markedly counteracted the glucose (8.3 mM)-induced activation of TRPM2 current in islet β -cells. Furthermore, in islets from TRPM2-KO mice, ghrelin failed to attenuate glucose (8.3 mM)-induced insulin release. These results suggest that ghrelin suppresses glucose-induced insulin secretion at least partly by inhibiting TRPM2 channels. In contrast, noradrenaline inhibited glucose (8.3 mM)-induced insulin release in islets from TRPM2-KO mice, indicating that its insulinostatic mechanism involves the step other than TRPM2, in consistent with the consensus that noradrenaline attenuates multiple steps of insulin release including adenylate cyclase and distal exocytosis machinery 45 . TRPM2 channel activity is increased by cAMP elevating agents such as GLP-1, glucose-dependent insulinotropic polypeptide (GIP) and membrane-permeable cAMP analogues 32 . Hence, ghrelin may decrease membrane excitability by attenuating the cAMP signal-activated TRPM2 channel, thereby suppressing [Ca 2+ ] i signaling and insulin release. Whether other physiologic insulinostatic hormones and neurotransmitters, including somatostatin, neuropeptide Y and noradrenaline, also exert their inhibitory effects via suppression of TRPM2 currents remains to be clarified. In addition, we previously reported that ghrelin enhances voltage-dependent K + (Kv) channel activity partly by suppressing cAMP pathway 13,14 . Possible cooperation between TRPM2 inhibition and Kv channel activation in the ghrelin action in islet β -cells remains to be elucidated. In conclusion, the present study demonstrated that ghrelin attenuates glucose-induced insulin release via direct interaction with β -cell GHSR that is coded by ordinary Ghsr gene but exceptionally coupled to unique cAMP/TRPM2 signaling. Moreover, this interaction with β -cell GHSR largely accounts for the acute effect of ghrelin on the glycemia. 
Circulating ghrelin levels rise before meals and decrease after meal 46 . The reduction in plasma ghrelin level after oral glucose load is impaired in patients with type 2 diabetes 47 . The impaired ghrelin suppression after meals may partly cause impaired insulin secretion and postprandial hyperglycemia in type 2 diabetes. Plasma ghrelin levels are elevated in Chinese Han people with impaired glucose tolerance 48 . Thus, elevated ghrelin-GHSR activity could be causally implicated in type 2 diabetes. Hence, developing methods to specifically intervene the GHSR-cAMP/TRPM2 cascade in β -cells may provide a potential therapeutic tool to treat patients with type 2 diabetes. Methods Animals. Wild-type C57BL/6J mice (Japan SLC, Hamamatsu, Japan), GHSR-null mice (provided from Drs. Jeffrey M. Zigman and Joel K. Elmquist at the University of Texas Southwestern Medical Center) 6,49 and TRPM2-KO mice 32,44 were housed on a 12-hour light/dark cycle in accordance with our institutional guidelines and with the Japanese Physiological Society's guidelines for animal care. KO mice were backcrossed onto a C57BL/6J mice at least for nine generations. The disrupted sequence in genomic DNA from the KO mice was detected using PCR to produce an amplicon with one primer inside the targeting sequence in combination with a gene-specific primer. Male age-matched (10 weeks-old) KO mice and wild-type littermates as controls were used. As the GHSR-null mice were generated by inserting a loxP-flanked transcriptional blocking cassette (TBC) into a Ghsr gene, mating the GHSR-null mice with tissue-specific Cre mice leads to the removal of the loxP-flanked TBC and enables the tissue-specific restoration of GHSR expression 6,49 . We bred the GHSR-null mice with rat insulin promoter-driven Cre recombinase (Ins-Cre) mice (The Jackson Laboratory, Bar Harbor, ME) to obtain GHSR-null/Ins-Cre mice in which GHSR is re-expressed in islet β -cells. All mice were given free access to rodent normal chow and water. Experimental protocols for animal studies were approved by the institutional committee on animal care in Jichi Medical University. Preparation of pancreatic islets and single β-cells. Islets of Langerhans were isolated by collagenase digestion, as reported 7 with slight modification. Mice were anaesthetized by intraperitoneal injection of pentobarbitone at 80 mg/kg, followed by injection of collagenase at 1.14 mg/ml (Sigma-Aldrich, St. Louis, MO) into the common bile duct. Collagenase was dissolved in HEPES-added Krebs-Ringer bicarbonate buffer (HKRB) solution (in mM): NaCl 129, NaHCO 3 5.0, KCl 4.7, KH 2 PO 4 1.2, CaCl 2 2.0, MgSO 4 1.2 and HEPES 10 at pH 7.4 with NaOH, supplemented with 5.6 mM glucose and 0.1% bovine serum albumin (BSA). HKRB solution containing 0.1% BSA was used for measurements of cytosolic Ca 2+ concentrations ([Ca 2+ ] i ) and insulin release but not for patch-clamp study. Pancreas was dissected out and incubated at 37 °C for 16 min. Islets were hand collected under a microscope and were immediately used for the measurement of insulin secretion. For β -cell experiments, islets were dispersed into single cells in Ca 2+ -free HKRB, and the single cells were plated sparsely on coverslips and maintained for 1 day at 37 ˚C in an atmosphere of 5% CO 2 and 95% air in Eagle's minimal essential medium containing 5.6 mM glucose supplemented with 10% fetal bovine serum, 100 μ g/ml streptomycin, and 100 U/ ml penicillin. Measurements of insulin release and cAMP productions in mouse islets. 
For measurements of islet insulin release, groups of 10 islets were incubated for 1 hr at 37 °C in HKRB with 5 mM glucose for stabilization, followed by test incubation for 1 hr in HKRB with 2.8 or 8.3 mM glucose. In cAMP measurements, islets were incubated for 1 hr in HKRB with 500 μ M 3-Isobutyl-1-methylxanthine (IBMX), a phosphodiesterase (PDE) inhibitor (Sigma-Aldrich), to avoid degradation of cAMP in the samples. Ghrelin (Peptide Institute, Osaka, Japan) and noradrenaline (Sigma-Aldrich) was present throughout the incubation. Insulin release and total cAMP productions in islets were determined by ELISA (Morinaga, Yokohama, Japan) and EIA kit (Enzo Life Sciences, NY). Histological analysis of pancreatic islets. Pancreata from wild-type and GHSR-null mice were fixed in 4% paraformaldehyde, and three random sections were generated per pancreas from three mice of each genotype. The sections were incubated overnight with guinea pig anti-insulin antibody (Dako Japan, Tokyo, Japan) at a dilution of 1:1000 at 4 °C. Samples were then incubated with Alexa Fluor 488labeled goat anti-guinea pig IgG (Molecular Probes, Eugene, OR). Immunofluorescence images were then obtained with a fluorescence microscope (Olympus, Tokyo, Japan). Measurements of [Ca 2+ ] i in single β-cells. Dissociated single β -cells on coverslips were mounted in an open chamber and superfused in HKRB. Cytosolic Ca 2+ concentrations ([Ca 2+ ] i ) in single β -cells were measured at 33 °C by dual-wavelength fura-2 microfluorometry with excitation at 340/380 nm and emission at 510 nm using a cooled charge-coupled device camera 7,50 . The ratio image was produced on an Aquacosmos system (Hamamatsu Photonics, Hamamatsu, Japan). Data were taken exclusively from the cells which fulfilled the reported morphological and physiological criteria of β -cells including the diameter and responsiveness to glucose (8.3 mM) and K ATP channel blocker tolbutamide (Tolb) (100 μ M) 51 . For [Ca 2+ ] i measurements, β -cells were prepared from at least three mice in each experiment. Patch-clamp experiments in mouse single β-cells. Perforated whole-cell currents were recorded using an amplifier (Axopatch 200B; Molecular Devices, Foster, CA) in a computer using pCLAMP10.2 software, as reported 32 . For perforated whole-cell clamp experiments, pipette solution contained amphotericin B (200 μ g/mL), 40 mM K 2 SO 4 , 50 mM KCl, 5 mM MgCl 2 , 0.5 mM EGTA, and 10 mM HEPES at pH 7.2 with KOH. For recording of the non-selective cation channel (NSCC) current, cells were voltage-clamped at − 70 mV, which is close to potassium equilibrium potential, and treated with tolbutamide (100 μ M) to inhibit the K ATP channel. After recording the control current in the presence of 2.8 mM glucose, the external bath HKRB solution was changed to a solution containing 8.3 mM glucose. To examine the effects of ghrelin, cells were treated with 10 nM ghrelin 5 min before changing glucose concentrations to 8.3 mM and through the end. After measurements, the voltage-clamped β -cell was identified by insulin immunostaining 32 . Electrophysiological experiments were performed at 27 °C. Intraperitoneal glucose tolerance tests and insulin tolerance tests. An intraperitoneal glucose tolerance tests (GTT) and insulin tolerance tests (ITT) were performed with male GHSR-null mice and wild-type littermates, and GHSR-null/Ins-Cre mice (10 weeks-old) fasted overnight, as previously reported 21 . In GTT studies, 2 g/kg glucose was injected intraperitoneally (i.p.) 
into mice, followed by blood sampling from the tail vein. In ITT studies, insulin (0.75 units/kg) was i.p. injected, followed by collection of blood samples from the tail vein. Blood glucose concentrations were measured using a GlucoCard DIA meter (Arkray, Kyoto, Japan), while insulin concentrations were determined using an ELISA kit (Morinaga Institute of Biological Science). The homeostasis model assessment insulin resistance (HOMA-IR), as a surrogate index of insulin sensitivity, was calculated using the following equation: HOMA-IR = fasting glucose (mg/dL) × fasting insulin (ng/mL)/22.5.
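For readers who wish to recompute the derived indices, the sketch below (Python) implements the HOMA-IR formula exactly as stated in the Methods. The trapezoidal AUC helper is only an assumption about how the incremental glucose and insulin AUCs reported in the Results might be calculated, since the integration method and sampling times are not specified there, and the numbers used are hypothetical.

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_ng_ml: float) -> float:
    """HOMA-IR as stated in the Methods: glucose (mg/dL) x insulin (ng/mL) / 22.5."""
    return fasting_glucose_mg_dl * fasting_insulin_ng_ml / 22.5


def incremental_auc(times_min, values, baseline=None):
    """Trapezoidal area under the curve above baseline.

    Baseline subtraction and the trapezoid rule are assumptions; the paper reports
    AUCs of the glucose/insulin *increase* but does not describe the computation.
    """
    base = values[0] if baseline is None else baseline
    inc = [max(v - base, 0.0) for v in values]
    return sum((inc[k] + inc[k + 1]) / 2.0 * (times_min[k + 1] - times_min[k])
               for k in range(len(times_min) - 1))


# Hypothetical GTT glucose readings (mg/dL) at 0, 15, 30, 60 and 120 min
glucose = [90, 250, 320, 280, 160]
print(homa_ir(100, 0.5))                                  # ~2.2
print(incremental_auc([0, 15, 30, 60, 120], glucose))     # mg/dL x min above baseline
```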
Clinical and cellular features in patients with primary autosomal recessive microcephaly and a novel CDK5RAP2 mutation Background Primary autosomal recessive microcephaly (MCPH) is a rare neurodevelopmental disorder that results in severe microcephaly at birth with pronounced reduction in brain volume, particularly of the neocortex, simplified cortical gyration and intellectual disability. Homozygous mutations in the Cyclin-dependent kinase 5 regulatory subunit-associated protein 2 gene CDK5RAP2 are the cause of MCPH3. Despite considerable interest in MCPH as a model disorder for brain development, the underlying pathomechanism has not been definitively established and only four pedigrees with three CDK5RAP2 mutations have been reported. Specifically for MCPH3, no detailed radiological or histological descriptions exist. Methods/Results We sought to characterize the clinical and radiological features and pathological cellular processes that contribute to the human MCPH3 phenotype. Haplotype analysis using microsatellite markers around the MCPH1-7 and PNKP loci in an Italian family with two sons with primary microcephaly, revealed possible linkage to the MCPH3 locus. Sequencing of the coding exons and exon/intron splice junctions of the CDK5RAP2 gene identified homozygosity for the novel nonsense mutation, c.4441C > T (p.Arg1481*), in both affected sons. cMRI showed microcephaly, simplified gyral pattern and hypogenesis of the corpus callosum. The cellular phenotype was assessed in EBV-transformed lymphocyte cell lines established from the two affected sons and compared with healthy male controls. CDK5RAP2 protein levels were below detection level in immortalized lymphocytes from the patients. Moreover, mitotic spindle defects and disrupted γ-tubulin localization to the centrosome were apparent. Conclusion These results suggest that spindle defects and a disruption of centrosome integrity play an important role in the development of microcephaly in MCPH3. Introduction Primary autosomal recessive microcephaly (MCPH) delineates a genetically heterogeneous and rare subgroup of congenital microcephalies characterized by a pronounced reduction of brain volume, particularly of the neocortex, simplified gyral pattern and intellectual disability [1,2]. Homozygous mutations of the Cyclin-dependent kinase 5 regulatory subunit-associated protein 2 gene, CDK5RAP2 (OMIM*608201), were identified in 2005 as a cause for MCPH type 3 (MCPH3, OMIM#604804) [3]. To date, three different mutations have been identified: two in three Pakistani families and one mutation in a Somali patient: (i) a nonsense mutation in exon 4 (c.246T > A, p.Y82X), (ii) an A to G transition in intron 26 (c.4005-15A > G, p. R1334SfsX5) introducing a new splice acceptor site, a frame shift and a premature stop codon, and (iii) a nonsense mutation in exon 8 (c.700G > T, p.E234X) [3][4][5]. All three mutations have been proposed, but not shown, to lead to a truncated protein and a loss of CDK5RAP2 function. CDK5RAP2 is associated with the centrosome, microtubuli and Golgi apparatus, is enriched in neural progenitors within the ventricular and subventricular zone of the immature brain, can be also detected in glial cells and early neurons, and is strongly downregulated with brain maturation [6,7]. 
One current model for the microcephaly phenotype caused by CDK5RAP2 mutation invokes a premature shift from symmetric to asymmetric neural progenitor cell divisions with a subsequent depletion of the progenitor pool and a reduction in the final number of neurons, and decreased cell survival [6,8]. Underlying mechanisms include a deregulation of the role of CDK5RAP2 in centrosome function, spindle assembly and/or response to DNA damage [6,8]. Despite considerable interest in MCPH as a neural stem cell defect and window into the control of neurogenesis in humans, the underlying pathomechanisms have not been definitively established and specifically for MCPH3, no detailed radiological descriptions of patients or functional analyses in patient samples have been reported to date. In the present study, we report a novel CDK5RAP2 mutation and describe for the first time in detail the clinical, radiological and cellular phenotype in two MCPH3 patients of European descent. We are thereby able to attribute the microcephaly phenotype in MCPH3 at least partially to a mitotic spindle defect and centrosome disorganization. Patients Informed consent was obtained from the parents of the patients for the molecular genetic analysis, the publication of clinical data, magnetic resonance images (MRI) and studies on immortalized lymphocytes (LCLs). DNA was extracted from EDTA blood samples using the Illustra BACC2 DNA extraction kit (GE Healthcare, Munich, Germany). Samples from microcephaly patients and controls were used in this study with approval from the local ethics committees of the Charité and the Freiburg University (approval nos. EA1/212/08 and 494/11, respectively). Haplotype analysis using microsatellite markers Six microsatellite markers were selected for each of the MCPH1 to 7 and PNKP loci, so that three markers were located on each side of each gene. The markers flanking the CDK5RAP2 gene were: CHLC.GGAA23B10, D9S258, D9S2152, D9S103, D9S116 and D9S1823. PCR was performed with 1 ng patient DNA and primer pairs in which the forward primer was always labeled with 6-FAM. PCR fragments were resolved by capillary electrophoresis on an ABI 3100 sequencer. Fragment analysis was performed using GeneScan software (Applied Biosystems, Foster City). Haplotypes were constructed in the family by inspection of the microsatellite fragment lengths. PCR and DNA sequencing Thirty-eight coding exons of the CDK5RPAP2 gene and at least 50 bp of the intronic, exon-flanking sequence were analyzed by PCR (Taq Polymerase, Qiagen, Hilden, Germany), and cycle sequencing using the ABI Prism BigDye Terminator Cycle Sequencing Ready Reaction Kit Version 1.1 (Applied Biosystems, Darmstadt, Germany). Capillary electrophoresis was performed using an ABI 3100 sequencer (Applied Biosystems, Foster City, CA, USA). Sequence data were analyzed using SeqPilot DNA sequence analysis software (JSI, Kippenheim, Germany). The database sequence NM_018249 for the CDK5RAP2 gene was used as reference, and primers were developed in our laboratory (available on request). Protein extraction procedure and Western blot Protein extracts for Western blots were isolated from LCLs by homogenization in radio-immunoprecipitation assay (RIPA) buffer containing 1 mM phenylmethylsulfonyl fluoride (PMSF; Sigma-Aldrich) and 1 protease inhibitor cocktail tablet per 10 ml RIPA buffer (Complete Mini; Roche Diagnostics, Mannheim, Germany), 20 min incubation on ice and centrifugation at 4°C for 10 min at 3000 g and for 20 min at 16000 g. 
Protein concentrations were determined using a bicinchoninic acid (BCA) based assay, according to the instructions of the manufacturer (BCA Protein Assay Kit; Pierce Biotechnology, Rockford, IL, USA). Protein extracts (30 μg per sample) were denaturated in Laemmli sample loading buffer at 95°C for 5 min, separated by sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) and electrophoretically transferred in transfer buffer in a semi-dry fashion using Trans-Blot SD Semi-Dry transfer cell (Bio-Rad, Munich, Germany) onto nitrocellulose membrane (Bio-Rad, Munich, Germany). The membranes were incubated for 1 h at RT in blocking buffer (TBS-T 1x with 5% bovine serum albumin (BSA)), rinsed three times with TBS-T (1x) for 8 min each at RT on a shaker and then incubated overnight at 4°C with rabbit anti-CDK5RAP2 (1:200, HPA035820, Sigma-Aldrich; also verified with antibody from Abnova PAB17507, 1:200), mouse anti-γ-tubulin (1:5,000) or mouse anti-CHK1 (1:1000, Sigma-Aldrich) antibodies. After incubation with the corresponding secondary antibodies donkey anti-rabbit (1:2000; Amersham Biosciences, Freiburg, Germany) and goat anti-mouse (1:10,000; Dako, Hamburg, Germany), the immunoreactive proteins were visualized using a technique based on a chemiluminescent reaction. The gel pictures were obtained with a Bio-Rad imager (Bio-Rad laboratories, Munich, Germany). Western blot experiments were run in triplicate. Phenotype of patients with MCPH3 The first son (Patient 1) was born prematurely to Italian parents who were third cousins ( Figure 1A), at gestational week 35, with a birth weight of 2570 g (exact birth parameters not available). At the age of 3 months, he weighed 4530 g (370 g < 3 rd centile), was 53 cm long (3,9 cm <3 rd centile, -4.1 SD), and had an occipitalfrontal head circumference (OFC) of 33,5 cm (4,8 cm < 3 rd centile, -5.9 SD). Further progression of the OFC is shown in Figure 1B. A closed fontanel, a simian crease, an abdominal hernia and slightly increased muscle reflexes, but no pyramidal signs were observed. Skeletal scintigraphy ruled out craniosynostosis (premature closure of the fontanels) as a cause of microcephaly. The results of routine blood tests including a full blood count, electrolytes, liver, kidney and thyroid parameters, CK and tests for TORCH and metabolic diseases were normal. Clearly defined areas of hyperpigmentation were apparent on the medial side of the right leg, left ankle and left pectoral. Chromosome analysis revealed a normal male karyotype. The results of an ophthalmological investigation as well as ultrasound of the kidneys and hip joints were normal. At age 4,5 months, audioacoustic emissions could not be detected, and brainstem audiometry revealed a slightly elevated absolute threshold of hearing of 35-40 dB. However at age 1 year tests (BERA) were repeated, and his hearing was found to be normal. On cranial MRI, microencephaly, simplified gyral pattern, particularly frontally, and agenesis of the corpus callosum were apparent ( Figure 1C, 5-8). On EEG, the oscillations were slower than expected for age, but epileptic discharges were absent. His initial short stature appeared to become less prominent with age, so that at age 9 years his height was average. At age 8 years the following tests were performed, all with normal results: full blood count, differential blood analysis, glucose, creatinine, CK, LDH, GOT, GPT, TSH, T 4 . 
An IQ test revealed intellectual disability (IQ 50-69) with slight developmental delay in speech and motor functions, and a short concentration span. According to the Munich Functional Development test (MFE II) at age 5 years, his speech and understanding were at an age of 27-36 months and expressive speech was at an age of 25-34 months. Moreover, the boy suffered from a tic disorder manifesting as repetitive blinking, nodding or smacking of the lips. He had behavioral problems, with hyperactivity, bouts of rage and aggression, which were severe enough to necessitate short-term admission to a children's psychiatric hospital at age 11 years. He was socially inept, easily upset and irascible. However, the behavioral problems responded well to treatment with risperidone. The second son (Patient 2) was born in the 40th week of gestation with a birth weight of 3130 g (25th-50th centile), a length of 49 cm (25th-50th centile) and an OFC of 30 cm (2.1 cm < 3rd centile, -3.5 SD). Apgar scores were 9/10/10. Full blood count, differential blood analysis, TORCH and newborn metabolic screening, CK, electrolytes, liver and kidney parameters were all normal. Shortly after birth, intermittent breathing pauses were observed, but they were not observed thereafter. He had dysmorphic features including a sloping forehead, low-set ears, a relatively high-arched palate and simian creases. Similar to his brother, he had relatively large, map-like areas of hyperpigmentation with well-defined borders: four on his inner right leg, five around the ankle, and one on his left pectoral. The further development of the OFC is shown in Figure 1B. Chromosome analysis and tests for lactate, LDH, GOT, GPT and AP at age six months were normal. All further investigations including electrocardiogram (ECG), ultrasound of the cranium, hips, kidneys, adrenals and bladder as well as an ophthalmological examination yielded normal results. Cranial MRI at the age of five months showed microencephaly, simplified gyral pattern particularly of the frontal lobes and corpus callosum hypogenesis (Figure 2C, 9-12). Moreover, an increased space in the posterior fossa, consistent with a megacisterna magna most probably secondary to mild cerebellar hypoplasia, could be visualized (Figure 2C, 10). At age 11 months, the patient had developed normally with respect to motor skills; however, two years later mild motor and intellectual developmental delay was noted, with an index value of 56 for cognitive development on the Bayley Scales of Infant Development. Although he had relatively good speech development, especially considering that he was brought up bilingual, he had behavioral problems similar to his brother's, with temper tantrums and problems with motivation and concentration, and he was not able to attend a regular nursery school. Neither of the patients had had seizures by ages 11 years (Patient 1) and 6 years (Patient 2). The patients' parents did not consent to the publication of photos of the patients. The clinical findings in both patients as well as those of previously published patients are summarized in Table 1.
(Figure 1B legend, truncated: ...of both patients, from age 2 months to 5 years in patient 1 (triangles) and birth to 3 years 10 months in patient 2 (dots); centiles refer to WHO Child Growth Standards [10,11]. The OFC of patient 2 was below −3.5 SD at birth and further decreased to about −6.4 SD by the age of 3 years and 10 months.)
(Figure 1C legend: T1/T2-weighted magnetic resonance images (MRI) of patient 1 at age 2.5 months (5-8) and patient 2 at 3 months (9-12) compared to those of a healthy 3-year-old boy. The reduced size of the brain with extra-axial spaces (5, 7, 9-11), sloping forehead (6, 7, 11), simplified gyral pattern frontally with shallow, wide sulci (5, 7, 9), and corpus callosum agenesis (6) and hypogenesis (10) are apparent.)
A novel CDK5RAP2 mutation in a family with MCPH Haplotype analysis using microsatellite markers revealed that both affected sons in the family were homozygous for a haplotype surrounding the MCPH3 locus, shared by the heterozygous parents, who are third-degree cousins. Possible linkage consistent with compound heterozygosity in both sons was also suggested for the MCPH4, MCPH7 and PNKP loci; however, sequencing of the STIL and PNKP genes did not reveal mutations. Sequencing of CDK5RAP2 showed that both affected sons were homozygous for the mutation c.4441C > T, which results in the nonsense mutation p.Arg1481*; both parents were heterozygous for the mutation (Figure 2). The resulting CDK5RAP2 protein is predicted to carry a truncation that affects the second SMC domain, the pericentrin binding site and the Golgi binding site (Figure 2). Cellular phenotype of patients with CDK5RAP2 gene mutation We investigated the pathogenicity of the identified nonsense mutation in immortalized lymphocytes (LCLs) from the two patients with MCPH3 and from controls. In control LCLs, CDK5RAP2 localized to the centrosomes during each stage of the cell cycle (Figure 3). Consistent with studies in murine cells [12], centrosomal CDK5RAP2 levels were relatively low during interphase, increased in the subsequent prophase and remained high throughout mitosis until telophase, when signals dropped to interphase levels. CDK5RAP2 further accumulates at the Golgi apparatus [13], and we detected a partial colocalization with the cis-Golgi matrix protein GM130 in LCLs during inter- and prophase. In prometaphase, the Golgi apparatus begins to fragment and loses its pericentriolar location close to CDK5RAP2 (Figure 4). In metaphase and anaphase the fragments were still somewhat dispersed in the cytoplasm, but some could already be detected in the proximity of the CDK5RAP2-positive centrosomes. During telophase, cytokinesis separates the two daughter cells, and reassembly of the Golgi apparatus occurs in the centrosomal region of each daughter cell. In CDK5RAP2 mutant LCLs, CDK5RAP2 levels were below detection levels when assessed through immunocytology and western blots using two antibodies that bind to different positions at the C-terminus of full-length CDK5RAP2 (Figure 3). Since the Golgi domain described previously at the C-terminus [13] is predicted to be lost in our patients, we further analyzed Golgi integrity through immunostaining with GM130. GM130 immuno-signal clusters were apparent in interphase cells from patients. However, Golgi fragmentation appeared to occur earlier during mitosis and had disappeared by prophase (Figure 4). Because CDK5RAP2 impacts human brain size and has been associated with progenitor proliferation, we next sought to examine the integrity of the centrosome and the establishment of the mitotic spindle apparatus in patients and controls. CDK5RAP2 colocalized with the centrosomal protein γ-tubulin throughout the cell cycle in control LCLs (Figure 3).
In patient cells where CDK5RAP2 was below the detection level, we did not observe a complete loss of γ-tubulin from the centrosome nor a massive reduction of total γ-tubulin via western blot, but rather a more dispersed γ-tubulin staining around the centrosome (Figure 3). Pericentrin localization was normal in patient cells when compared to control cells ( Figure 5). In addition, spindle defects with an increase of abnormal spindles with broad and unfocused poles of microtubuli (41% and 55% versus 9% of 100 counted metaphase LCLs of patient 1 and 2 versus control; One-way ANOVA, p < 0.001) were detected in CDK5RAP2 mutant LCLs ( Figure 6). There was a trend in patients towards an increase in multipolar spindles (4% and 11% versus 3.5% of 100 counted metaphase LCLs of patient 1 (not significant) and 2 (p < 0.05) versus control; One-way ANOVA) and a decrease of spindle pole distance (5.4 μm and 4.8 μm versus 5.8 μm of 100 counted metaphase LCLs of patient 1 (not significant) and 2 (p < 0.05) versus control; One-way ANOVA) in CDK5RAP2 mutant LCLs ( Figure 6). Also several LCLs from patients showed lagging chromosomes, this was significantly increased in one of the patients (patient 2) and only showed a tendency to be increased in the other patient. CHK1 protein has been shown to be downregulated in Cdk5rap2 mutant cells [14]. Although slightly reduced in both patient cell lines as compared to the control, the difference in the concentration of CHK1 protein was not significant (Figure 7). Discussion In the present study, we have identified the novel nonsense mutation c.4441C > T (R1481*) in the CDK5RAP2 gene in a homozygous constellation in two boys of Italian descent with primary microcephaly (Figures 1 and 2). We thereby, for the first time, provide detailed clinical and radiological information on MCPH3 patients of European descent. The siblings suffer from congenital microcephaly, intellectual disability, speech deficit, a tic disorder and severe behavioral problems. Further tests did not reveal any significant hearing impairment or epilepsy as a cause for the speech deficit. Therefore, although sensorineural hearing loss has been reported in two patients with mutations in CDK5RAP2 [5,15], this is not a consistent finding in MCPH3. Both patients had microcephaly, simplified gyral pattern of the cerebral cortex, with shallow sulci anteriorly and deep sulci parietally and posteriorly, and corpus callosum hypogensis on cMRI. There was no particular evidence of reduced white matter volume in the patients, despite the fact that CDK5RAP2 is expressed in glial cells of the developing rodent brain. Why the white matter is not more severely affected in MCPH remains unclear. Future studies will need to address the question as to what extent white matter disease also contributes to brain size reduction in MCPH patients. It is unclear whether these clinical and radiological features are also present in the previously reported three pedigrees of Pakistani descent with MCPH due to homozygous CDK5RAP2 mutations. In addition to the brain, we recently reported that CDK5RAP2 is widely expressed in various organs of newborn mice and human fetuses with high CDK5RAP2 mRNA and protein levels in the thymus and the kidney [7]. Moreover, it has been reported that in the MCPH3 murine model‚ "Hertwig's anemia" mice display defects in multiple organs including the thymus and also have a hematopoietic phenotype (hypoproliferative anemia, leucopenia, predisposition to hematopoietic tumors) [16]. 
In other MCPH subtypes, individual patients have been reported with short stature (especially in MCPH1 and MCPH5 [17][18][19]), early puberty, renal agenesis and polycystic kidneys [17]. As this point warrants further investigation, we examined the clinical phenotype of our patients in detail with respect to multi-organ involvement. Short stature was detected in both patients up to an age of two years, but thereafter normalized in patient 1, for whom detailed data were available. This normalization of height after infancy (in contrast to the pattern of head growth) is a disease feature that has been reported similarly in patients with ASPM gene mutations [18]. There was no evidence of further organ involvement or malignancy, specifically no anemia or leucopenia and no kidney or thymus abnormality.
(Figure 3 legend: CDK5RAP2 in immortalized lymphocytes and dispersion of the centrosomal protein γ-tubulin in CDK5RAP2 mutant patient cells. CDK5RAP2 protein levels were below detection level in immortalized lymphocytes from both patients with the c.4441C > T CDK5RAP2 mutation when assessed by (A) Western blot and (B) immunocytology. Subcellular localization of CDK5RAP2 and γ-tubulin throughout the cell cycle in immortalized lymphocytes of (C) controls and (D) MCPH3 patient 2. In controls, centrosomal CDK5RAP2 levels were weak during interphase, increased in the subsequent prophase and remained high throughout mitosis until telophase, when signals dropped to interphase levels. In the patient cells the alignment of the chromosomes at the spindle poles was less precise than in the control cells. Cells were stained with CDK5RAP2 (red), γ-tubulin (green) as a centrosomal marker, and DNA is stained with DAPI (blue). Scale bars 10 μm. Western blot results reveal that total γ-tubulin protein levels are similar in patients and controls (Additional file 1: Figure S1).)
The three homozygous mutations in the CDK5RAP2 gene reported so far, 246T > A in exon 4, 700G > T in exon 8 and 4005-15A > G in intron 26, have been proposed, but not shown, to lead to truncated proteins of 82 (Y82*), 234 (E234*) and 1334 (R1334Sfs*5) amino acids, respectively, and a loss of CDK5RAP2 function (full-length protein 1893 amino acids; Figure 1). While the first and second mutant proteins should lack most of the CDK5RAP2 protein except for the N-terminus including a part of the γTuRC-binding domain, or the N-terminus including the complete γTuRC-binding domain and a part of the SMC domain, respectively, the third protein should lack the C-terminus of CDK5RAP2, especially the C-terminal SMC domain as well as the pericentrin and Golgi binding sites. The homozygous nonsense mutation reported here, 4441C > T in exon 30, is predicted to lead to a truncated protein of 1481 amino acids (R1481*). The resulting CDK5RAP2 protein in our patients should lack parts of the second SMC domain as well as the pericentrin and the Golgi binding sites (Figure 1). No studies on patient specimens exist that shed light on the effect of the reported CDK5RAP2 gene mutations. We recently reported high CDK5RAP2 expression in proliferating progenitors of the germinal matrix and in early (not mature) neurons as well as glial cells in the neocortex of murine embryos and human fetuses [7]. This is in concordance with results of neuroimaging studies in MCPH patients with non-CDK5RAP2 mutations demonstrating a reduced brain volume that affects especially the neocortex [17,18].
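As a quick arithmetic check on the genotype-protein relationships summarized above, the short sketch below maps a coding-sequence position onto the affected codon. It reproduces the reported codon numbers for the two published nonsense mutations and the mutation described here; the fraction printed at the end is purely illustrative.

```python
def codon_of(cdna_position: int) -> int:
    """Codon number affected by a 1-based coding-sequence position."""
    return (cdna_position - 1) // 3 + 1


# The published CDK5RAP2 nonsense mutations map onto the reported protein changes.
for pos, protein in [(246, "p.Y82*"), (700, "p.E234*"), (4441, "p.Arg1481*")]:
    print(f"c.{pos}: codon {codon_of(pos)}  ({protein})")

# A stop at codon 1481 leaves roughly 78% of the 1893-residue protein.
print(1481 / 1893)
```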
Based on results from in vivo and in vitro studies, the human MCPH phenotype is considered to be the result of a premature shift from symmetric to asymmetric neural progenitor-cell divisions (with a subsequent depletion of the progenitor pool) as well as of a reduction in cell survival [6,8]. To study the effect of the reported CDK5RAP2 gene mutation on cell proliferation in our patients, we studied EBV-transformed lymphocytes (LCLs) from both of our patients and from controls. Here, CDK5RAP2 localized to the centrosomes during each stage of the cell cycle in controls but was absent from patient cells when assessed via immunocytology and western blots (Figure 3). The latter finding of CDK5RAP2 levels below detection limits in cells of our patients indicates that very little or no protein is present, secondary to nonsense-mediated decay of the mutated transcript. In contrast to Cdk5rap2 shRNAi studies performed on mouse tissues [20], we detected a failure of the centrosomal protein γ-tubulin to localize properly at the centrosome, while total γ-tubulin protein levels were normal in patient cells (Figure 3, Additional file 1: Figure S1). Pericentrin, which interacts with CDK5RAP2 through defined protein domains [21], was not altered in its localization in patient LCLs (Figure 5). This result is in line with those of Buchman et al. 2010 [21], who concluded from their studies in murine tissues that the centrosomal recruitment of pericentrin is not dependent upon Cdk5rap2, while the converse is true. Despite the predicted loss of the C-terminal Golgi domain in mutant [...]
(Figure 6 legend: Spindle defects in CDK5RAP2 mutant patient cells. Subcellular localization of CDK5RAP2 and α-tubulin throughout the cell cycle in immortalized lymphocytes of (A) control and (B) MCPH3 patient 2. In controls, CDK5RAP2 is weak and centrosomal during interphase and shows abnormal spindle formation. In patients, spindle pole formation did not appear to be as precise as in the control cells, with the chromosomes not as uniformly positioned at the spindle poles. Cells were stained with CDK5RAP2 (red), α-tubulin (green) as a spindle marker, and DNA is stained with DAPI (blue). Scale bars 10 μm. (C) Quantification results of abnormal spindles (unfocused α-tubulin staining at spindle poles), multipolar spindles and spindle pole distance.)
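The spindle quantifications reported above (41% and 55% of counted metaphases with abnormal spindles in patients 1 and 2 versus 9% in the control) were compared by one-way ANOVA. A minimal sketch of such a comparison is shown below; the per-replicate values are hypothetical, since only the pooled percentages are reported, so the sketch illustrates the test rather than reproducing the published statistics.

```python
from scipy import stats

# Hypothetical per-experiment percentages of abnormal (broad/unfocused) spindles,
# three independent counts per cell line; illustrative values only.
control = [8, 10, 9]
patient_1 = [39, 42, 41]
patient_2 = [53, 56, 55]

f_stat, p_value = stats.f_oneway(control, patient_1, patient_2)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2g}")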
Computation offloading strategy based on deep reinforcement learning for connected and autonomous vehicle in vehicular edge computing Connected and Automated Vehicle (CAV) is a transformative technology that has great potential to improve urban traffic and driving safety. Electric Vehicle (EV) is becoming the key subject of next-generation CAVs by virtue of its advantages in energy saving. Due to the limited endurance and computing capacity of EVs, it is challenging to meet the surging demand for computing-intensive and delay-sensitive in-vehicle intelligent applications. Therefore, computation offloading has been employed to extend a single vehicle’s computing capacity. Although various offloading strategies have been proposed to achieve good computing performace in the Vehicular Edge Computing (VEC) environment, it remains challenging to jointly optimize the offloading failure rate and the total energy consumption of the offloading process. To address this challenge, in this paper, we establish a computation offloading model based on Markov Decision Process (MDP), taking into consideration task dependencies, vehicle mobility, and different computing resources for task offloading. We then design a computation offloading strategy based on deep reinforcement learning, and leverage the Deep Q-Network based on Simulated Annealing (SA-DQN) algorithm to optimize the joint objectives. Experimental results show that the proposed strategy effectively reduces the offloading failure rate and the total energy consumption for application offloading. etc [5]. This will greatly reduce the environmental pollution caused by automobile exhaust emissions. The development of CAV technology has given birth to a series of computing-intensive and delay-sensitive in-vehicle intelligent applications [6], e.g., autonomous driving [7], augmented reality [8], etc. They typically require large amounts of computing resources. But it is challenging for vehicles to meet the surging demand for such emerging applications, due to the limited endurance and computing capacity of vehicles. In recent years, computation offloading has been employed to extend a single vehicle's computing capacity. The computation offloading methods, based on traditional cloud computing platforms [9], offload computing tasks to cloud computing centers with powerful computing capabilities, effectively alleviating the computing burden on vehicles. However, due to the long transmission distance between vehicles and cloud computing centers, it will not only cause serious service delays, but also lead to huge energy consumption, which can not meet the needs of in-vehicle intelligent applications [10]. To address the above challenges, a new networking paradigm, Vehicular Edge Computing (VEC), has been proposed. VEC deploys Mobile Edge Computing (MEC) servers with computing and storage capabilities in Roadside Units (RSU). This enables CAV applications to be either processed in the vehicles locally, or offloaded to other cooperative vehicles or RSUs within the communication range for processing. This paradigm opens up new challenges on how to manage offloading to keep the offloading failure rate and overall energy consumption low. (1) Offloading failure rate could be impacted by the application execution time and the communication link established between the vehicle and the offloading targets. 
If the offloaded tasks can not complete within the application's tolerance time, the offloading fails; if the communication link is broken during the offloading process, the offloading fails. This requires that the offloading strategy should minimize the overall application execution time, and minimize the communication interruption by taking into consideration the vehicle's continuous movements. (2) Energy consumption also plays an important role in offloading [11,12]. Both communication and task execution consumes vehicle's energy. An offloading strategy that can minimize the energy consumption would benefit vehicle's endurance. Therefore, different offloading strategies can impact both objectives simultaneously, potentially in opposite directions. This necessitates the joint optimization of the two objectives. Researchers have done a considerable amount of work on CAV computation offloading strategy in the VEC environment. Table 1 lists a group of existing research work, where we mark the objectives, conditions and offloading schemes that are considered for each approach. As we can see, they have the following limitations. To address the above limitations, our work establishes a new computation offloading model, which takes into consideration task dependencies, vehicle mobility, and different computing resources to offload tasks to. Our goal is to jointly optimize the objectives offloading failure rate and energy consumption. To this end, our work employs Deep Reinforcement Learning (DRL), which excels at resolving the dimension disaster problem existed in the traditional reinforcement learning methods [26,27]. More specifically, our work designs an efficient computation offloading strategy based on Deep Q-Network Based on Simulated Annealing (SA-DQN) algorithm in the VEC environment. The main contributions of this paper are as follows. • A new computation offloading model for CAV applications in the VEC environment is established based on Markov Decision Process (MDP). Since the computation tasks from application decomposition can be processed locally, offloaded to RSUs or cooperative vehicles, the model introduces task queues in vehicles and RSUs to model the task transmission and processing. Moreover, vehicle mobility and temporal dependency among tasks are also considered in the model. • The work designs a computation offloading strategy based on deep reinforcement learning, and leverages the SA-DQN algorithm to optimize the joint objectives. • The proposed computation offloading strategy is evaluated using real vehicle trajectories. The simulation results show that the proposed strategy can effectively reduce the offloading failure rate and the total energy consumption. The rest of this paper is organized as follows. Section II introduces the related work of computation offloading in VEC. Section III formally defines the computation offloading problem of CAV applications and the optimization goal, and analyzes the computation offloading process using an example. Section IV proposes the computation offloading strategy based on DRL, and designs the SA-DQN algorithm for the computation offloading strategy. Section V presents and analyzes the experimental results, as well as the performance differences between SA-DQN algorithm and traditional reinforcement learning algorithms. Section VI summarizes this work and looks into future work. 
Related work As shown in Table 1, there have been a wide range of research work on CAV application offloading strategy with different objectives, conditions and offloading schemes. Most existing studies focused on the optimization of either execution delay or energy consumption, but rarely consider joint optimization of execution delay and energy consumption. Wu et al. [13] proposed an optimal task offloading approach using 802.11p as the transmission protocol of inter vehicle communication, in which transmission delay and computation delay are considered to maximize the long-term return of the system. Although a large number of experimental results show that the proposed optimization approach has good performance, the optimization of energy consumption is not considered in this study. Zhang et al. [14] proposed an effective combined prediction mode degradation approach considering the computation task execution time and vehicle mobility. Although the simulation results show that the approach greatly reduces the cost and improves the task transmission efficiency, it does not consider the energy consumption of communication and processing. Jang et al. [15] considered the change of communication environment, jointly optimized the offloading ratio of multiple vehicles, and optimized the total energy consumption of vehicles under the delay constraint. Although the proposed energy-saving offloading strategy significantly reduces the total vehicle energy consumption, it does not consider the processing energy consumption of the computing node. Pu et al. [16] designed an online task scheduling algorithm to minimize the energy consumption of vehicles in the network for multi-vehicle and multi-task offloading problem. Simulation results show that the proposed framework has excellent performance. Wang et al. [17] proposed a dynamic reinforcement learning scheduling algorithm to solve the offloading decision problem. Although the experimental results show that the performance of the proposed algorithm is better than other benchmark algorithms, the offloading of dependent tasks is not considered. Khayyat et al. [18] proposed a distributed deep learning algorithm to optimize the delay and energy consumption. The simulation results show that the algorithm has faster convergence speed. Some research work only focused on independent task offloading in the VEC environment. Dai et al. [22] proposed a method based on utility table learning, which verified the effectiveness and scalability of the method in various scenarios. The work considers both cloud computing and edge computing platform to offload tasks. Ke et al. [23] proposed a computation offloading method Some studies only considered offloading tasks to RSU or processing tasks locally. Han et al. [24] established a MDP model for the problem, and optimized the offloading strategy with deep reinforcement learning. Although the study considers the change of vehicle's position in different time slots, it does not make full use of cooperative vehicle resources. Dai et al. [25] transformed the load balancing and offloading problem into an integer non-linear programming problem to maximize the system utility. Experiments show that the strategy is significantly better than the benchmark strategy in terms of system utility. Although the mobility of vehicles is considered in this study, the offloading mode does not consider offloading tasks to cooperative vehicles. Some studies did not consider the change of vehicle positions in different time slots. Xu et al. 
[19] proposed an adaptive computation offloading method to optimize the delay of task offloading and resource utilization. Experimental results show the effectiveness of this method. Liu et al. [20] offloaded multiple vehicle applications to RSU, divided each application into multiple tasks with task dependencies, and proposed an efficient task scheduling algorithm to minimize the average completion time of multiple applications. This work divides the application into several tasks to effectively reduce the completion time of application. Guo et al. [21] introduced Fiber-Wireless (FI-WI) integration to enhance the coexistence of VEC network and remote cloud, and proposed two task offloading approaches. The experimental results show that the proposed approaches have advantages in reducing the task processing delay. Problem definition and analysis In this section, we first define the problem by modeling the network, application, communication and computation, then analyze an example of the proposed model. Problem definition Network model The VEC network model is shown in Fig. 1. The vehicles are categorized into Task Vehicle (TaV) and Service Vehicle (SeV) [28]. Both are equipped with OBU, and hence they have certain computing capability. TaV is the user of applications, which can be offloaded to SeVs after application decomposition to utilize the computing resources of cooperative vehicles in the neighborhood. There are fixed RSUs deployed on the roadside. Each RSU is equipped with an MEC server, which is integrated with wired connection [29]. They also have certain computing capability. SeVs and RSUs are referred to as Service Nodes (SNs) [30]. The two offloading schemes are referred to as remote offloading. To better describe the generation, transmission and processing process of CAV applications, we divide the vehicle travel time into t time slots, with each slot of length ε. In each time slot, the VEC system is quasi-static; that is, the relative position of the vehicle and the wireless channel state are stable, while they may change across different time slots [31]. Application model Most CAV applications use algorithms based on computer vision or deep learning to process a large amount of data collected by on-board sensors (cameras, radars, etc). CAV local applications and various third-party applications are usually computation-intensive or delay-sensitive applications. They typically need to use a lot of computing resources to process real-time data to meet the requirements of low execution delay [32]. The OBU on CAVs with limited computing resources cannot meet the requirements of applications. Therefore, to fully utilize the computing resources of RSUs and SeVs within CAV's communication range, CAV applications are decomposed into multiple smaller tasks, potentially with dependecies among them. Let's assume there are z different CAV applications, and each of them can be generated with probability 1/z in each time slot. As shown in Fig. 2, each CAV application can be decomposed to multiple tasks, denoted as A i = {G i , l i }(i ∈ {1, 2, ..., z}), where G i is the temporal dependency of decomposed tasks and l i is the tolerance time for the i-th application. Specifically, the temporal dependency of tasks is represented by a directed acyclic graph (DAG) The direct predecessor task T i u must be completed before T i v can be processed. 
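A minimal sketch of this application model is given below: each application A_i = {G_i, l_i} is a set of tasks with per-task data sizes and direct-predecessor sets, and tasks may only start once all direct predecessors have finished. Because Eq. (1) defining the task depth is not reproduced above, the depth rule used here (entry tasks have depth 1, otherwise one plus the maximum depth of the direct predecessors) is an assumption, chosen to be consistent with the depth-then-index queue ordering described in the task queue model that follows.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class Task:
    index: int                                           # task index u within the application
    data_size: float                                      # d_u^i, task data size in bits
    predecessors: Set[int] = field(default_factory=set)   # direct-predecessor set R_u^i


@dataclass
class Application:
    tasks: Dict[int, Task]   # G_i: DAG given by per-task predecessor sets
    tolerance: float         # l_i, tolerance time of the application

    def depth(self, u: int, _memo=None) -> int:
        # Assumed depth rule: 1 for entry tasks, else 1 + max depth of predecessors.
        _memo = {} if _memo is None else _memo
        if u not in _memo:
            preds = self.tasks[u].predecessors
            _memo[u] = 1 if not preds else 1 + max(self.depth(p, _memo) for p in preds)
        return _memo[u]

    def queue_order(self) -> List[int]:
        # Tasks are queued by depth first, then by task index (as in Q_t).
        return sorted(self.tasks, key=lambda u: (self.depth(u), u))


# Hypothetical 3-task application: T1 -> T2 and T1 -> T3, tolerance of 4 time slots
app = Application(
    tasks={1: Task(1, 2e6), 2: Task(2, 1e6, {1}), 3: Task(3, 3e6, {1})},
    tolerance=4.0,
)
print(app.queue_order())   # [1, 2, 3]
```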
The set of direct predecessors of a task T_v^i can be denoted as R_v^i; T_v^i cannot be processed until all tasks in R_v^i have been completed. Tasks without any direct predecessor are called entry tasks, while tasks without any direct successor are called exit tasks. Moreover, each decomposed task T_u^i is characterized by its task index u, its task depth Deep_{T_u^i} defined by Eq. (1), and its task data size d_u^i.

Task queue model
The task queue model is illustrated in Fig. 3. Considering the transmission and processing of task data, we denote the task queue on the TaV as Q_t, the task queue on a SeV as Q_s, and the task queue on an RSU as Q_r. Each task queue holds tasks obtained from the decomposition of CAV applications. Tasks in a queue are sorted first by task depth and then by task index, in ascending order. For the task queue Q_t we have the following definitions: i) Q_t holds the tasks decomposed from the TaV's applications; ii) the TaV can only transmit or process the task data at the head of Q_t. For the task queues Q_s and Q_r we have the following definitions: i) Q_s and Q_r hold the tasks transmitted by the TaV; ii) SeVs can only process the task data at the head of Q_s, and RSUs can only process the task data at the head of Q_r.

Communication model
The TaV can communicate with SNs to transmit the task data at the head of Q_t. We define the channel bandwidth as B, the transmission power of the TaV as p_tr, the channel fading coefficient as h, the Gaussian white noise power as χ, and the path loss exponent. In the i-th time slot, the transmission rate from the TaV to SN j is determined by these quantities and by SN_{i,j}, the distance between the TaV and SN j. In the i-th time slot, task data can be transmitted only when the distance between the TaV and SN j is within the coverage radius of the SN. If the TaV transmits task data to SN j, the amount of task data transmitted within the slot is determined by the transmission rate, and the data transmission between the TaV and SN j causes a corresponding energy consumption.

Computation model
The TaV can either transmit the task at the head of Q_t to SNs or process it locally. SNs only process the task at the head of their own task queue. The computation model therefore has two parts: tasks processed by the TaV, and tasks processed by SNs. a) Tasks processed by the TaV. The power consumption of the TaV when processing tasks locally is determined by κ_tav, the effective switched capacitance coefficient related to the chip architecture of the vehicle [33], and f_tav, the local computing capacity of the TaV (i.e., its CPU frequency in cycles/s). Processing tasks therefore consumes a certain amount of energy. The data size that the TaV can process in a time slot is determined by f_tav, the slot length and c, the processing density of task data (in CPU cycles/bit). b) Tasks processed by SNs. The power consumption of SN i when processing locally is determined by κ_i, the effective switched capacitance coefficient related to the chip architecture of SN i, and f_SN_i, the processing capability of SN i. SNs processing tasks likewise consume a certain amount of energy, and the data size that SN i can process in a time slot is determined analogously. In a time slot, the TaV can process task data locally or offload the task to the SNs within its communication range. The offloading decision adopted by the TaV is represented by the 0-1 decision variables in Eq. (12): ν_i indicates whether the TaV processes task data locally in the i-th time slot, and o_{i,j} indicates whether the TaV offloads the task to SN j in the i-th time slot.
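To make the communication and computation models above concrete, the sketch below computes the per-slot quantities they describe. The exact equations are not reproduced in this version of the paper, so the formulas here are a minimal sketch assuming the standard Shannon-capacity rate model and the κ·f³ dynamic power model implied by the definitions of B, p_tr, h, χ, the path loss exponent, κ, f, c and ε; the symbol names (in particular `alpha` for the unnamed path loss exponent) are illustrative only, not the paper's definitive formulation.

```python
import math

# Minimal per-slot communication/computation sketch, assuming standard
# Shannon-capacity and kappa*f^3 models. Symbols follow the paper's notation
# where it is given; `alpha` stands in for the unnamed path loss exponent.

def transmission_rate(B, p_tr, h, chi, dist, alpha):
    """Rate (bits/s) from the TaV to an SN at distance `dist`, with
    distance-dependent path loss of exponent `alpha` and noise power chi."""
    snr = p_tr * h * dist ** (-alpha) / chi
    return B * math.log2(1.0 + snr)

def transmitted_data(rate, eps):
    """Task data (bits) that can be sent in one time slot of length eps."""
    return rate * eps

def transmission_energy(p_tr, eps):
    """Energy spent by the TaV transmitting for one slot."""
    return p_tr * eps

def processing_power(kappa, f):
    """Dynamic power of a node processing tasks (kappa: effective switched
    capacitance coefficient, f: CPU frequency in cycles/s)."""
    return kappa * f ** 3

def processed_data(f, c, eps):
    """Task data (bits) a node can process in one slot, given the processing
    density c (CPU cycles/bit)."""
    return f * eps / c

def processing_energy(kappa, f, eps):
    """Energy consumed by local processing over one slot."""
    return processing_power(kappa, f) * eps
```

These per-slot quantities are exactly what the 0-1 decision variables ν_i and o_{i,j} trade off: transmitting spends p_tr·ε but moves data to a faster node, while processing locally spends κ_tav·f_tav³·ε and clears f_tav·ε/c bits from the head of Q_t.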
SNs process a task only when it is at the head of their task queue; θ_{i,j} indicates whether SN j processes task data in the i-th time slot. β and ζ are the weight coefficients of execution delay and energy consumption, respectively, where β + ζ = 1. fail_i is the offloading failure penalty in the i-th time slot, computed over D_i^loss, the set of data sizes of the offloading-failed tasks (unprocessed tasks belonging to offloading-failed applications in the task queues), where d_{i,j}^loss is the data size of the j-th offloading-failed task in the i-th time slot. There are two cases that can lead to application offloading failure: 1) while an SN is receiving task data, the distance between the TaV and the SN moves out of the communication range during the data transmission; 2) the completion time of the application is greater than its tolerance time. In Eq. (12), δ_i^tav is the energy consumption caused by the TaV processing tasks (Eq. (14)), δ_i^tSN is the energy consumption caused by SNs processing tasks (Eq. (15)), and δ_i^com is the energy consumption of communication (Eq. (16)). The constraint indicates that, within a time slot, a task can only be processed locally or offloaded to SNs.

Example analysis
Figure 4 illustrates an example of the computation offloading process in the VEC environment. 1) In the first time slot, the TaV generates the first application A_1 with a tolerance time of 4 time slots. It is decomposed into three tasks, which are kept in Q_t. In this time slot, the TaV processes task data locally; loss_1 is the energy consumption caused by the TaV processing task T_1^1 locally, multiplied by the weight coefficient of energy consumption optimization. 2) In the fourth time slot, task T_5^2 is completed while task T_3^1 belonging to A_1 has still not been processed, so the offloading of application A_1 fails because its completion time exceeds its tolerance time. SeV γ_2 processes task data locally and task T_4^2 is completed; all tasks of A_2 have then been processed, so A_2 is executed successfully. loss_4 is the weighted sum of the total data size of the unprocessed task d_{4,1}^loss and the energy consumption caused by the TaV processing locally, as well as the energy consumption caused by γ_2 processing its task.

Computation offloading strategy based on deep reinforcement learning
Reinforcement Learning (RL) algorithms have four key elements in model building: agent, environment, action and reward. The problem is usually modeled as a Markov Decision Process (MDP). In the learning process, the agent observes the current environment and chooses actions according to a strategy. After executing the action, the agent observes the reward and transitions to the next environment state. RL algorithms imitate the way humans learn: their purpose is to maximize the total reward by appropriately adjusting the strategy as the agent interacts with an unknown environment. In this section, we first describe the computation offloading problem as an MDP model to determine these four key elements. Secondly, we introduce the Q-learning algorithm. Finally, because the dimension of the state space in the VEC environment is large, traditional reinforcement learning methods can hardly solve the complex computation offloading problem in VEC; we therefore adopt SA-DQN to optimize the computation offloading strategy, and describe the computation offloading strategy based on SA-DQN.

MDP model
In order to design the computation offloading strategy based on SA-DQN, we first establish an MDP model. It can fully describe the offloading scheduling model. The MDP is the basic model of RL algorithms. Since the probability of a state transition in a real environment is often related to historical states, such a model is difficult to establish directly. Therefore, the model can be simplified according to the Markov property (i.e., the next state of the environment is related only to the current state information, not to the historical states), so that the next state depends only on the current state and the action taken [34]. Next, we will introduce each key element of the MDP.
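The paper's own definitions of the state, action and reward are not reproduced in this extract. As a rough illustration of how the elements described above could map onto code, the sketch below encodes a state from the queue backlogs and TaV-SN distances, an action as the 0-1 decision variables of Eq. (12), and a reward combining the weighted delay/energy cost with the failure penalty of the failed-application data sizes. The field names, the exact state composition and the sign convention are assumptions for illustration, not the paper's definitive MDP formulation; the default weights 0.4/0.6 follow the experimental settings given later.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical encoding of the MDP elements; field names are illustrative.

@dataclass
class State:
    tav_queue_bits: float        # backlog of Q_t (bits)
    sn_queue_bits: List[float]   # backlogs of Q_s / Q_r for each SN
    sn_distances: List[float]    # distance between the TaV and each SN this slot

@dataclass
class Action:
    process_locally: int         # nu_i in Eq. (12): 1 if the TaV processes locally
    offload_to: List[int]        # o_ij: 1 if the head task is sent to SN j

def reward(delay, energy, fail_penalty, beta=0.4, zeta=0.6):
    """Weighted cost turned into a reward. Assumed sign convention: the agent
    maximises reward, so delay, energy (beta + zeta = 1) and the offloading
    failure penalty enter negatively."""
    return -(beta * delay + zeta * energy + fail_penalty)
```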
Q-learning algorithm
In this section, we introduce the traditional RL algorithm Q-learning. Q-learning is a model-free temporal difference (TD) algorithm: it does not require a state transition probability matrix. When updating the value function, the algorithm uses the maximum value over the actions of the next state, while the action actually selected does not necessarily correspond to that maximum; this leads to an optimistic estimate of the value function. Because of this feature, Q-learning is an off-policy learning method [35]. Q-learning optimizes the value function using the four-tuple (S_k, A_k, R_k, S_{k+1}), where S_k is the environment state in the k-th time slot, A_k is the action chosen, R_k is the immediate reward, and S_{k+1} is the environment state of the next time slot after the transition. The Q-learning value function is updated as

Q(S_k, A_k) ← Q(S_k, A_k) + α [R_k + γ max_{A_{k+1}} Q(S_{k+1}, A_{k+1}) − Q(S_k, A_k)],   (17)

where α is the learning rate, representing the degree to which the value function is updated; R_k is the immediate reward obtained by transferring to the next state; γ is the discount factor, representing the impact of the subsequent state's value on the current state; and max_{A_{k+1}} Q(S_{k+1}, A_{k+1}) is the maximum value of the next state. In other words, the Q-learning update adds to the current value the product of the learning rate and the difference between the target Q-value R_k + γ max_{A_{k+1}} Q(S_{k+1}, A_{k+1}), also known as the TD target, and the estimated Q-value Q(S_k, A_k).

SA-DQN algorithm
The value function in the Q-learning algorithm can be stored simply in a table. In practice, however, the state space of the computation offloading problem in VEC is large, and building a value function table would lead to severe memory usage and time cost. To address this problem, known as the curse of dimensionality, we describe the computation offloading problem as a DRL process: we use function approximation to combine Q-learning with a Deep Neural Network (DNN), replace the value function table with a Q-network, and adjust the network weights by training so as to fit the value function [36]. Compared with Q-learning, DQN has three main features: i) The Q-network is expressed as Q(S_k, A_k; θ), where θ represents the weights of the neural network; the Q-network fits the value function by updating θ in each iteration. ii) To improve learning efficiency and remove the correlation among successive training samples, DQN adopts the experience replay technique: the sample observed in the k-th time slot, e_k = (S_k, A_k, R_k, S_{k+1}), is first stored in the replay memory D, and a minibatch is then sampled at random from D to train the network. This breaks the correlation among samples and makes them approximately independent.
iii) Two neural networks with the same structure but different weights are used in DQN: the target Q-network and the estimated Q-network. The estimated Q-network always has the latest weights, while the weights of the target Q-network are relatively fixed and are updated from the estimated Q-network only every ι time slots. The network used to calculate the TD target is called the TD network. If the value function and the TD target were computed with the same parameters, the correlation among samples would easily make training unstable; introducing two networks addresses this problem. The weights of the target Q-network are denoted θ̄_k and those of the estimated Q-network θ_k, where θ̄_k = θ_{k−ι}, meaning that θ̄ is updated from θ every ι time slots. In the DQN algorithm, Equation (17) is transformed into the TD target

ȳ_k = R_k + γ max_{A_{k+1}} Q(S_{k+1}, A_{k+1}; θ̄_k).

To minimize the difference between the estimated value and the target value, we define the loss function

L(θ_k) = E[(ȳ_k − Q(S_k, A_k; θ_k))^2].

By differentiating L(θ_k) with respect to θ_k we obtain the gradient, and the update of the value function in DQN is therefore performed by gradient descent on this loss function. To balance exploration and exploitation in DQN, the Metropolis criterion [37] is used to choose actions, together with a cooling strategy in which T_0 is the initial temperature, k is the index of the current episode, and θ is the cooling coefficient.

The computation offloading strategy based on the SA-DQN algorithm is shown in Algorithm 1. In every episode, the VEC network is first initialized, as shown in Algorithm 2. In every time slot, it is then determined whether there is a communication interruption, which is handled as shown in Algorithm 3. After the TaV chooses the offloading decision, the VEC network is updated, as shown in Algorithm 4. If the TaV chooses to process tasks locally, the task queue of the TaV is updated, as shown in Algorithm 5. If the TaV chooses to transmit tasks to SNs, the task queues of the TaV and the SNs are also updated, as shown in Algorithm 6. In every time slot, it is also determined whether any application has failed because it could not complete within its tolerance time, as shown in Algorithm 7. The interaction among these algorithms is shown in Fig. 5.

Parameter settings
All simulation experiments were conducted on a Windows 10 64-bit operating system with an Intel(R) Core(TM) i7-4720HQ CPU @ 2.60 GHz and 8 GB RAM. We use TensorFlow 1.15.0 with Python 3.6 to implement the SA-DQN algorithm. In the experiment, we consider a real two-hour vehicle trajectory data set. At the two-dimensional coordinate (1250, 600) of the trajectory, we place an RSU with a coverage radius of 300 meters. β is set to 0.4 and ζ is set to 0.6. There are six CAV applications, and each application can be divided into three tasks. The length of a time slot is set to 10 ms. The data size is distributed uniformly from 1 to 2, and the tolerance time is distributed uniformly from 50 to 100 time slots. Table 2 gives the detailed simulation parameter settings.
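Before turning to the experimental comparison, the sketch below illustrates the two SA-DQN ingredients described above: a Metropolis-style action choice driven by a cooled temperature, and the TD target computed from the periodically synchronised target network. It is a framework-agnostic sketch rather than the paper's TensorFlow 1.15 implementation; exponential cooling T_k = T_0·θ^k and the particular acceptance rule are assumptions, since the paper's cooling equation is not reproduced here, and `target_q_net` stands in for whichever Q-network is used.

```python
import numpy as np

def temperature(T0, episode, theta):
    # Assumed exponential cooling schedule: T_k = T0 * theta^k.
    return T0 * (theta ** episode)

def metropolis_select(q_values, T, rng=np.random):
    """Metropolis-criterion action choice: a random candidate action is
    accepted over the greedy action with probability
    exp((Q_candidate - Q_greedy) / T); otherwise the greedy action is kept."""
    greedy = int(np.argmax(q_values))
    cand = rng.randint(len(q_values))
    if cand == greedy:
        return greedy
    accept_p = np.exp((q_values[cand] - q_values[greedy]) / max(T, 1e-8))
    return cand if rng.rand() < accept_p else greedy

def td_target(reward, next_state, done, gamma, target_q_net):
    """TD target y = R_k + gamma * max_a Q(S_{k+1}, a; theta_bar), computed
    with the target network whose weights are copied from the estimated
    network every iota time slots."""
    if done:
        return reward
    return reward + gamma * np.max(target_q_net(next_state))
```

A gradient step on (ȳ_k − Q(S_k, A_k; θ_k))² with respect to θ_k then gives the DQN update described above, with minibatches drawn from the replay memory D.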
After a number of parameter-tuning experiments to achieve good convergence of the RL algorithms, the parameter settings of the RL algorithms are shown in Table 3.

Comparative offloading strategies
To verify the effectiveness of the proposed computation offloading strategy, we designed two groups of comparative experiments. In the first group, we compare against TD(0) algorithms combined with simulated annealing, namely Q-learning [39] and Sarsa [40], and against TD(λ) algorithms [41] combined with simulated annealing, namely Sarsa(λ) and Q-learning(λ). In the second group, we compare four offloading schemes: Scheme 1 is our proposed strategy; Scheme 2 only considers tasks processed by the TaV; Scheme 3 only considers tasks processed by the TaV or offloaded to the RSU; Scheme 4 only considers tasks processed by the TaV or offloaded to cooperative vehicles.

Experimental results
Offloading strategies with different algorithms
Figure 6 shows the average reward of the computation offloading strategy based on SA-DQN and of the comparative algorithms over every 20 episodes. It can be seen that, in the process of optimizing the offloading strategy, SA-DQN continuously interacts with the environment in every episode, updates the weights of the neural network, and approaches the optimal value function. As the number of episodes increases, the reward increases. Around the 80th episode, the average reward obtained by SA-DQN becomes nearly optimal and stable, remaining at about 978. Compared with the comparative algorithms, SA-DQN converges faster. The TD(λ) algorithms converge around the 100th episode, while the TD(0) algorithms converge around the 120th episode, indicating that TD(λ) converges faster than TD(0). A likely reason is that TD(λ) introduces eligibility traces and adopts a multi-step updating strategy, which accelerates convergence. In the experiment, SA-DQN does not suffer from divergence or oscillation, which supports the feasibility of the computation offloading strategy based on SA-DQN proposed in this paper.

Figure 7 shows the average total offloading energy consumption of the strategies optimized by SA-DQN and the comparative algorithms over every 20 episodes. The average total offloading energy consumption of SA-DQN and of the comparative algorithms decreases over the episodes. Around the 160th episode, the average total offloading energy consumption of each algorithm becomes nearly optimal and stable. Compared with the comparative algorithms, the average total offloading energy consumption of SA-DQN is maintained at about 30, the lowest level. Among the comparative algorithms, the average total offloading energy consumption of Sarsa and Sarsa(λ) remains at about 35, while that of Q-learning and Q-learning(λ) remains at about 40. This shows that the on-policy methods converge to a lower level than the off-policy methods when optimizing the average total offloading energy consumption. This is because on-policy methods update the value function with samples generated by the current policy; they can therefore converge faster, although they are more prone to falling into local optima.

Figure 8 shows the average application offloading failure rate obtained by SA-DQN and the comparative algorithms over every 20 episodes. It can be seen that, with the increase of episodes, the average offloading failure rate of every algorithm decreases.
By the 160th episode, except for Q-learning, the average offloading failure rates obtained by the other algorithms have converged, and the average failure rate of SA-DQN reaches a low level faster than the other algorithms. The average offloading failure rate of the on-policy methods is lower than that of the off-policy methods, showing that on-policy learning converges to a lower level when optimizing the average offloading failure rate. This is because on-policy learning is a more conservative strategy and, by following the current policy, can converge to a lower level faster.

Figure 9 shows the average application offloading failure rate of the offloading strategies based on the different schemes as the data size varies. As the data size increases, the average application offloading failure rate of all strategies increases. The failure rate obtained by our proposed strategy is the lowest among the compared strategies when the data size is 32, because Scheme 1 offers multiple offloading options and therefore high offloading flexibility. The average offloading failure rate of Scheme 3 is the closest to that of Scheme 1, because the computing capacity of the RSU is higher than that of a vehicle; although offloading to the RSU incurs a certain communication energy consumption, it greatly reduces the completion time of tasks, so the completion time of an application is less likely to exceed its tolerance time and the offloading failure penalty decreases. The average application offloading failure rate of Scheme 4 is the highest: its offloading targets are limited to local processing and cooperative vehicles, which requires a certain transmission time, and the processing capacity of vehicles is too limited to handle large amounts of data. As the data size increases, the completion time of an application is more likely to exceed its tolerance time, the penalty for application offloading failure increases significantly, and the average application offloading failure rate therefore rises.

Figure 10 shows the average application offloading failure rate of the offloading strategies as the tolerance time varies. As the tolerance time increases, the average application offloading failure rate of all strategies decreases. Compared with the other strategies, the proposed strategy reaches the lowest level when the tolerance time is 90. This is because, when the application tolerance time increases, the application has more time to be offloaded and its completion time is less likely to exceed its tolerance time, so the application offloading failure rate decreases. The average offloading failure rate of Scheme 3 is again the closest to that of Scheme 1: the RSU has strong computing capacity, which greatly reduces task completion time, and with a longer application tolerance time Scheme 3 can make full use of the computing power of the RSU, so its average application offloading failure rate falls significantly. In contrast, the average offloading failure rates of Scheme 2 and Scheme 4 remain at a high level. One possible reason is that both schemes offload tasks only to vehicles with limited processing capacity;
even as the application tolerance time increases, the offloading of tasks with a large data size may still fail, so the application offloading failure rate remains high. In contrast, both Scheme 1 and Scheme 3 can offload tasks to the RSU with its stronger computing capacity; the number of successfully offloaded applications therefore increases and the application offloading failure rate is lower.

Conclusion
To address the problem of computation offloading for CAV applications in the VEC environment, this paper proposed a computation offloading strategy based on the SA-DQN algorithm. In the simulation experiments, the proposed strategy was evaluated on a real vehicle trajectory. The experimental results show that the proposed computation offloading strategy based on SA-DQN performs well and can effectively reduce the total offloading energy consumption and the offloading failure rate of CAVs. In future work, we will design a collaborative computation offloading strategy in an End-Edge-Cloud orchestrated architecture, which can transfer complicated computation tasks to the remote cloud for further processing and improve the flexibility of computation offloading. We will also consider more dynamic factors in the VEC environment to bring the model closer to the real world, and will take real-world on-board applications into account.
8,057
2021-06-08T00:00:00.000
[ "Computer Science" ]
The harm principle, personal identity and identity-relative paternalism

Abstract
Is it ethical for doctors or courts to prevent patients from making choices that will cause significant harm to themselves in the future? According to an important liberal principle, the only justification for infringing the liberty of an individual is to prevent harm to others; harm to the self does not suffice. In this paper, I explore Derek Parfit's arguments that blur the sharp line between harm to self and others. I analyse cases of treatment refusal by capacitous patients and describe different forms of paternalism arising from a reductionist view of personal identity. I outline an Identity Relative Paternalistic Intervention Principle for determining when we should disallow refusal of treatment where the harm will be accrued by a future self, and consider objections including vagueness and non-identity. Identity relative paternalism does not always justify intervention to prevent harm to future selves. However, there is a stronger ethical case for doing so than is often recognised.

Harmful choices
When we know that someone is making a choice that will predictably risk or cause him to suffer significant harm, we have a basic duty of beneficence to try to prevent that. 1 If James (box 1) were our friend or family member, we should try to discourage his choice and promote a better alternative. If we have a professional relationship with the person (eg, if we are James's doctor), we would have an additional professional responsibility to advise against this decision. 1 But what if the person persists in his choice against our advice? Should we take further steps to restrict him? Should we restrain him or forbid him from making this choice? Should James's employer insist on him being vaccinated to continue to work? Would a state be justified in mandating vaccination? In terms of the law, and intervention by the state, one oft-cited response draws on an important liberal principle articulated by the philosopher John Stuart Mill. Mill's 'harm principle' claims that the only justification for infringing the liberties of an individual is to prevent harm to others; harm to the self does not suffice. (Mill, p23) 2 For debates about vaccination mandates in the COVID-19 pandemic, this means that ethical arguments have focused entirely on the effect of vaccination on the risk of transmission to others, or on the use of scarce hospital resources.
However, as the vaccination rate in the wider population has increased, and the pressure on hospitals has abated, we might return to the paternalistic reason for wanting James to be vaccinated. i Setting aside any question of harm to others, it would be far better for his own health for James to receive the vaccine. 3 Could that possibly justify intervention to coerce or mandate vaccination? The harm principle provides a simple and firm bolster against paternalism, against others' well-meaning interference in our own lives. But it draws a sharp line between decisions that are harmful to the self and those that are harmful to others. It seems to imply that these are radically different types of decision and demand different ethical responses. In this paper, I explore some reasons suggested by the philosopher Derek Parfit for dissolving or blurring the distinction between harm to self and harm to others. 4 On Parfit's account, some apparently self-harming decisions are relevantly like harming someone else. ii He noted that his reductionist account of personal identity might be considerably more permissive of paternalism than traditional ethical approaches, though he did not clearly identify whether paternalism would be justified in cases like that of James. In this paper, I identify two different versions of reductionist paternalism, according to which the harm principle is undermined and health professionals and states may be justified in being paternalistic in a wider range of cases. The reductionist might claim that paternalism is more easily justified, or alternatively that what is conventionally thought of as hard paternalism is not actually 'paternalistic'. Although I will take the first approach in this paper, the second would be equally plausible. I also explore the relevance of Parfit's 'wide value-based objective view' of reasons and suggest that this supports what I call Identity Relative Paternalism. To my knowledge, this paper is the first to draw a connection between Parfit's later writing on the nature of reasons (his wide value-based objective view) and his earlier work on personal identity.

i This might also apply if the evidence that vaccination prevents transmission weakens, as appears to be the case with the omicron variant of COVID-19. 29

ii Others have observed some of the implications of Parfit's account for applied ethics and paternalism. For example, Jeff McMahan, in The Ethics of Killing, notes that weakening of the prudential reasons to care for one's far future weakens the arguments against paternalism. (McMahan, p288) 30 Cyril Hedoin has explored some of these issues. 31 32 He argues that the conventional idea of a trade-off between autonomy and beneficence in instances of soft paternalism is mistaken, because a reductionist identity account (or Parfit's separate notion of rational consent) diminishes the significance of personal autonomy.

Box 1 Case: vaccination refusal
James is in his late 50s and has a number of health problems. He would be at risk of becoming seriously unwell if he were to contract COVID-19. Earlier in the pandemic, a number of James's friends and family members became seriously ill and two died. However, when James is offered a COVID-19 vaccine, he refuses. He has come to believe that the vaccine contains a microchip that would allow him to be tracked. Despite all evidence to the contrary, James persists in his belief and resolutely refuses to be vaccinated.
I will focus on hard paternalism, since it is here that the reductionist account of identity may have most radical implications, iii and will not discuss specifically other arguments in favour of paternalism. iv I will focus here on medical examples and refusal of treatment by patients, identifying a spectrum of cases where relations of psychological connectedness and continuity might hold to stronger or weaker degrees despite intact cognition and in the absence of brain damage. One reason for focusing on patient refusals of treatment is that these typically (vaccination is an exception) are interpreted as being associated with harm only to the individual and not to other people. In cases where patients demand treatment (that the doctor believes will cause harm to the patient and is tempted to paternalistically refuse to provide) there may often be implications for resources that could potentially have impacts on other patients and provide a separate justification for refusal. v Much has been written previously on cases involving significant changes in personality and cognition-particularly in relation to the validity of advance directives in dementia. [5][6][7][8][9] For this paper, I will briefly outline some possible implications of Identity Relative Paternalism for these cases in the section on practical implications, but will not discuss dementia cases in detail. That is partly because I am interested in exploring the wider potential implications of a reductionist account of identity for paternalism (ie, beyond cases of brain injury or dementia). Second, questions about advance decisions and dementia are complicated by potential changes in moral status and lack of capacity in the later time points, so the individual's contemporaneous views or interests might or might not be taken to have lesser ethical weight than the prior individual's. THe HArm prinCiple the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good is not a sufficient warrant (Mill, p23) 2 Mill articulated his harm principle early in his work, 'On Liberty'. Mill was writing specifically about the justification for government actions and in particular the important limits iii for example, as defined by Kuhse. 9 If these arguments succeed, they will also apply to soft paternalism. iv For example, Ben Davies has argued in favour of paternalism in cases of predictable changes in future evaluative outlook. 33 Reductionist paternalism is potentially compatible with other arguments supporting paternalism. It may be, for example, that in some cases the arguments I present (in relation to personal identity and future selves) do not apply, but other reasons would support paternalistic intervention. v One complication (which I will not have space to explore) is that in some cases, refusals of treatment can also potentially cause harm to other people-for example if they lead to costly complications that then need to be treated in a publicly funded healthcare system. on government interference in individuals' lives. His harm principle warns against enacting paternalistic laws-ones that are designed to restrict individuals' choices for their own good. The same principle, though, can be extended to other situationsfor example to medicine. It provides a strong rebuttal against doctors who might be inclined to make paternalistic decisions motivated by concern for patients' well-being. 
According to the harm principle, doctors should not limit patients' freedoms for the sake of the patient's own good, though they may be justified in doing so if the patient's actions would cause harms to other people. vi Mill was forthright about the absolute freedom of individuals to make decisions that would affect only themselves: 'Over himself, over his own body and mind, the individual is sovereign'. (Mill,p22) 2 vii In contrast, '[a]cts injurious to others require a totally different treatment'. (Mill,p140) 2 'The distinction between the loss of consideration which a person may rightly incur by defect of prudence or of personal dignity, and the reprobation which is due to him for an offence against the rights of others…makes a vast difference both in our feelings and our conduct towards him'. (Mill, 2 Mill provided two key arguments against paternalism. The first was based on the value for individuals of freedom to have opinions and to act. Such freedom is instrumentally valuable because it enriches individual lives and allows the development of human faculties. 'The free development of individuality is one of the leading essentials of well-being'. (Mill,p102) 2 The second was on the basis of fallibility of paternalistic judgements. Such judgements are necessarily based on presumptions that may be mistaken. (Mill, p1150) 2 The individual's own self-knowledge is much more reliable. viii I will return to these arguments against paternalism. vi NB Some philosophical debate has cast doubt on the distinction between harm to self vs harm to other people in Mill's harm principle. Mill himself admitted that actions will often affect both the self and other people. (Mill, p145) 2 Some philosophers have suggested that it is better to distinguish not between harm to self and harm to other people, but between self-regarding actions and non-self-regarding actions. 34 vii Mill excluded from this principle those who are not 'in the maturity of their faculties'. He contended that those who require care from others should be protected from harm from their own actions (as well as from harm from other people). Ibid p22-23. viii 'With respect to his own feelings and circumstances, the most ordinary man or woman has means of knowledge immeasurably surpassing those that can be possessed by any one else'. (Mill, p137) 2 Box 2 Case: Hospital birth refusal. Jenny is pregnant and expecting her first child soon. Her obstetrician expresses concern that because of the position of the baby (transverse lie) a caesarean would often be required and a home birth would be very high risk for both Jenny and her baby. However, Jenny is clear that she does not wish to have a caesarean section in any circumstances. Jenny explains that she has been part of an online forum, and she strongly wishes to have a natural vaginal home birth. She has plans to 'free birth' and deliver in her home (which is a long way from the hospital) without any involvement of health professionals. i i One complication of this case is that Jenny's decision will affect both her and her child. For the sake of this paper, I wish to set that aside. My focus here is on whether the reasons *for Jenny's sake* alone are sufficient (or indeed provide any reason) to justify compulsion. HArmFul deCisions To help identify the relevance of these ethical arguments it will be useful to have some practical examples. I have mentioned already the case of James and his vaccination refusal. Box 2 and box 3 illustrate two more. 
These three cases (boxes 1-3) involve adults making decisions in relation to their own life. For the sake of argument, I will assume that at the time of the decisions, each of these patients had capacity-that is to say that they had no disorders affecting their thinking or ability to reason. If formally assessed, it would be clear in each case that they were able to understand and retain the relevant information provided to them, weigh the reasons and communicate their choice. 10 In these cases, many might be inclined to question the rationality of the decisions made. The risks that James, Jenny and John are taking with their own health appear considerable and the reasons that they cite would appear (to most people) to be insufficient. Health professionals might try hard to change the patients' mind. They might even seek a formal psychiatric evaluation. Nevertheless, it is likely that the decisions would ultimately be respected. ix After all, that is what respect for the harm principle and for patient autonomy is thought to require. By way of contrast, we could imagine the following cases (boxes 4-6). In these versions of the cases it seems clear that the third party decision makers would not and should not be permitted to make these harmful choices. x The mother's refusal of a vaccine for James should be overruled. The judge's decision ix The General Medical Council in the UK states emphatically that doctors 'must respect a competent patient's decision to refuse an investigation or treatment, even if you think their decision is wrong or irrational'. (GMC, p24) 11 x The legal status of these three decision makers is different, depending on the jurisdiction. In all three cases, however, even if they had legal authority the surrogates would be ethically and legally required to make decisions in the best interests of the patient. should be urgently appealed. Someone else present when John collapses should call the ambulance against Michael's wishes. It is clear in the third-party versions of these cases that even if these decision-makers are autonomous adults, their decisions will cause harm to other people, and on that basis, should be overruled. But is there such a sharp divide between the first-person and third-person versions of these cases? xi idenTiTy And pATernAlism In his book, Reasons and Persons, the Oxford philosopher Parfit famously defended what he called a 'reductionist view' about personal identity. (Parfit, 4 Parfit argued that the continued existence of an individual over time can be reduced to certain physical or psychological continuities. On this view, we can identify whether and to what extent someone at time point t1 is physically and/or psychologically continuous and connected with a person at a later time point t2. For example, we can ask 'do they share the same body, the same memories, the same patterns of thought and character traits'? According to Parfit, the answers to these questions will tell us how the earlier person is related to the later person, and what is more will tell us everything that matters. xii Although we might be tempted to ask 'but is it the same person at t1 and at t2?', according to Parfit, this question is sometimes empty. (Parfit, 4 At least in some special cases, there is no separate answer to this question. Once we have identified the relevant connections there is nothing additional to meaningfully say. Should we be reductionists about personal identity? 
Parfit's argument for reductionism is complex, 12 and based on thought xi Other philosophers who have defended forms of paternalism have also suggested that the strong asymmetry implied by the harm principle may not be justified, though for different reasons. Eg. 36 xii According to Parfit, what matters morally is what he calls 'Relation R', and he regarded Relation R as being a function of 'psychological connectedness and/or continuity with the right kind of cause'. (Parfit,p262) 4 In Parfit's view, this included memories, but also beliefs, desires, intentions and character traits. Reductionist paternalism as described in this paper is not dependent on a particular theory of Relation R. Box 3 Case: advance resuscitation refusal. John is 50 years old and is otherwise healthy. In his 20s, John became briefly infatuated with a novel that involved a main character who made decisions based on the rolling of a dice. 27 i At the time John completed an advance directive on the basis of dice rolls. The advance directive indicated that if in the future he were to have a cardiac arrest he would not wish to be resuscitated. John registered this advance decision with his doctor at the time, but has not discussed it since. He stopped rolling dice for decision-making a long time ago, but has never revisited his advance directive. John collapses suddenly at a party. i There are a number of people who have been inspired by Rhinehart's novel to make risky decisions on the basis of dice rolling. 35 Box 4 Case: Third party vaccination refusal As before, James is an adult with underlying health problems. However, in this version of the case, James has a long-standing intellectual disability and is non-verbal. James' mother and longterm carer, Mary, expresses a strong desire that James not have a vaccine on the basis of her belief that these vaccines contain microchips that would connect James to a 5G network. Box 5 Case: Third party hospital birth refusal Jenny is pregnant and expecting her first child soon. Jenny has severe agoraphobia and has not left her home in 3 years. She has been assessed to lack capacity to make decisions about place of childbirth and the case has been referred to court. The judge, Michelle, has been taking part in online forums about natural childbirth, and on that basis decides for Jenny to have a free birth. Box 6 Case: Third party resuscitation refusal John is 50 years old and is otherwise healthy. His husband, Michael has recently become infatuated with a novel about dice rolling for decisions. When John collapses suddenly at a party, Michael rolls a dice and based on the result asks others present not to call an ambulance. Feature article experiments involving divided brains and teletransportation. It is outside the scope of this paper to fully outline these arguments or to defend them in detail. However, it may be helpful to set out a brief intuitive case in favour. When someone has a profound brain injury and permanently loses the capacity for consciousness, family members often report that the person who they were has 'gone'. 13 There is a strong sense in such cases that even if the individual's body remains alive, that the loss of psychological capacities is the end of their existence. In other cases, brain disorders can lead to profound changes in behaviour and personality. For example, in 2003, a 40-year-old school teacher was found to have an egg-sized tumour in his right frontal lobe. 
14 The teacher (having never previously behaved in such a way) had developed progressive uncontrollable sexual urges including paedophilia over a period of 3 years, and was diagnosed with a brain tumour only the day before planned prison sentencing. His sexual urges abated after resection of the tumour. What should we say about such a case? It seems highly plausible that the individual who made sexual advances to his prepubescent daughter was different in an important way from his earlier (and then postresection) self. That seems highly relevant to an understanding of whether we should hold the teacher responsible for the past behaviour. In these two cases, there might be disagreement about whether the person still exists after the brain injury, whether it is the same person with or without the brain tumour. But one thing that we should agree on is that the personality loss or change matters profoundly. The reductionist view about personal identity has a number of striking implications xiii . One is that rather than thinking of oneself as a single self, existing from birth to death, it may be useful and natural to conceive of having 'successive selves'. (Parfit,p305,19) 4 Literature and colloquial language sometimes refers to 'an earlier self ' or 'a later self '. In usual circumstances, there will not be clear boundaries between these, but they are nevertheless distinct in important ways. A second implication is that our concern about our own future (our egoistic concern) may not be binary (all or nothing)-instead it is a matter of degree. xiv So, the person A at time point t1 can be closer to the later person B at t2, or further away-depending on the extent of psychological connectedness. The relevant question to ask is-'how closely is A related to B'?, rather than 'Is A identical to B'? There are also a number of moral implications of this view about personal identity. One is that the boundary between the self and others is less distinct and less important. (Parfit, p338) 4 Conventionally, decisions that affect only ourselves are taken to be outside the scope of morality. (Parfit, p319) 4 It might be unwise, or even irrational to make a decision that will cause future harm to ourself, but (on a standard liberal view) it is not a question of morality. (Feinberg,56) 15 Parfit rejected that. He claimed that because the future self is relevantly like a different person, we should think of decisions that affect them in the same way that we think about decisions that will affect a different person. This gives rise to the Parfitian claim about what we morally ought to do xiii Tim Campbell has argued that the interesting normative implications are a result of the combination of the Reductionist view and the view that what matters morally is psychological continuity. Campbell refers to this as Psychological Reductionis. 37 xiv McMahan distinguishes between personal identity (which is necessarily all or nothing), and the rational basis for egoistic concern, which he calls prudential unity (which may come in degrees). It is partly for this reason that McMahan and Parfit hold that identity is not what matters for egoistic concern. 
(McMahan, p39-43) 30

Reductionist Identity Moral Claim: "If we now care little about ourselves in the further future, our future selves are like future generations…Like future generations, future selves have no vote, so their interests need to be specially protected…We ought not to do to our future selves what it would be wrong to do to other people." (Parfit) 4

In a very short section of his book, Parfit explicitly extended this to defend paternalism. He claimed that coercion or infringement of someone's autonomy could be justified to prevent the individual from causing great harm to himself for no good reason. While we cannot justify restricting someone's personal freedom on the grounds that they are acting irrationally, this is justified if they are acting wrongly. Parfit claimed that individual autonomy does not outweigh such moral concerns: 'Autonomy does not include the right to impose on oneself, for no good reason, great harm'. (Parfit, p321) 4 He went on, restating the moral claim in terms of the obligations of other people to prevent imprudence:

Reductionist Identity Paternalism Claim: "We ought to prevent anyone from doing to his future self what it would be wrong to do to other people." (Parfit, p321) 4 xv

xv Parfit's claim might need to be modified, since it could be interpreted too broadly. I return to this in the section on practical implications.

Parfit briefly acknowledged two standard objections to paternalism: that it is good for people to be able to learn from their mistakes, and that generally the individual will be in a better position than others to know what is best for him or her. These objections (which we could call the consequentialist and epistemic objections) are closely related to the arguments given by Mill and noted above. But in Reasons and Persons, Parfit did not clarify whether he thought that these objections outweighed the arguments that he set out in favour of paternalism given a reductionist account of identity. He provided no clear answer to the cases of James, Jenny and John.

Identity and treatment refusal
One point to note about the consequentialist and epistemic objections to paternalism is that they may not always apply. For example, there is little personal learning possible from a fatal error. (That would appear to potentially apply to the treatment refusal cases outlined above.) There will also be cases where we can be confident that the individual is mistaken about their own interests. James, Jenny and John in the treatment refusal cases are making serious errors of judgement. If we respect their decisions, that is not because we think they might be correct, but rather because they have a right to act imprudently. Furthermore, when we compare the first-person with the third-person versions of the cases, it is clear that neither the consequentialist nor the epistemic objections would come close to justifying third-party harm. No matter how strongly we support their freedom to develop and have their own opinion, or their knowledge of the individuals concerned, we would not allow Mary, Michelle or Michael to harm others seriously as a consequence. Does the Reductionist Identity Paternalism claim apply to treatment refusal? There are two possibilities. One is that the claim applies equally to all instances of self-harm: to decisions that individuals make that affect them in the very near future, as much as to decisions that affect them in the distant future.
We could call this Time-neutral paternalism: Individuals should be prevented from doing to themselves (whether in the near or in the further future) what it would be wrong for them to do to others. However, as noted, Parfit's view was that what matters morally is a function of psychological continuity and connectedness over time. This might suggest an alternative version. Identity relative paternalism: Individuals should be prevented from doing to future selves (where there are weakened prudential unity relations between the current and future self) what it would be wrong for them to do to others. On this identity relative account, our response to cases may vary-depending on the relationship between the person making the decision and the later-self harmed by it. For example, in the three cases described at the start of the paper there appear to be different degrees of connection between the individuals making decisions, and their future selves harmed by their decision. In Vaccine refusal, there is likely a short period of time between when James t1 makes a decision and James t2 potentially comes to harm because of contracting severe COVID-19. This means that there are strong psychological connections between the different Jameses. In contrast, in Advance resuscitation refusal, many years have passed since John t1 made his somewhat rash advance treatment decision. Although John t2 has psychological connections with his previous self, we might suspect that they are somewhat weak. It was an earlier self (in his transient Dice Man phase) who made the advance decision to refuse treatment. John t2 is likely to have many different interests, tastes, preferences and priorities from John t1 . The things that were important to him then are likely to be much less so now. Hospital birth refusal lies somewhat in between. In this case, there is a short temporal distance between the pregnant Jenny t1 and the later Jenny t2 who would potentially suffer a catastrophic complication of childbirth. However, the things that the pregnant Jenny t1 may prioritise and value could be significantly different from those of her later self. That is because some life events can profoundly alter our perspective. Such events are sometimes described as 'transformative'. The philosopher Laurie Ann Paul provocatively imagined becoming a vampire. (Paul, 16 Overnight, someone's way of life, their viewpoint, their values and preferences would transform. Such an experience might radically undermine our ability to make informed choices (because of our difficulty in imagining what life would be like). But on the Reductionist Identity account, it might also suddenly and significantly weaken the psychological connections that are morally significant. '[W]hen there has been a significant change of character, or style of life, or of beliefs and ideals-we might say, 'It was not I who did that, but an earlier self '. (Parfit, p305) 4 . Paul cites becoming a parent as a paradigmatic example of a transformative experience. 17 In Jenny's case there might be the additional, even more profoundly transformative, experience of bereavement. A news article describing a real case of a free-birth choice that ended badly, cites a woman whose baby died following planned free-birth. The mother described vividly her subsequent guilt, and her conclusion in retrospect: 'I think I brainwashed myself with the internet'. 
18 Her description of a profound shift in perspective marries with the notion that the self who experiences the harm in such cases might be different, to an important degree, from the earlier self who made a harmful choice. xvi On an Identity Relative Paternalistic account, the reasons to be paternalistic would be stronger for Advance resuscitation refusal and Hospital birth refusal, than for Vaccine refusal. Since we would not permit harm to other people in the thirdparty variations of those cases, we potentially should not permit John and Jenny to refuse treatment. Or, at the very least, we should be more inclined to overrule or disallow their decisions. However, that would not apply to James' Vaccine refusal. In contrast, a time-neutral paternalistic account would treat these cases as symmetrical. Since we would not allow third parties to harm other people in these ways and for these reasons, we should allow neither James nor Jenny nor John to harm themselves in the ways that they intend. If we take a reductionist approach to identity, which version should we adopt, which is most plausible? There is some reason to think that Parfit would have supported Identity relative Paternalism. The reductionist identity morality and paternalism claims as articulated by Parfit relate to the harm that individuals do to their 'future selves'. (Parfit, p320-1) 4 In support of these claims he provided examples where the prudential unity relations are weakened. For example a boy starting to smoke and causing great suffering fifty years later. (Parfit, p319) 4 However, it is worth being clear whether and why the moral reasons not to harm oneself apply only to far future selves and not our more proximate selves. Parfit gave two alternative ways of expanding the scope of moral theory to include harm to our future selves. The first would be impersonal. The moral reason not to self-harm is because it results in reduced overall well-being or greater suffering. From an agent-neutral consequentialist perspective, the loss of wellbeing due to self-harm is the same as the loss of well-being incurred when a third party is harmed. xvii The second alternative is agent relative. We could expand our understanding of our special duties and obligations (to kin, to friends, to clients) to include a duty to the self. xviii If we expand the scope of morality in these ways, that would provide a basis for the reductionist identity moral claim and the corresponding paternalism claim. Yet on either basis, that would potentially apply equally to the proximate future self and the distant future self. From an impartial consequentialist perspective it would be just as harmful for John to make an unwise advance decision that shortens his life if the harm accrues shortly after his advance directive or many years later. xix Likewise, it is xvi Although I cite here Paul's description of transformative experience, Identity Relative Paternalism is not dependent on a particular account or definition of such experiences. Experiences may transform individuals to greater or lesser extents, and correspondingly may weaken to a greater or lesser degree the psychological connections between earlier and later selves. xvii This will not always hold. In some cases, when a third party is harmed, both they and those around them may experience more distress than if the individual had harmed himself. 
not clear why our agent-relative duty to the self applies to our far future and not our near future self. If the moral reasons not to harm do not change over time, or with weakening of psychological connections, that would appear to support Time-neutral Paternalism. But it would be worthwhile returning to why the Reductionist Identity account provides support for paternalism in the first place. When we recognise that what matters to us (in terms of future selves) is a matter of degree, that may change what we have reason to care about in an egoistic way. We may then come to care less about harms that occur to ourselves in the far future. It would not be irrational to make imprudent decisions if the harms will occur at a later time when the psychological connections to our current self will be relatively weak. (Parfit) 4 The Reductionist Identity account weakens the prudential reasons that we have to avoid harms to future selves. But it does not, itself, generate a corresponding moral reason to intervene. Where, then, does the case for paternalism come in? As argued, the moral reasons not to harm are not identity relative; they are time-neutral. What the Reductionist Identity account does is not to create a moral reason to avoid harm to a future self; rather, it potentially unmasks those moral reasons. Figure 1 illustrates this. As the figure indicates, for proximate self-harming there are both prudential and moral reasons not to harm the near-future self. For far-future self-harming, the prudential reasons (currently) may be relatively weaker insofar as the psychological connections and continuity are diminished, while the moral reasons remain the same. On this model, it seems that the reductionist identity moral claim should be time neutral: we ought not to do to our future selves (whether or not we are closely psychologically connected to those future selves) what it would be wrong to do to other people. But there is a further question about the Reductionist Identity Paternalism claim. That is because, for proximate future selves, there is a potential conflict between the prudential reasons not to harm (as perceived by the individual) and the moral reasons. We then need to consider the difficult question of how to balance prudential and moral considerations.

xviii Mill recognised the importance of self-regarding virtues. (Mill, p136) 2 However, he did not think that they could generate a duty as they were not enforceable. (Mill, Ch5 para 14) 19 Joseph Kranak has argued that duties to the self should be understood as duties to future selves, drawing on the reductionist idea (which he refers to as a metaphysical fiction) that the self is not unified over time. 38

xix In a different way, a later harm may be less harmful, since it will potentially result in a shorter period of reduced or foregone well-being.

Paternalism and the duality of practical reason
Sometimes we face choices between what we believe would be best for us and what would be impartially best. For example, it may be that we could help someone else, but only at a significant personal cost. Or it may be that we could take a course of action that would be better for ourselves, but at the cost of failing to do what we morally (impartially) ought to do. The moral philosopher Sidgwick regarded the potential conflict between these two different types of reasons as 'the profoundest problem of ethics'. (Sidgwick, p386) 20 The problem, as Sidgwick saw it, was that such reasons are not straightforwardly comparable.
It is not clear how much sacrifice of personal well-being we are required to make for the sake of impartial beneficence. There is no external viewpoint that would allow us to answer such a question. (Parfit) 21 According to Sidgwick, if we have a choice between what would be impartially best and what would be best for ourselves, it would be rational to take either choice; this is what he referred to as the 'dualism of practical reason'. xx The dualism might not be thought to apply to paternalism cases like the first-party treatment refusal cases cited above. In these cases, the prudential reasons to avoid harm and the moral reasons actually coincide. Indeed, the reason that we might be tempted to be paternalistic is because of concern for the individual's well-being. So there is no conflict between incompatible reasons. However, from the point of view of James and Jenny and John, the prudential reasons do diverge. They believe that the decisions they are making would be best for their current and future selves. They believe that they would be harmed by accepting the vaccine, transfusion, or hospital birth. They appear to be mistaken, but from their perspective that is not the case. (Indeed, they are not likely even to recognise this conflict, since they would regard the choices they are making as promoting the well-being of their future selves.) The question for paternalism is whether health professionals or societies are justified in overruling an individual's personal judgement about what would be best for himself on the basis of a reliable concern that this would in fact cause the future person great harm. This does seem to potentially be a conflict between a type of prudential reason and a moral one. (A familiar way in medical ethics of characterising this conflict would be to see it as a clash between 'autonomy' and 'beneficence'.) One response, drawing on the dualism of practical reason, would be to say that there is no way to weigh up or arbitrate between these two types of reason. Sidgwick, himself, could identify no way of balancing these reasons. In that case, faced with any degree of conflict between prudential and moral (autonomy and beneficence) concerns, we may decide to give primacy to autonomy and reject paternalism. Indeed, the conventional response to treatment refusal cases takes exactly that path. No matter how great the well-being cost, or how irrational the reason, so long as the individual has the capacity to decide, they should be permitted to refuse treatment. Yet, many philosophers who have followed Sidgwick have rejected his pessimistic conclusion that no answer could be found that would balance the two types of reason. xx Crisp notes that, strictly speaking, Sidgwick did not regard a conflict between egoism and utilitarianism as yielding a sufficiency of reasons, but rather as leading to practical chaos. (Crisp, p230) 39 Figure 1 The strength of reasons to avoid harm and the time point of harm occurring. Prudential reasons are "reason(s) from the prudential point of view". 28 The moral reasons to avoid harm do not change over time, but the prudential reasons may diminish (where prudential unity relations are weaker). The individual does not necessarily regard their actions as harmful, and may regard it as in their self-interest to, e.g., avoid a transfusion or give birth away from a hospital. However, those self-interested (prudential) reasons potentially attenuate in strength the further in the future the self that would be impacted by the decision lies.
Parfit's own discussion of Sidgwick emphasised that he thought that Sidgwick's account depended on the rational significance of personal identity. Given the unity of each person's life, we each have strong reasons, Sidgwick claims, to care about our own well-being in our life as a whole. And given the depth of the distinction between different people…one person's loss of happiness cannot be compensated by gains to the happiness of others. [32,133] But this was, according to Parfit, to overstate the importance of personal identity. (Parfit,p136) 21 On the reductionist identity account, our reasons to care about our future are based not on the fact that such a future is 'ours' -rather on the basis of the psychological relations between our current and future selves. Moreover, Parfit contended that we can have some similar partial reasons to care for the well-being of other individuals connected to us (friends and relations) and separate impartial reasons to care about everyone's well-being. Parfit's own preferred way of addressing the dualism was through what he called the wide value-based objective view and the notion that such reasons are comparable, although only imprecisely. Wide value-based objective (WVO) view If one of two possible acts would make things go impartially better, while the other act would make things go better from a partial perspective (for ourselves or someone close to us), we could have sufficient reason to act in either way. (Parfit,p137) 21 Drawing on the WVO view, Parfit concluded that strong impartial considerations could sometimes outweigh weak prudential ones. We would have much stronger reasons to save many strangers from death or agony than to save ourselves from some minor harm. (Parfit,p137) 21 For a practical example, on one plausible interpretation of this view, a passing stranger would have stronger reason to save a child drowning in a pond, than to save his expensive suit. xxi Parfit alluded to the relevance of a Reductionist Identity view for the dualism and as support for the WVO view. However, it also seems that we could draw on this WVO view in thinking about paternalism. The WVO is about moral as well as rational permissibility. One problem with time-neutral paternalism is that it appears to give no weight to the prudential reasons for acting-it takes only an impartial perspective. In contrast, identity relative paternalism would defer to the individual's judgement for harms that are proximate, but potentially give greater weight to impartial moral considerations as the psychological connections between current and future-self diminish. This would potentially be supported by the idea, on the WVO, that strong impartial reasons could outweigh weaker prudential reasons. That suggests that paternalism would be most justified in cases where the harm will be accrued by a future self relatively psychologically distant from the current person. xxi Parfit did not (to my knowledge) discuss Peter Singer's famous pond example, but it seems plausible to interpret his view in this way. prACTiCAl impliCATions oF idenTiTy relATive pATernAlism I have defended Identity relative paternalism. What would such a view mean for the harm principle and for refusal of medical treatment? According to the view I have described, if we would not allow an individual to refuse treatment for a third party (eg, where they are a surrogate decision maker), we should potentially disallow refusal of treatment where the harm will be accrued by a future self. 
How strong a reason there is to act paternalistically will depend on the relative strength or weakness of prudential unity relations. The sharp boundary between harms to self and harms to third parties would be dissolved. expiry of advance directives Such a view could lead to questioning of advance directives written a very long time prior to their application, as occurred in advance treatment refusal. In fact, this already appears to be supported to some degree in practice. 22 Clinicians are asked to consider whether there is reason to think that the individual might have changed their mind since completing their advance directive. xxii Patients are encouraged to review their advance directives periodically (for example every 2 years). 23 However, we could take that further. Advance decisions that are older than a certain period and have not been formally reviewed could lose their legally binding status. xxiii They could still be taken into account, but their status would change. They would then simply indicate the views of the patient at an earlier time point. They might be regarded in the same way as the views of next of kin or family members are for patients without capacity-relevant but not necessarily binding. Where it is overall in a patient's best interests to treat contrary to a much earlier advance directive, that could be authorised-in just the same way that an authorised surrogate decision-maker can be overruled if making decisions contrary to a patient's best interests. It might be thought that this reductionist argument (in favour of potentially ignoring advance refusals of treatment with the passage of time) would apply a fortiori to ignoring advance refusals of treatment in cases of severe brain injury or dementia. xxiv That might mean that patients are unable to write binding advance directives that apply to their future self with dementia. However, while such cases involve significant psychological discontinuity, they are complicated by changes in the capacity of the individual. It certainly appears that the prudential unity relations are significantly weakened. However, it is also extremely difficult or impossible to know what the wishes or values or views of the later self would be. One approach would be to give priority to the wishes of the earlier self (because of their greater autonomy or moral status). Another approach would be to consider the views of the earlier self as akin to the views of a close family member. That might plausibly lead to a presumption in favour of following those wishes, but would also xxii For example, the Code of practice relating to the UK Mental Capacity act is clear that 'If the person's current circumstances are significantly different from those when the decision was made, the advance decision may not be applicable'. MCA CoP 9.51 40 (It is worth noting that rejecting an advance directive because someone appears to have changed their mind, is different from rejecting an advance directive because the self to whom it applies is psychologically distant from the self who wrote it. Both might be justified, but for different reasons.) xxiii Some jurisdictions (eg, Oregon) allow patients to indicate an expiry date for Advance directives. However, I am not aware of any that currently apply an automatic expiry. xxiv See for example Dresser. 
allow overriding a prior advance directive that would be clearly harmful to the later self (for example, refusal of pain relief or palliative care, or a demand for burdensome treatment despite little/no prospect of benefit). Compelling treatment What of cases like Home birth refusal, where harms will potentially occur in the near future, but we may have reason to think that psychological connections will be weakened? Such cases are more difficult. That is partly because of the challenge of prediction. Not all individuals change in their outlook and perspectives when they become parents, or when they are bereaved, or to the same extent. There may be considerable uncertainty about whether sufficient weakening of prudential unity will occur to warrant paternalistic intervention. There is an important question about whether we should assume prudential unity and give priority to the wishes of the current individual, or assume prudential disunity and prioritise preventing harm to the later self. There is a further complication. Let us assume Jenny t2 is sufficiently different from her earlier self to warrant treating this as a 'harm to others' case. It does not follow that it would be justified to compel her to have medical treatment (e.g., a caesarean section). That is because we would not necessarily inflict certain forms of treatment on individuals even to prevent harm to third parties. Consider the example in Box 7. We would not compel Cain to donate part of his liver to Abel, even though it would (in the example) prevent severe harm to another. This applies to significant surgical intervention (donating part of a liver, or a kidney). It also arguably would apply to blood transfusion. Someone ought to donate blood to prevent serious harm to a third party. But we would not force them to do so. xxv This suggests that we need to modify our principle. Identity relative paternalistic intervention: Individuals should be prevented from doing to future selves (where there are weakened prudential unity relations between the current and future self) what it would be justified to prevent them from doing to others. On the basis of the identity relative paternalistic intervention principle, it would not be justified to perform a caesarean section on Jenny against her wishes, since we would not justifiably perform major surgery on one person for the sake of another individual. xxvi However, some less intrusive steps might be permitted by this modified intervention principle. For example, some states mandate vaccination. 24 That is typically justified on the basis of prevention of harm to third parties. Vaccine mandates are controversial. Yet, if they can be justified on that basis, it could also be possible to vaccinate paternalistically where the harm will accrue to a future self (with weakened prudential unity relations). xxvii xxv Even in an emergency situation, with a national shortage of blood donors, we would not usually compel people to donate blood. xxvi In this example, intervention might prevent harm to two other individuals: Jenny's future self and her future child. 25 I have hitherto set aside taking into account the interests of the future child. However, if that were included, it would still not necessarily justify intervention. For example, a single blood donation is often separated into different components (red blood cells/plasma/platelets) and can be used to help more than one person. However, we do not ordinarily think that it would be justified to compel someone to give blood even to prevent harm to more than one individual. Correspondingly, it would not be justified to compel a person to undergo significant surgery even to prevent harm to more than one other person.
That would not apply to James' case (since we might expect the greatest risk of COVID-19 to be in the short term), but it could apply in other situations where the illness prevented is in the further future (for example, with the human papillomavirus vaccine to prevent future cervical cancer). Here is another possibility: I have focused on patient refusal of treatment. But we might imagine another case where a patient requests treatment that would cause harm to a third party. It would be justified for doctors to refuse to perform surgery or to provide a treatment that would harm a third party. Correspondingly, they might also decline to provide a treatment that would harm a future self (even if there is not separate harm to a third party). Acutely life-threatening choices In Vaccination refusal, James is potentially at risk of dying if he contracts severe COVID-19. Although it would plausibly be in James' best interests to have a vaccination contrary to his expressed wishes, the future James harmed by intervention (i.e., who would die) is psychologically close to the current James. James t2 is not akin to another person. Mill's harm principle would apply to a case like this. There is a potentially significant difference between acutely life-threatening harms and other harms. For example, one possibility is that James would not die even if he developed severe COVID-19. But he might survive with other serious complications (for example, he might have a cardiac arrest and develop hypoxic brain injury). If James were to survive, there could then be a future self who would be sufficiently psychologically distant from the current James, and who would have been harmed. xxviii This suggests a potential paradox: Identity-relative paternalism might be more permissive of acutely life-threatening choices than of choices that are not life-threatening but would lead to long-term survival in a harmed state. xxix While this conclusion may be surprising, it is not without precedent. For similar reasons, concern about harm to the future child can provide a stronger reason to intervene with maternal choices that lead to survival of an impaired fetus than with maternal choices that might lead to death of the fetus. 25 This could mean that although it permits paternalistic intervention, Identity Relative Paternalism would not support prohibition of assisted suicide, even where the doctor has reason to believe that the individual patient's future life would be worth living. xxvii One example of a vaccine where refusal will potentially cause harm to a future self (but not necessarily harm to third parties) is tetanus, since this is usually not transmissible to others, and herd immunity does not apply. xxviii Future James might have weakened psychological connections with the current James either because the experience of serious illness and subsequent disability is transformative, or simply because he will live for long enough that a sufficiently distinct future self will emerge. xxix This would hold even if the impairment or illness were less harmful than dying.
Box 7 Case: liver donation refusal Abel has severe liver failure and is listed for transplantation. However, he has a relatively rare tissue type and he lives in a country where there are relatively few deceased donor livers. It seems likely that he will die waiting for a transplant. Abel's brother Cain is the only closely matching family member who would be suitable for a living partial liver donation. This donation would have a high chance of helping Abel and relatively low risk to Cain. However, Cain declines to donate. This paradox is related to a familiar conundrum in relation to reproduction: the so-called 'Asymmetry'. 26 xxx In this case, there are identity relative paternalistic reasons to intervene where there will be a future (psychologically more distant) individual who is harmed. However, those reasons do not apply in situations where (absent intervention) the future individual will not exist. One possible response to this would be to claim that in cases like that of James (where his vaccine refusal is life-threatening), doctors/courts would be justified in compelling vaccination, since this will lead to (or increase the chance of) existence for a future James t10 who will then be psychologically distant from James t1. However, if we do so, that is not to prevent harm to others. Future James t10 may benefit from our intervention, but he would not have been harmed had we allowed James t1 to refuse the vaccine, since this future James t10 would not have existed. However, these sorts of concerns also apply to third-party decisions made for incompetent patients. In that setting, they are not taken to mean that we must avoid decisions for patients who are unable to decide for themselves, rather that such decisions should be taken with great care. xxxi What is more, these concerns about fallibility and abuse also apply to the decisions that individuals make about their future selves. Individuals may fail to take into account the interests of their future selves, they may be mistaken about what those future selves would care about, or they may fail to give sufficient weight to those future interests. Disregard for our own future well-being could be regarded as a form of elder abuse or even a type of discrimination against a class of individuals who are unable to protect their own interests, in a similar way to unfair treatment of future generations. The epistemic and consequentialist arguments against paternalism do not succeed in establishing a sharp difference between future-self harm and other-harm. Rejection of reductionist identity Others will reject the above arguments because they are sceptical of the reductionist claim that our future selves are relevantly like other people. For example, if someone believes in a Cartesian ego, or a soul, then it is clear that there is a binary answer to questions of identity and a bright line between the self and others. Still others may reject the Parfitian view because they regard some of its radical implications as a reductio ad absurdum. Parfit himself admitted that his view was revisionary. He noted that it yielded some implications that were contrary to conventional views and potentially controversial. The notion that paternalism is more easily justified is one such implication.
However, xxx There can be moral reasons not to bring into existence an individual who will have a life not worth living (it would be potentially morally wrong and harmful to do so), though there are not moral reasons to bring someone into existence who will have a life worth living (and it would not be morally wrong or harmful to fail to do this). xxxi As noted earlier, such concerns are not sufficient reason to permit third party harm. it is worthwhile also noting that the alternative view is also counterintuitive and unattractive. The three examples given in this paper of refusal of medical treatment might be uncommon or unusual, yet in some situations patients do make decisions that are profoundly unwise and risk great harm to their future selves. Health professionals in such circumstances often feel deeply conflicted-perhaps precisely because they recognise that such choices are morally wrong even if they are decisions (as things currently stand) that the individual patient has a legal right to make. vagueness and uncertainty I have suggested that there is a difference between decisions that harm a near-future self, and those that harm a far-future self. However, drawing this distinction might be problematic. As explicitly endorsed by the reductionist identity account, there is no clear boundary between near future and far future selves. Rather, there is a continuum characterised by greater connectedness at one end, and lesser connectedness at the other. Moreover, there is likely to be uncertainty about how much personal change an individual will undergo over time. xxxii How will we know when to intervene? This type of concern is a perennial problem in practical ethics. Ethical considerations or reasons often exist in a spectrum, and boundaries are frequently vague. Predicting the impact of decisions on future individuals can be challenging and uncertain. However, in one way, this is a virtue, not a weakness of the reductionist paternalism account. Even on the conventional Millian account, there is a need to weigh up the degree of harm to others that would be caused, and the other countervailing reasons not to intervene. The reductionist account indicates another important consideration to be weighed. non-identity Finally, an additional complication is that certain patient choices may not merely cause a future self to have medical complications and reduced well-being. The choices may change in a fundamental way the nature of the future self who experiences them. For example, if future Jenny t2 (following a complicated home birth) has been 'transformed' by the experience of perinatal loss, there might be a question about whether she has been harmed in a counterfactual sense. After all, if the current Jenny t1 had given birth in hospital, that would have given rise to a different future self. This raises the possibility that considerations of future selves might give rise to complicated new forms of Parfit's non-identity problem. 4 As Parfit famously noted, in special cases where our decisions that would affect which future individuals would be born, we sometimes cannot say that a specific future individual is worse off, since the alternative is that they would not exist. However, decisions that affect our future selves are importantly different from those that affect which future individuals exist. On the reductionist account, our future selves may be more like other people than we conventionally think. 
Nevertheless (absent exceptional circumstances), there are physical and psychological continuities between the current and future self. These mean that the current self can have an interest in the well-being of their future self. It also means that the future self can coherently claim that they have been counterfactually harmed, even where their life would have gone radically differently had their younger self made a different choice. It would be completely coherent for Jenny t2 to lament the choice of Jenny t1, though it would not make sense in a typical non-identity case for a child (who has a life worth living) to lament their parents' choice to conceive them rather than a different (healthier) child. xxxiii xxxii For example, will a future James retain his anti-vaccine beliefs, or will his mind have changed? How much will Jenny's views, values, and traits change? Conclusions In this paper, I have argued that if we adopt a reductionist account of personal identity, the bright line between harm to self and harm to others becomes blurred and the Millian harm principle fails to generate a clear prohibition against intervening to prevent future harm. I have described two different forms of paternalism potentially arising from a reductionist view of identity and suggested that in the face of conflicting prudential and moral reasons, a wide value-based objective view supports a form of Identity Relative Paternalism. I have defended a new identity relative paternalistic intervention principle. The point is not that identity relative paternalism necessarily or always justifies paternalistic interventions in such cases, rather that there is a stronger ethical case for doing so than is often recognised. Harm to self can be sufficient to warrant state intervention, where that harm is significant, and that future self is, to a relevant degree, like another person. Pace Mill, power (including medical power) can be rightfully exercised over competent adults, against their will, for their own benefit. The strong moral reasons to prevent harm to other people can also apply to our future selves. Correction notice Since this paper was first published, acknowledgements have been added.
14,515.2
2023-01-20T00:00:00.000
[ "Philosophy", "Law" ]
Maxillary Bone Regeneration Based on Nanoreservoirs Functionalized ε-Polycaprolactone Biomembranes in a Mouse Model of Jaw Bone Lesion Current approaches of regenerative therapies constitute strategies for bone tissue reparation and engineering, especially in the context of genetical diseases with skeletal defects. Bone regeneration using electrospun nanofibers' implant has the following objectives: bone neoformation induction with rapid healing, reduced postoperative complications, and improvement of bone tissue quality. In vivo implantation of polycaprolactone (PCL) biomembrane functionalized with BMP-2/Ibuprofen in mouse maxillary defects was followed by bone neoformation kinetics evaluation using microcomputed tomography. Wild-Type (WT) and Tabby (Ta) mice were used to compare effects on a normal phenotype and on a mutant model of ectodermal dysplasia (ED). After 21 days, no effect on bone neoformation was observed in Ta treated lesion (4% neoformation compared to 13% in the control lesion). Between the 21st and the 30th days, the use of biomembrane functionalized with BMP-2/Ibuprofen in maxillary bone lesions allowed a significant increase in bone neoformation peaks (resp., +8% in mutant Ta and +13% in WT). Histological analyses revealed a neoformed bone with regular trabecular structure, areas of mineralized bone inside the membrane, and an improved neovascularization in the treated lesion with bifunctionalized membrane. In conclusion, PCL functionalized biomembrane promoted bone neoformation, this effect being modulated by the Ta bone phenotype responsible for an alteration of bone response. Introduction Approaches of bioengineering and regenerative medicine aim to create different types of materials, implants, or scaffold mimicking structure of extracellular matrix, functionalized with bioactive molecules or living cells. The clinical purpose of these methods is the reparation or guided regeneration of damaged tissue, in our case, jaw bone affected by genetical diseases. These biomembranes or scaffolds constitute a support for osteoblastic adhesion and proliferation, but also microenvironment for stem cells' chemotactism and differentiation [1]. Different sources of living cells are described, as mesenchymal stem cells (MSCs), adipose-tissue derived stem cells, skin derived multipotent stem cells, or oral cavity MSCs, presenting compatible immunophenotype or morphology [2,3]. The main interest of the use of bone marrow derived stem cells is their osteogenic potential for neoangiogenesis. Several therapeutic applications are developed in the field of bone and cartilage defect treatments, based on the osteoinductive and osteoconductive properties of these materials but also on the intrinsic physiological regenerative properties of bone [3]. The functionalization of these matrices enhances the bone regenerative process. To functionalize at a nanoscale level is very convenient. It allows the concentration of many different functions in a small volume and presents the advantage of increasing the quality of targeting while controlling the cost and delivery kinetics of the active molecules [12]. Thus, the strategy of functionalization of nanofibers by nanoreservoirs of BMP-2 or BMP-7 showed a great efficiency for bone regeneration and increased the differentiation of MSCs (mesenchymal stem cells), accelerating the tissue regeneration in vivo [9,10,13,14]. These different nanofiber scaffolds with nanoreservoirs are efficient proregenerative biomimicking implants for bone regeneration. 
The next challenge of these smart active nanomaterials is to be able to promote normalization of implantation site. Indeed, some pathologies or treatments can modify drastically properties of implantation bone site and compromise the regeneration, for example, in contexts as aging or genetical and metabolic skeletal diseases, after tumors resection, severe traumas, and in rare diseases with bone hypotrophy or structural defects. The skeletal phenotype described in patients with ectodermal dysplasia and in the Tabby (Ta) mutant experimental mouse model is characterized by craniofacial dysmorphia, marked alveolar bone hypotrophy, bone structural defects leading to endosseous implants, and jaw bone grafts postoperative complications. The Ta mutant mouse corresponds to the experimental model of ectodermal dysplasia genodermatosis, with a satisfactory isomorphism, and presents a spontaneous mutation of Ta gene exon 1, the mouse homologous of EDA gene, mutated in humans affected by ectodermal dysplasia. Therefore, the Ta model was used to evaluate in vivo the bone response after microsurgical lesion in the context of ectodermal dysplasia (ED). The phenotypic spectrum of Ta model integrates craniofacial and postcranial bone morphological, structural, and metabolic anomalies [15]. For example, dysplastic zones in the tail vertebrae with histological and structural trabecular bone defects have been observed. Moreover, dental morphotypes with agenesis and morphological defects have been extensively characterized and mimic human phenotype [16]. In our study, only Ta males were used presenting a severe phenotype, in order to avoid any variability linked to genetic or hormonal status. Wild-Type (WT) mice were used as control group. Clinically, the management of maxillary bone defects represents a challenge with indications of extensive bone grafting [17][18][19]. Despite the fact that autogenous or allogenic bone grafting is considered as a gold standard, some complications were described, especially in the context of genetical diseases, leading to the development of bone tissue-engineering application [20]. The use of biomembrane with nanoreservoirs embedding different dimers like BMP-2 and Ibuprofen is a promising approach to compensate the bone defects linked to the EDA/Ta mutation, combining biomaterials, cells, and signaling molecules, essential in bone bioengineering, osteogenesis, and neoangiogenesis [3]. The role of Ibuprofen is to modulate inflammation in a context of NF-KB pathway dysfunction, this pathway being essential in the inflammation process regulation. On the other hand, BMP-2 promotes bone formation by stimulating osteoblasts differentiation, proliferation, and migration, allowing accelerated bone healing [21]. BMP family molecules are widely used in different homodimeric and heterodimeric associations for the management of bone fractures, skeletal defects, nonunions, or osteonecrosis [22]. The most studied BMP isoforms are BMP-2 and BMP-7, used as recombinant human BMP in the treatment of skeletal diseases, these isoforms playing a major role during bone embryogenesis and postnatal bone homeostasis and remodelling [23]. Nevertheless, clinical use of BMPs isoforms is still controversial, with a limited number of controlled comparative trials, which leads us to study in an experimental mouse model the biological effects of BMP-2 release on maxillary jaw bone neoformation. 
The aim of the study is to produce a proregenerative biomimicking implant carrying anti-inflammatory and osteoinductive properties in order to enhance maxillary bone regeneration in a model of ectodermal dysplasia, the Tabby mutant mouse model. Our team focuses on the kinetics of release of the molecules in vivo from PCL functionalized biomembranes, which is crucial for controlling the osteogenic differentiation of stem cells. 2.2. Biomembrane. PCL nanofibrous biomembranes were obtained by electrospinning and bifunctionalized using the nanoreservoir technology, producing BMP-2/Ibuprofen nanoreservoirs [11]. The PCL was dissolved in a mix of dichloromethane and dimethylformamide (DCM/DMF 50/50). Electrospinning allowed producing biomembranes of entangled polymer nanofibers. A 5 mL syringe ejected the solution through a high-voltage electric field (15 kV). The solvent evaporated and the PCL fibers formed were recovered at the collector (20 × 20 cm² aluminium foils). The 40 µm-thick PCL membranes were soaked in 70% ethanol and exposed to ultraviolet light for 30 min to be sterilized. The electrospun fibers were 544 ± 88 nm in diameter (mean over 50 fibers), as previously described [13]. Buildup of the Nanoreservoirs. For the biological activity experiments, (BMP-2/chitosan)3 and (Ibuprofen/chitosan)3 were built up on the PCL scaffold. The membrane was washed for 15 min in MES buffer (40 mM, pH 5.5) and then in chitosan (500 µg/mL) before immersing it again in MES buffer and then in the BMP-2 solution (200 ng/mL). Each immersion lasted 15 min. The cycle was repeated three times with BMP-2 and three times with Ibuprofen (50 µg/mL). The concentrations of the solutions were taken from the literature and a previous study [13]. Chitosan has a positive charge, with a pKa of 6.5. BMP-2 has a positive global charge in this experimental condition (MES, pH 5.5), with its isoelectric point of 8.5, while Ibuprofen has a negative global charge (isoelectric point of 4.91). But BMP-2 is an amphoteric protein with negatively charged extremities, allowing the layer-by-layer buildup. The objective was to obtain nanoreservoirs distributed randomly on the surface of the PCL nanofibers, as shown in a previous study [11]. Encapsulated in the chitosan nanoreservoirs, BMP-2 and Ibuprofen are protected and available for cell activity. Scanning Electron Microscopy (SEM). SEM allowed characterizing the morphological structure of the nanoreservoirs on the PCL biomembrane, as previously described [24], and the morphology of the osteoblasts on the scaffolds after 4 days of culture. The biomembrane was fixed and dehydrated in ethanol baths of increasing concentration (25%, 50%, 75%, 90%, and 100%), each for 10 min. It was placed on a specimen holder and fixed with carbon-conductive adhesive tape. Hexamethyldisilazane (HMDS) was deposited on the sample. The objective was to observe the nanofibrous substructure, the size and the porosity of the fibers, and the distribution and the size of the nanoreservoirs. Adsorption of Ibuprofen on PCL Membrane. To quantify the Ibuprofen attached to the biomaterials, we recovered the soaking solutions after each adsorption cycle. The optical density was measured at 200 and 350 nm. The amount of Ibuprofen was then determined using a standard curve (Supporting 1).
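The adsorption step above lends itself to a simple depletion calculation: the Ibuprofen lost from each recovered soaking solution is taken as the amount fixed on the membrane. The sketch below illustrates this idea under stated assumptions; the standard-curve slope and intercept, bath volume, membrane area, and optical density readings are hypothetical placeholders, not values from this study.

```python
# Hedged sketch: depletion-based estimate of Ibuprofen adsorbed per dipping cycle.
# Calibration constants, volumes, membrane area, and OD readings are illustrative only.

CAL_SLOPE = 0.045      # OD units per (ug/mL); hypothetical standard-curve slope
CAL_INTERCEPT = 0.002  # OD units; hypothetical intercept
BATH_VOLUME_ML = 5.0   # volume of each soaking solution (placeholder)
MEMBRANE_AREA_CM2 = 4.0

def od_to_concentration(od: float) -> float:
    """Invert the linear standard curve; returns concentration in ug/mL."""
    return max((od - CAL_INTERCEPT) / CAL_SLOPE, 0.0)

def adsorbed_per_cycle(od_before: float, od_after: float) -> float:
    """Mass of Ibuprofen (ug) lost from the bath, i.e. fixed on the membrane."""
    return (od_to_concentration(od_before) - od_to_concentration(od_after)) * BATH_VOLUME_ML

# Hypothetical OD readings (e.g. at 200 nm) before/after each of the three cycles
readings = [(2.27, 2.05), (2.26, 2.04), (2.25, 2.21)]
total_ug = sum(adsorbed_per_cycle(before, after) for before, after in readings)
print(f"Adsorbed: {total_ug:.1f} ug total, {total_ug / MEMBRANE_AREA_CM2:.2f} ug/cm^2")
```

In this toy run the third pair of readings deliberately differs little, mirroring the observation reported below that only a small additional amount is fixed in the third cycle.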
In Vitro Characterization. Human primary osteoblasts (Hob) (PromoCell GmbH, Heidelberg, Germany) were grown in a "Specific Medium" with "Supplement Mix" (PromoCell GmbH, Heidelberg, Germany). The cells were incubated at 37 °C in a humidified atmosphere of 5% CO2. When cells reached subconfluence, they were harvested with trypsin and subcultured on nonfunctionalized PCL or on Ibu, BMP-2, or BMP-2/Ibu functionalized PCL membranes in 24-well plates. The membranes were treated with 70% ethanol and sterilized by 30 min exposure to UV light before cell seeding. For this, the membrane was punched to the well size and locked in. Cell viability and proliferation were measured by the AlamarBlue test (Fisher Scientific, Illkirch, France). Before AlamarBlue analysis, samples were moved to a new well in order to measure only the metabolic activity of cells attached on the scaffold. The osteoblasts were also studied by immunofluorescence for the expression of osteopontin and BSPII after 14 days of culture. Briefly, osteoblasts were fixed with 4% paraformaldehyde (PFA) for 10 min at 4 °C, saturated with 0.1% Triton X-100 and 1% BSA for 1 h, and then rinsed three times with PBS. Primary antibodies were incubated overnight at 4 °C at 1/200: rabbit anti-BSPII (sc-73497, Santa Cruz Biotechnology, Clinisciences, Nanterre, France) and mouse anti-osteopontin (OPN, sc-10591, Santa Cruz Biotechnology, Clinisciences, Nanterre, France). After three washings with PBS, samples were incubated for 1 h with anti-rabbit Alexa 488 or anti-mouse Alexa 488 (Molecular Probes, Fisher Scientific, Illkirch, France), then with Alexa Fluor 594 Phalloidin (1/200, Molecular Probes, Fisher Scientific, Illkirch, France) for 10 min, and for 5 min with 200 nM 4′,6-diamidino-2-phenylindole (DAPI, Euromedex, Souffelweyersheim, France). The samples were observed under an epifluorescence microscope (Olympus DP73). In Vivo Microsurgical Protocol. The experimental protocol fulfilled the authorization of the "Ministère de l'Enseignement Supérieur et de la Recherche" under the agreement number 01716.02. The Ethics Committee of Strasbourg named "Comité Régional d'Ethique en Matière d'Expérimentation Animale de Strasbourg (CREMEAS)" specifically approved this study. Under general anesthesia, a maxillary bone lesion was created in the diastemal area with a dental bur (500 µm) after gingival incision (Supporting 2A, B). On one side, a bifunctionalized BMP-2/Ibuprofen scaffold or a functionalized Ibuprofen scaffold or a functionalized BMP-2 scaffold was implanted, while the other side served as a control with the same lesion but without scaffold or with a nonfunctionalized membrane (Supporting 2C). The mucosa was closed with biological glue (3M Vetbond Tissue Adhesive, Fisher Scientific, Illkirch, France) (Supporting 2D). In Vivo Microcomputed Tomography (Micro-CT) Analyses. To study the evolution of the bone response, we conducted a longitudinal postoperative follow-up using microcomputed tomography. The X-ray microtomography acquisitions were performed under general anesthesia after 7, 21, and 30 days. A spatial isotropic resolution of 50 µm was used for the acquisitions. Volumetric analyses of bone lesions followed the definition of a cubic region of interest (ROI) framing these lesions.
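As an illustration of the volumetric analysis just described, the minimal sketch below computes the fraction of mineralized voxels inside a cubic ROI framing the lesion. It assumes a reconstructed grayscale volume already loaded as a NumPy array; the ROI bounds and threshold are placeholders and do not come from this study.

```python
# Hedged sketch of an ROI-based bone volume fraction from a micro-CT stack.
# The grayscale threshold and ROI coordinates below are placeholders.
import numpy as np

def bone_volume_fraction(volume: np.ndarray,
                         roi_min: tuple,
                         roi_max: tuple,
                         threshold: float) -> float:
    """Fraction of voxels above `threshold` inside the cubic ROI."""
    z0, y0, x0 = roi_min
    z1, y1, x1 = roi_max
    roi = volume[z0:z1, y0:y1, x0:x1]
    bone = roi > threshold            # crude global threshold for mineralized tissue
    return bone.sum() / bone.size

# Toy example: a random volume stands in for a reconstructed scan
rng = np.random.default_rng(0)
scan = rng.normal(loc=100, scale=30, size=(120, 120, 120))
bvf = bone_volume_fraction(scan, (20, 20, 20), (100, 100, 100), threshold=150.0)
print(f"Bone volume fraction in ROI: {bvf:.1%}")
```

Tracking this fraction in the same ROI at 7, 21, and 30 days gives one simple way to express "bone neoformation" as a percentage over the follow-up.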
Histological Analyses. Osteoblasts cultured for 4 days on PCL biomembranes were fixed for 10 min with 4% paraformaldehyde and stained with hematoxylin-eosin. The histological analyses of neoformed bone structures in WT and Ta mutant mice were conducted at 30 days postoperative. Maxillaries were fixed for 24 h with 4% paraformaldehyde, decalcified in 15% EDTA at 37 °C for one week, and embedded in paraffin. Serial sections (10 µm) were stained with hematoxylin-eosin. Sections were observed on a Leica DM4000B microscope. Statistical Analyses. Statistical analyses were performed using Student's t-test. Statistical significance was evaluated by one-way ANOVA (SigmaStat, Jandel GmbH, Erkrath, Germany). All data were expressed as mean ± standard deviation (SD). p < 0.05 was considered as statistically significant. Characterization of the Scaffolds. The polycaprolactone fibers and nanoreservoirs were characterized by scanning electron microscopy (SEM) (Figure 1). Their distribution was random. The PCL scaffolds (Figure 1(a)) evidenced a nonwoven mesh-like structure with a large surface area per volume ratio. The electrospun fibers were 544 ± 88 nm in diameter, and the porosity, corresponding to inter-fiber spaces, ranged between 400 nm and 2 µm. The diameter of the developed PCL nanofibers falls within the range of the native collagen nanofiber diameters present in the extracellular matrix (ECM). The nanoreservoir technique was used to decorate the surface of the nanofibers with BMP-2/Ibuprofen. Figures 1(c) and 1(d) showed BMP-2 nanoreservoirs tightly grafted on the surface of the electrospun nanofibers. Ibuprofen was homogeneously distributed along the PCL fibers (Figures 1(b) and 1(d)). As previously described [13], the amount of BMP-2 incorporated into the nanoreservoirs was 0.73 µg/cm², measured by quartz crystal microbalance with dissipation monitoring (QCM-D). The recovery of the dipping solutions for Ibuprofen during the functionalization of the scaffold allowed measuring the fixed amount of Ibuprofen. The amount of Ibuprofen was constant for the first two cycles (Table 1). In the third cycle, only a small amount of Ibuprofen was fixed, which is why no further cycles were performed. The spectrophotometry did not show any passive release of Ibuprofen (not shown). 3.2. Biocompatibility of the Scaffolds. The nanofibrous structure enables cell migration and growth as well as nutrient and bioactive molecule diffusion. Biocompatibility evaluation of the PCL scaffolds was based on analyses of human primary osteoblast (Hob) metabolic activities using the AlamarBlue test (Figure 2). Cell morphology, adhesion on the PCL scaffolds, and expression of osteopontin and BSPII, which are noncollagenous bone matrix proteins, were evaluated (Figure 3). The cells were seeded on the scaffolds and then cultured for 21 days in the osteoblastic medium. The AlamarBlue reduction percentage, followed at 6, 24, and 48 h and at 7, 14, and 21 days of culture, confirmed the viability of the cells on both types of scaffolds (without functionalization or bifunctionalized with BMP-2/Ibu). The results showed that, after 6 h of culture, the osteoblasts had a higher metabolic activity on the uncoated scaffolds. This activity increased after 24 h and was identical on both types of scaffolds. This activity was constant up to 14 days and increased again after 21 days of culture. The metabolic activity of osteoblasts was significantly higher on bifunctionalized scaffolds than on control scaffolds. The bifunctionalized scaffolds were therefore not toxic for the osteoblasts and can be used in in vivo implantation experiments. After 4 days of culture, the Hob were observed by hematoxylin-eosin (HE) staining (Figure 3(a)) and by SEM (Figures 3(b) and 3(c)). The cellular morphology of the osteoblasts was identical after 4 days of culture on the nonfunctionalized membrane and on the bifunctionalized membrane. The cells were spread on the membrane surface and inside it.
The numerous cellular extensions infiltrated between the nanofibers, showing a satisfactory biocompatibility of the biomembrane (Figures 3(b) and 3(c)). After 14 days of culture, the Hob were tested for their expression of bone-specific proteins: bone sialoprotein 2 (BSPII) and osteopontin (OPN). The immunofluorescence images showed that osteogenesis occurred successfully on both types of scaffolds: PCL (Figures 3(d) and 3(f)) and PCL/BMP-2/Ibu (Figures 3(e) and 3(g)). However, the qualitative enhancement of protein expression by the bifunctionalization is significant in vitro, even after 14 days (Figures 3(e) and 3(g)). Effects of the Bifunctionalized Scaffold on the Bone Neoformation in WT and Ta Mice by Micro-CT. Effects of the BMP-2/Ibuprofen bifunctionalized PCL scaffold on maxillary bone regeneration were evaluated using micro-CT analyses. In the WT mice, no difference at day 21 was observed between the treated side (RS) and the control side (LS). A positive effect of the scaffold was observed in WT on the treated side (+13%) between day 21 and day 30 (Figure 4, WT RS), which was not observed on the control side (Figure 4, WT LS). An average bone neoformation of 14.4% at 30 days was observed for the control lesions, compared to 21% for the lesions treated with the BMP-2/Ibu scaffold (p < 0.05) (Figure 5). In the Ta mice, the treatment with the BMP-2/Ibu scaffold led to lower bone neoformation than in control lesions at day 21 (4% versus 13%). We observed a positive effect on the treated side between day 21 and day 30 (+8%), which was not observed on the control side (Figures 4 and 5). Histological Analyses of the Effects of Different Scaffolds on the Bone Neoformation in WT Mice. We first compared neoformed bone after PCL, PCL/BMP-2, and PCL/BMP-2/Ibu implantation for 30 days (Figure 6) on paraffin sections stained with hematoxylin-eosin, which allows visualization of the extracellular bone matrix and collagen type I. After 30 days, the gingiva was healed and bone regenerated on both sides of the lesion with the 3 different scaffolds (Figures 6(b)-6(d)). When the membrane was functionalized with BMP-2 or BMP-2/Ibu, we also observed bone regeneration inside the membrane (Figures 6(f) and 6(g), arrows). We did not observe any noticeable difference in bone regeneration between the BMP-2 and BMP-2/Ibu scaffolds. Histological Analyses of the Effects of the Bifunctionalized Scaffold on the Bone Neoformation in WT and Ta Mice. Histological analyses at 30 days postoperative (Figure 7) confirmed the micro-CT analyses and showed neoformed bone (NB) with regular structure at the level of lesions treated by the BMP-2/Ibu scaffold (Figures 7(d)-7(f), 7(j)-7(l)) in WT and Ta mice. Neovascularization was more important at the level of the lesion treated with the PCL/BMP-2/Ibu scaffold, especially for the Ta mice (Figure 7(j)).
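To make the side-to-side comparison above concrete, here is a minimal sketch of how per-animal neoformation percentages could be summarized and tested. The values are invented for illustration, and the paired Student's t-test is only one plausible reading of the statistical analysis, which the Methods describe simply as Student's t-test and one-way ANOVA.

```python
# Hedged sketch: comparing day-30 bone neoformation (%) between treated and
# control lesions of the same animals. Numbers are illustrative, not study data.
import numpy as np
from scipy import stats

treated = np.array([21.5, 19.8, 22.3, 20.1, 23.0, 19.9])   # BMP-2/Ibu side
control = np.array([14.0, 15.1, 13.8, 14.9, 15.2, 13.5])   # untreated side

print(f"treated: {treated.mean():.1f} +/- {treated.std(ddof=1):.1f} %")
print(f"control: {control.mean():.1f} +/- {control.std(ddof=1):.1f} %")

# Paired design: both sides are measured in the same mouse
t_stat, p_value = stats.ttest_rel(treated, control)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```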
Effects of the Functionalized Biomembrane in WT and Mutant Ta Mice. The polycaprolactone (PCL) membrane revealed osteoconductive effects, and BMP-2 stimulated bone production through its osteoinductive properties [21,25]. Functionalization with BMP-2 was already described in a previous study and approved by the American authorities (FDA). In our study, we adopted a system consisting in direct LbL-based nanoimmobilization of BMP-2, allowing protection of the growth factor and the use of lower concentrations (three adsorption steps with a 200 ng/mL solution) compared to the soaking approach. Ibuprofen appears to stimulate neovascularization according to other studies, this effect being based on an increased secretion of VEGF and endothelial cell proliferation [26]. The bioavailability of the Ibuprofen entrapped in the nanoreservoirs is improved compared to scaffolds electrospun with Ibuprofen in solution [27,28]. Moreover, the use of lower concentrations allows a reduction of cell toxicity and genotoxicity, the latter having been observed on mouse bone marrow cells in contact with Ibuprofen [29]. The maxillary bone regeneration based on nanoreservoir-functionalized PCL biomembranes showed promising results in WT mice; nevertheless, the use of Ibuprofen inhibited early bone neoformation in mutant Ta mice. In the Ta and WT mice, the increase in bone formation peaked between the 21st and the 30th days following the surgery (p < 0.05). This can be explained by the diffusion kinetics of the Ibuprofen entrapped in the nanoreservoirs. Despite the absence of a significant effect of the bifunctionalized membrane in the Ta mutant mice at 30 days, a more important stimulation of osteogenesis is observed between the 21st and 30th day in the treated lesion compared to the control lesion. Only BMP-2 and BMP-2/Ibuprofen membranes were used in the protocol; indeed, based on our previous experimental results [11,13] and literature data [30], we assumed the absence of positive biological effects on osteoblast proliferation of an Ibuprofen-functionalized membrane in the absence of an osteoinductive molecule such as BMP-2. The altered effects of the biomembrane in the Ta mice could potentially be linked to the bone metabolic and structural anomalies associated with the mutation [18,31]. These genetically determined bone abnormalities associated with the Ta mutation could not be integrally compensated by the biological effects of the bifunctionalized PCL biomembrane. We assume that the absence of early effects in the Ta model is linked to bone physiopathology and to a negative compensation of BMP-2 effects by the mutation. Potential Development of the Model. The main interest of this mouse model consists in the possibility of evaluating the bone response in the context of the EDA/Ta mutation. Besides the characterization of dental and skeletal phenotypes linked to HED [32,33], this mouse model allowed a dynamic approach of bone response kinetics, based mainly on in vivo micro-CT and histological techniques. More accurate micro-CT approaches, with higher isotropic resolution, will be developed, based on the use of synchrotron micro-CT techniques or nano-CT. This high-resolution micro-CT will allow deep ultrastructural phenotyping of the neoformed bone and of its differences between Wild-Type and different genetically modified mice. These micro-CT acquisitions will lead to tridimensional morphometric characterizations of the native and neoformed bone, with description of parameters like trabecular bone volume, trabecular number, thickness, or intertrabecular spaces. The soft tissue ingrowth and morphological modifications of the scaffold will also be studied by synchrotron micro-CT. The vascular ingrowth process, important for postoperative bone regeneration, will also be studied in this model based on K-edge subtraction micro-CT using synchrotron light [34]. The development of this model is essential both to understand the physiopathological mechanisms and in preclinical research applied to genetical rare diseases.
Furthermore, this mutant mouse model makes experimental approaches of bone grafting and osteointegration complication mechanisms in patients affected by HED possible. The surgical protocol applied in Ta mice allowed the exploration of altered jaw bone response and the potential osteogenic effects of PCL biomembranes. The characterization of bone response in Ta mice can be adapted to other mutant mice presenting skeletal abnormalities and described in the literature like the Lrp4 mutated sclerosteosis mouse model [35] or the mutant FKBP51 V55L for Paget's disease [36]. Indeed, applications of this microsurgical protocol to other mouse models of genetic diseases with skeletal defects will allow the in vivo study of the maxillary bone response in different pathological contexts. Beyond the maxillary location, it will be possible to analyze the bone response of Ta mice in other anatomical locations, as calvaria or long bones. Significative osteoinduction using BMP-2 was demonstrated in other animal models in calvaria like rabbit or mice [37,38]. Potential Development of the Biomembrane. The design of the scaffold, the components, and the functionalization with different signaling molecules can be modified [39] and adapted according to the pathological context, the genetical defect, or the anatomical site [29]. The membrane may be formed from different materials with specific properties of biocompatibility, cytotoxicity, resorbability, or osteogenic capacity. Different polymers are available, as PCL, PCL associated with other materials as PLA, or other electrospun substances like polystyrene [39,40]. The size and the morphology of the nanofibers are the main parameters that can be controlled, by modulating for example the flow rate or the polymer concentration [5]. The thickness, the microstructure, and the porosity of the scaffold are the other controlled parameters of the biomembrane. Interconnected porosity is a crucial factor to obtain sufficient neovascularization [41]. There are many perspectives and potential therapeutic clinical applications, like functionalization with mesenchymal stem cells or osteoblasts, use of different BMP isoforms homodimeric and heterodimeric associations [9,10], or different molecules like statins or hypoxia-mimetic agents [42]. Experimental use of these molecules was already reported, but with other types of scaffolds like hydrogels [43] or in gelatin nanofibrous scaffold [41]. It might be interesting to combine these substances with the functionalized scaffold by the nanoreservoirs technology. The quantity of nanoreservoirs can be modified by increasing the number of functionalization cycles and thus allowing a longer effect over time. Conclusion Biomembrane-based engineering appears as a promising approach allowing bone regeneration and opens the possibility of developing biomaterials functionalized with different molecules or stem cells. In this study, the association between Ibuprofen and BMP-2 on a PCL membrane makes it possible to have both osteoinductive and anti-inflammatory effects. In the WT mice, the bifunctionalized scaffold showed only a late biological effect, with bone neoformation being observed between day 21 and day 30, whereas in Ta mice, the bone neoformation is lower than control lesions at day 21 and then increased secondarily. We assume that the difference between Ta and WT is linked to bone metabolic alterations. 
The main research perspectives are to adapt biomembranes to the physiopathology of rare diseases like HED, skeletal dysplasia, or bone tumors and metastases. The combination with other materials, stem cells, and molecules may be beneficial to induce bone regeneration. The purposes are to promote cell adhesion, osteogenic differentiation, and bone formation, and to improve mechanical properties, allowing a decrease in the prevalence of postoperative complications. (Fragment of the Supporting Figure 2 legend: lesion created with a 500 µm dental bur, (C) implantation of the biomembrane, and (D) closing of the gingiva with biological glue.)
5,798.8
2018-02-26T00:00:00.000
[ "Engineering", "Materials Science", "Medicine" ]
D936Y and Other Mutations in the Fusion Core of the SARS-CoV-2 Spike Protein Heptad Repeat 1: Frequency, Geographical Distribution, and Structural Effect The crown of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is constituted by its spike (S) glycoprotein. S protein mediates the SARS-CoV-2 entry into the host cells. The “fusion core” of the heptad repeat 1 (HR1) on S plays a crucial role in the virus infectivity, as it is part of a key membrane fusion architecture. While SARS-CoV-2 was becoming a global threat, scientists have been accumulating data on the virus at an impressive pace, both in terms of genomic sequences and of three-dimensional structures. On 15 February 2021, from the SARS-CoV-2 genomic sequences in the GISAID resource, we collected 415,673 complete S protein sequences and identified all the mutations occurring in the HR1 fusion core. This is a 21-residue segment, which, in the post-fusion conformation of the protein, gives many strong interactions with the heptad repeat 2, bringing viral and cellular membranes in proximity for fusion. We investigated the frequency and structural effect of novel mutations accumulated over time in such a crucial region for the virus infectivity. Three mutations were quite frequent, occurring in over 0.1% of the total sequences. These were S929T, D936Y, and S949F, all in the N-terminal half of the HR1 fusion core segment and particularly spread in Europe and USA. The most frequent of them, D936Y, was present in 17% of sequences from Finland and 12% of sequences from Sweden. In the post-fusion conformation of the unmutated S protein, D936 is involved in an inter-monomer salt bridge with R1185. We investigated the effect of the D936Y mutation on the pre-fusion and post-fusion state of the protein by using molecular dynamics, showing how it especially affects the latter one. Introduction Coronavirus Disease 2019 is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which is also referred to as human coronavirus 2019 (hCoV-2). SARS-CoV-2 is a novel virus belonging to the β genus coronaviruses, which also include two highly pathogenic human viruses identified in the last two decades, known as the severe acute respiratory syndrome coronavirus (SARS-CoV) and the Middle East respiratory syndrome coronavirus (MERS-CoV) [1][2][3]. Coronaviruses are named after the protruding spike (S) glycoproteins on their envelope, giving a crown shape to the virions [4]. Of the four structural proteins of coronaviruses: S, envelope (E), membrane (M), and nucleocapsid (N), the S protein is the one playing a key role in mediating the viral entry into the host cells [5][6][7], making it mutations in the S protein HR1 fusion core (at https://www.molnac.unisa.it/BioTools/cov2smt/index.php) (accessed on 18 April 2021). Identification of the HR1 "Fusion Core" Mutations The HR1 of coronaviruses S proteins undergoes one of the most notable rearrangements within the protein between the pre-fusion and post-fusion conformations. In the post-fusion conformation, it experiences a refolding of the pre-fusion multiple helices and intervening regions into a single continuous helix ( Figure 1). As already mentioned, three of these long helices then form a 6HB with three HR2 helical motifs [18,29,30]. The HR1 and its "fusion core" particularly play a crucial role in the virus infectivity. 
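The next paragraphs report how often each residue of the S929-Q949 segment was found mutated across the collected S protein sequences. As a rough illustration of that identification step (not the authors' actual pipeline), a scan against the reference segment could look like the sketch below; the FASTA file name is hypothetical, the sequences are assumed to follow Wuhan-Hu-1 numbering (for example, after alignment), and the hard-coded reference segment should be re-checked against the deposited reference before any real use.

```python
# Hedged sketch: tally amino-acid substitutions in the HR1 fusion core
# (S929-Q949) across complete S protein sequences. GISAID data cannot be
# redistributed, so "sequences.fasta" is a placeholder for a local download.
from collections import Counter

REF_CORE = "SAIGKIQDSLSSTASALGKLQ"   # assumed S929..Q949 segment; verify against the reference
CORE_START = 929                      # 1-based position of the first core residue

def core_mutations(s_protein: str) -> list:
    """Return substitutions such as 'D936Y' found in one numbered S sequence."""
    segment = s_protein[CORE_START - 1 : CORE_START - 1 + len(REF_CORE)]
    return [
        f"{ref}{CORE_START + i}{obs}"
        for i, (ref, obs) in enumerate(zip(REF_CORE, segment))
        if obs not in (ref, "X")      # skip matches and undetermined residues
    ]

def read_fasta(path: str):
    seq = []
    with open(path) as handle:
        for line in handle:
            if line.startswith(">"):
                if seq:
                    yield "".join(seq)
                    seq = []
            else:
                seq.append(line.strip())
        if seq:
            yield "".join(seq)

counts = Counter(m for s in read_fasta("sequences.fasta") for m in core_mutations(s))
for mutation, n in counts.most_common(10):
    print(mutation, n)
```

Dividing each count by the total number of sequences gives the per-position mutation rates discussed below.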
On 15 February 2021, we downloaded all the SARS-CoV-2 genomic sequences from the GISAID resource, extracted 415,673 complete S protein sequences from them, and identified all the point mutations occurring in the S929-Q949 region (see Methods). The identified mutations, with their respective numbers of occurrences, are reported in Table S1; only the most frequent mutations are reported in Table 1. Most of the positions, such as 931, 933 to 935, 937, 944-945, and 948-949, were virtually unaffected by mutational events, with a maximum mutation rate of 0.005%. Positions 930, 937, 941, and 946-947 were also little affected, with a mutation rate below 0.02%. Positions 932, 938, 940, and 942-943 had a mutation rate between 0.025% and 0.055%. The positions featuring the highest number of mutations were 929, 936, and 939, which are all located in the N-terminal half of the HR1 fusion core and feature a mutation rate above 0.1%. Starting from position 929, S929 was found to mutate to threonine in 467 sequences and to asparagine/arginine/glycine in 60/5/1 sequences. As for position 936, D936 was mutated to tyrosine in 1296 sequences and to asparagine/histidine/valine/glycine/glutamate/glutamine/alanine/serine in 148/125/44/24/17/3/1/1 sequences.
Figure 1. (Top) Cartoon representation of the SARS-CoV-2 S protein HR1 and its fusion core (insets) in the pre-fusion and post-fusion conformations (PDB IDs: 6VSB and 6LXT). The discussed mutations are colored in purple and labelled; Q949, at the end of the fusion core, is also labeled. (Bottom) Sequence alignment of the HR1 fusion core (framed) and 10 residues upstream and downstream in the S protein of SARS-CoV-2, bat coronavirus RaTG13 (protein_ID: QHR63300.2), and SARS-CoV (protein_ID: AAP13441.1).
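The per-position tally described above reduces to a simple count over aligned sequences. A minimal sketch follows, assuming the S protein sequences have already been extracted and aligned to the reference numbering; the input format is hypothetical and not the paper's actual pipeline.

from collections import Counter

def tally_core_mutations(spike_sequences, ref_core, start=929):
    """Count point mutations in the HR1 fusion core (S929-Q949 by default).

    spike_sequences : iterable of spike protein strings aligned to the
        reference numbering (hypothetical input format).
    ref_core : the 21-residue reference segment taken from the Wuhan
        reference sequence (EPI_ISL_402124).
    """
    counts = Counter()
    total = 0
    for seq in spike_sequences:
        total += 1
        observed = seq[start - 1 : start - 1 + len(ref_core)]
        for offset, (ref_aa, obs_aa) in enumerate(zip(ref_core, observed)):
            if obs_aa not in (ref_aa, "X"):   # skip undetermined residues
                counts[(start + offset, ref_aa, obs_aa)] += 1
    rates = {mut: n / total for mut, n in counts.items()}
    return counts, rates

# Example: counts[(936, "D", "Y")] / total would reproduce the ~0.3% rate
# implied by 1296 D936Y occurrences out of 415,673 sequences.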
Geographical Distribution of the Most Frequent HR1 "Fusion Core" Mutations
The geographical distribution, per country, of the investigated mutations is reported in Figure 2. The first occurrence of the S929T mutation was deposited in GISAID on 18 April 2020 and was sequenced in Canada. On 15 February 2021, however, the large majority of its occurrences had been reported from England (440 out of 467, corresponding to 94%). The remaining 27 occurrences were also mostly sequenced in Europe, with only 5 occurrences overall from the USA and Canada, 2 from Australia, and 1 from South Africa. The first occurrence of the D936Y mutation was, instead, deposited in GISAID on 8 March 2020 and was sequenced in Sweden. On 15 February 2021, occurrences had been reported from 48 countries. Sweden remains among the countries with the most occurrences of this mutation (219 sequences, representing 17% of the total). Four other European countries contributed, together with Sweden, 60% of all the occurrences: England, Finland, Wales, and Denmark, which reported 260 (20%), 181 (14%), 122 (9.4%), and 114 (8.8%) occurrences, respectively. The USA also contributed a significant number of occurrences (136, or 10%). The remaining 30% of occurrences were mainly sequenced in European countries, the Netherlands (56), Germany (36), Switzerland (24), Norway (15), Luxembourg (12), Scotland (5), Austria (5), and others, as well as in India (13), Japan (12), Canada (10), Mexico (7), Singapore (5), etc. (for a complete list, see the web site: https://www.molnac.unisa.it/BioTools/cov2smt/index.php, accessed on 28 April 2021). Notably, the total number of occurrences of the D936Y mutation amounted to 17% of all the 1089 sequences available from Finland and to 12% of all the 1768 sequences available from Sweden. The first occurrence of the S939F mutation was deposited in GISAID on 25 February 2020 from the United Arab Emirates. On 15 February 2021, it was spread over 44 countries, especially western ones. Three countries together represented 66% of all the occurrences: England, USA, and Denmark, which reported 483 (37%), 253 (20%), and 124 (9.6%) occurrences, respectively. Over 10 occurrences of the mutation were also reported from other European countries: Austria (29), Sweden (21), Wales (20), Switzerland (19), the Netherlands (12), and Norway (11), but also from Israel (15) and South Africa (15). Two more occurrences of the mutation were reported from the United Arab Emirates between May and June 2020.
Clade Association of the Most Frequent HR1 "Fusion Core" Mutations
The distribution of the mutations in high-level phylogenetic groupings, or genetic clades, is plotted in Figure 2. As a reminder, the G/GH/GR/GV clades are among the latest of the eight genetic clades reported in GISAID (S, L, V, G, GH, GR, GV, GRY) [34]. The G clade carries the D614G mutation, now globally dominant, accompanied by other mutations upstream in the S protein gene (C241T, C3037T). In addition, the GH clade presents the NS3-Q57H mutation, the GR clade presents the N-G204R mutation, and the GV clade presents the S-A222V mutation. The three reported HR1 mutations are clearly associated with the late G/GH/GR/GV clades. In particular, S929T is mainly associated with the GV clade and D936Y is mainly associated with the GH clade, while S939F is roughly equally associated with the GR, GH, GV, and G clades.
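The per-country breakdown above can be tallied in the same spirit. The record structure in this sketch is an assumption standing in for the GISAID metadata, not the paper's actual pipeline.

from collections import Counter

def country_breakdown(records, mutation=(936, "D", "Y")):
    """Per-country counts and within-country frequencies for one mutation.

    records : iterable of (country, set_of_mutations) pairs, one per
        sequence (hypothetical structure).
    """
    carriers = Counter()     # sequences from each country carrying the mutation
    totals = Counter()       # all sequences from each country
    for country, mutations in records:
        totals[country] += 1
        if mutation in mutations:
            carriers[country] += 1
    share = {c: carriers[c] / totals[c] for c in carriers}
    return carriers, share

# For D936Y, share["Finland"] ~ 0.17 and share["Sweden"] ~ 0.12 would match
# the fractions quoted in the text (181/1089 and 219/1768, respectively).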
Sequence Conservation among Similar Viruses
All the amino acids at the three positions most prone to mutation in the SARS-CoV-2 S protein HR1 fusion core are conserved in the bat coronavirus RaTG13 S protein (which shares an overall sequence identity of 97% with the SARS-CoV-2 S protein), while all of them differ in the SARS-CoV S protein (overall 76% identical to the SARS-CoV-2 homolog) (see Figure 1). In particular, S929 is a lysine in SARS-CoV, while D936 is substituted by a glutamate and S939 by a threonine. It has been proposed that the SARS-CoV-2 HR1 mutations relative to SARS-CoV may be associated with enhanced interactions with HR2, further stabilizing the 6-HB structure and possibly leading to increased infectivity of the virus [29]. In this context, it is noteworthy that the point mutations we discuss here do not restore the corresponding SARS-CoV amino acid.
Effect of the Mutations on the Protein Pre-Fusion Conformation
In the pre-fusion conformation, the most mutated positions are located on the second of the four non-coaxial helical segments composing the HR1 (Figure 1). They are all exposed to the solvent (Table 2) and can be modelled as larger residues without causing any structural strain (see Figure 3). These mutations are therefore not expected to cause relevant changes in the pre-fusion structure. However, they could have a destabilizing effect, as they place large aromatic residues at positions 936 and 939 in direct contact with the solvent in place of a charged aspartate or a polar serine.
Table 2. Solvent accessibility of the mutated residues in the pre-fusion and post-fusion conformations.
Amino Acid | Pre-Fusion | Post-Fusion
T929 | exposed | partly buried (18.6%) a
Y936 | exposed | partly buried (19.0%)
F939 | exposed | exposed
a Percentage of surface buried upon complex formation.
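The "percentage buried upon complex formation" in Table 2 can be estimated locally. The paper used the COCOMAPS server for interface analysis; the sketch below is a rough substitute, assuming Biopython's Shrake-Rupley SASA implementation is acceptable, and using placeholder file name, chain ID, and residue number.

from Bio.PDB import PDBParser
from Bio.PDB.SASA import ShrakeRupley

def buried_fraction(pdb_file, chain_id, resnum):
    """Fraction of a residue's surface buried upon assembly formation.

    Compares the solvent-accessible surface area (SASA) of one residue in
    the isolated chain and in the full trimer; the relative difference is
    the quantity quoted in Table 2.
    """
    model = PDBParser(QUIET=True).get_structure("S2", pdb_file)[0]
    sr = ShrakeRupley()

    sr.compute(model, level="R")                 # SASA in the full assembly
    in_complex = model[chain_id][resnum].sasa

    sr.compute(model[chain_id], level="R")       # SASA of the chain alone
    alone = model[chain_id][resnum].sasa

    return (alone - in_complex) / alone          # ~0.19 expected for Y936 (post-fusion)

# Hypothetical usage: buried_fraction("6lxt_D936Y_model.pdb", "A", 936)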
Effect of the Mutations on the Protein Post-Fusion Conformation
When looking at the post-fusion conformation of the SARS-CoV-2 spike protein S2 subunit, these mutations appear more revealing. Two of the wild-type residues, S929 and D936, are engaged in side-chain to side-chain H-bonds with the HR2 segment of an adjacent monomer. In particular, S929 and D936 (HR1 on chain A) are H-bonded to S1196 and R1185, respectively (HR2 on chain C, Figure 4). Mutation of S929 to threonine does not cause the loss of the inter-monomer H-bond (Figure 4), while mutation of D936 to tyrosine does. The H-bond between D936 and R1185 is in fact a salt bridge, estimated to contribute an additional 3-5 kcal/mol to the free energy of protein stability as compared to a neutral H-bond [35]. Of the remaining most frequent mutations, S939F is completely exposed to the solvent and is therefore, as in the pre-fusion conformation, expected to act unfavorably on the protein solvation energy.
Molecular Dynamics Analysis
When comparing the effect of the mutations on the pre-fusion and post-fusion structures, it emerges that the D936Y mutation is the one expected to have the greatest structural impact. Since it is also the most frequent mutation occurring in the fusion core of the S HR1, we decided to further analyze its effect on the structure and dynamics of the SARS-CoV-2 S protein. To this aim, three 0.5-µs MD simulation replicates were run on the mutant and the wild-type protein, both in the pre-fusion and post-fusion conformations, for a total of 6 µs. We summarize the main findings of the MD analysis below, while details are reported in the Supplementary Information text and in Figures S1-S12 and Tables S2 and S3. Both the wild-type and mutant systems were stable during the whole dynamics, in the pre-fusion and post-fusion conformations, with maximal root mean square deviation (rmsd) values on the Cα atoms not exceeding 3.5 Å from the initial structure (Figures S1 and S7). The difference in the rmsd values between the wild-type protein and the D936Y mutant (Figure 5a) is negligible for the pre-fusion conformation, 0.05 (±0.1) Å. In the post-fusion conformation, the average rmsd is instead higher, by 0.38 (±0.2) Å, for the mutant, which seems to acquire some flexibility. The total number of inter-monomer H-bonds decreased from the wild type to the mutant more in the post-fusion conformation, −1.8 (±1.1), than in the pre-fusion one, −0.9 (±1.3).
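A minimal sketch of the Cα RMSD comparison just described, using MDAnalysis (version 2.x API assumed); the topology and trajectory file names are placeholders, not the authors' files.

import numpy as np
import MDAnalysis as mda
from MDAnalysis.analysis import rms

def ca_rmsd(topology, trajectory):
    """Cα RMSD to the first frame along one MD replicate (Angstrom)."""
    u = mda.Universe(topology, trajectory)
    analysis = rms.RMSD(u, select="name CA")     # superposes on Cα atoms
    analysis.run()
    return analysis.results.rmsd[:, 2]           # per-frame RMSD values

# Hypothetical usage, comparing wild type and D936Y in the post-fusion state:
# wt = ca_rmsd("wt_postfusion.tpr", "wt_postfusion.xtc")
# mut = ca_rmsd("d936y_postfusion.tpr", "d936y_postfusion.xtc")
# print(np.mean(mut) - np.mean(wt))   # the ~0.4 A shift reported in the text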
As we expected these lost H-bonds to be the inter-monomer D936-R1185 salt bridges discussed above, we monitored the H-bond distances between D/Y936 and R1185 over time (Figure 5b,c). The minimum distance between the nitrogen atoms of the arginine guanidinium group and the oxygens of the aspartate carboxylate, or the hydroxyl oxygen of the mutated tyrosine, is reported for each trimer interface. In the case of the wild type, the minimum H-bond distance is 3.32 (±0.7) Å and 3.62 (±1.0) Å for two of the interfaces, with distances within 3.5 Å in 70% and 57% of the frames, respectively. These two H-bonds are therefore largely maintained over time. For the third interface, the average distance is instead 6.48 (±1.1) Å, with only 1% of the frames within 3.5 Å. This is consistent with the reference X-ray structure, where D936 and R1185 on the adjacent monomer are at H-bond distance for two interfaces and are, instead, 4.71 Å apart at the third interface. In the case of the mutant, the average distances are all around 4 Å (3.96 ± 0.7, 4.29 ± 0.9, and 4.23 ± 0.9 Å for the three interfaces), with the total fraction of frames featuring a distance within 3.5 Å amounting to only 22%. This correlates with the loss of ≈2 H-bonds in the mutant. It is worth recalling, however, that due to its strong electrostatic nature, a stabilizing interaction between D936 and R1185 is maintained even above the classical threshold for an H-bond distance [36].
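The per-interface minimum-distance monitoring can be sketched with MDAnalysis as follows. The atom selections for the R1185 guanidinium nitrogens and the D936 carboxylate oxygens (or the Y936 hydroxyl) are illustrative assumptions, as are the file names.

import numpy as np
import MDAnalysis as mda
from MDAnalysis.lib.distances import distance_array

def salt_bridge_distances(topology, trajectory, don_sel, acc_sel):
    """Minimum donor-acceptor distance per frame for one trimer interface."""
    u = mda.Universe(topology, trajectory)
    donors = u.select_atoms(don_sel)
    acceptors = u.select_atoms(acc_sel)
    mins = []
    for ts in u.trajectory:
        d = distance_array(donors.positions, acceptors.positions, box=ts.dimensions)
        mins.append(d.min())
    mins = np.asarray(mins)
    return mins, np.mean(mins <= 3.5)            # fraction of frames within 3.5 A

# Hypothetical selections for one wild-type interface:
#   don_sel = "segid C and resid 1185 and name NE NH1 NH2"
#   acc_sel = "segid A and resid 936 and name OD1 OD2"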
Since an arginine can engage a tyrosine in a cation-π interaction, we also monitored the minimum distance between the nitrogen atoms of the R1185 guanidinium group and the center of mass of the Y936 aromatic ring (Figure 5d). Average values are in the 6-7 Å range and never drop below 4.3 Å, which is considered a reasonable cutoff distance for establishing a cation-π interaction [37]. The above analysis therefore ruled out the possibility of a cation-π interaction between these two residues. Finally, we followed the buried surface area over the simulation time within the MDcons approach, finding the post-fusion assembly to be, overall, more compact (i.e., featuring a moderately higher buried surface area upon complex formation) for the wild-type system than for the D936Y mutant (see Figure S13). Discussion We monitored the mutations accumulated over time in the SARS-CoV-2 S protein HR1 fusion core, a key structural and functional motif for the virus infectivity, using GISAID as the resource of genomic sequences. The SARS-CoV-2 HR1 fusion core differs in several positions from that of SARS-CoV, and its peculiarity has been associated with the higher infectivity of the virus [29]. On 15 February 2021, D936Y was the most frequent mutation in the HR1 fusion core, followed by S939F and S929T. Notably, most of the HR1 fusion core positions are virtually unaffected by mutational events, while all three most frequent mutations are located on the second of the four non-coaxial helical segments composing the HR1. In the pre-fusion conformation, two of these mutations introduce large aromatic residues, a tyrosine and a phenylalanine. Such mutations, mainly localized in Europe and the USA, are quite late ones, emerging from the end of February 2020 onwards, and are associated with the late G/GH/GR/GV clades, implying that they co-exist with the globally dominant D614G mutation. D936Y was the most frequent among the HR1 fusion core mutations on 15 February 2021. While the geographical distribution of S929T, mostly from England, and of S939F, mostly from England, USA, and Denmark, may reflect the higher contribution of these countries to the genomic sequencing of SARS-CoV-2 (the three countries together covered roughly two-thirds of the sequences in GISAID on 15 February), D936Y was widespread. Besides the above countries, in Scandinavia and especially in Finland and Sweden, it represents 17% and 12%, respectively, of all the sequences available from these countries. We investigated the structural basis of these mutations, finding that the D936Y mutation is the one expected to have the greatest structural impact. We therefore analyzed the effect of this mutation by molecular dynamics, showing that it causes the loss of a strong inter-monomer salt bridge in the post-fusion conformation of the S protein and introduces some flexibility in it, resulting in an overall slightly reduced compactness of the assembly. Experimental testing of the D936Y mutation, within a study comprising over 100 S protein variants or glycosylation site modifications [38], has shown a significant decrease of infectivity compared to the Wuhan reference strain [1] when it was the only variant. It demonstrated instead increased infectivity compared to the reference strain when associated with the D614G variant, comparable to that of the strain presenting only the D614G mutation.
It is worth noticing that, for other frequent variants included in the same study, such as L5F and D839Y, infectivity was virtually unchanged. The structural effect of the D936Y mutation that we report here may call for further functional and clinical studies to clarify its possible consequences on the SARS-CoV-2 virulence. Methods Identification of Mutations We downloaded the 550,092 genomic sequences available from GISAID on 15 February 2021. From these sequences, we extracted the nucleotide sequences of the spike protein and translated them to protein sequences with in-house scripts. Nucleotide sequences featuring an internal stop codon or having at least one undefined ("N") nucleotide were discarded. Sequences annotated as pangolin, bat, or canine were also discarded. The remaining 415,673 protein sequences were further analysed. As a reference system, we used the genomic sequence with GISAID ID: EPI_ISL_402124, isolated and sequenced in Wuhan (Hubei, China) on 30 December 2019 [1]. Then, upon alignment to the reference sequence, we identified all the point mutations occurring in at least two sequences. The web application was built using standard HTML, php, and python scripts. Mutants Modelling and Analysis Mutant 3D models were built using the mutate_model module of the Modeller 9v11 program [39]. This is an automated method for modelling point mutations in protein structures, which includes an optimisation procedure of the mutated residue in its environment, beginning with conjugate-gradients minimisation, continuing with molecular dynamics with simulated annealing, and finishing again with conjugate gradients. The force field used is CHARMM-22. For details, see Reference [40]. Models for mutants in the pre-fusion conformation were built starting from the EM structure of the pre-fusion trimeric conformation (PDB ID: 6VSB, resolution 3.46 Å, [22]). Models for mutants in the post-fusion conformation were built starting from the X-ray structure of the S2 subunit fusion core, featuring residues 912-988 and 1164-1202 (PDB ID: 6LXT, resolution 2.90 Å, [29]). Molecular models were analysed and visually inspected with PyMOL [41]. The COCOMAPS web server [42] was used to analyse the inter-chain contacts and H-bonds as well as the residues' accessibility to the solvent. Molecular Dynamics Simulations Molecular dynamics simulations were carried out for the wild-type S protein and for the D936Y mutant in the pre-fusion and post-fusion conformations, starting from the experimental structures used for modeling the mutants (see above). For the pre-fusion simulations, we used the trimer of the S2 subunit (PDB ID: 6VSB), from S711 to C1146, i.e., ≈200 residues upstream and ≈160 residues downstream of HR1. Missing residues between K811 and R815 and between L828 and Q853 were modeled with the GalaxyFill program [43]. The crystal structure of the post-fusion core of the protein S2 subunit (PDB ID: 6LXT), featuring residues 912-988 and 1164-1202 [29], was used for the post-fusion simulations. For the D936Y mutant, models obtained as detailed in the previous section were used. All the MD simulations were carried out with Gromacs 2018 [44], using the Amber14SB force field [45]. Each protein was inserted into a rectangular box of TIP3P water molecules, setting a minimum distance of 12.0 Å from the protein to the box sides and neutralizing the solution with Zn2+ and Cl− ions.
A minimization was first carried out, followed by constant-volume (NVT) dynamics using a velocity-rescale thermostat [46]. Then, 2 ns of isothermal-isobaric ensemble (NPT) dynamics was carried out to equilibrate the structure. Periodic boundary conditions were applied in all directions. The production simulations were carried out in the NPT ensemble for 500 ns. The temperature was maintained constant at 300 K using a velocity-rescale thermostat [46] (τT = 0.1 ps) and a pressure of 1 bar was maintained using a Parrinello-Rahman barostat [47] (τP = 2.0 ps). Electrostatic interactions beyond 1.2 nm were evaluated with the Particle-Mesh-Ewald (PME) method [48]. Bond lengths were constrained with the LINear Constraint Solver algorithm [49]. Trajectories were analyzed using the Gromacs 2018 analysis tools. For the MDcons analyses [50], which use a contact-based approach [51,52] for the dynamical characterization of the interface in protein assemblies, 500 snapshots were generated for each system by writing the coordinates every 1 ns. Supplementary Materials: Table S1. Number of occurrences of all mutations in the HR1 "fusion core" on 15 February 2021. Table S2. MD analysis data of the wild-type and D936Y mutant pre-fusion state. Table S3. MD analysis data of the wild-type and D936Y mutant post-fusion state. Figures S1-S2. Pre-fusion state: Cα RMSD values versus time. Figure S3. Pre-fusion state: RMSF values per residue. Figure S4. Pre-fusion state: Average number of hydrogen bonds versus time. Figure S5. Pre-fusion state: Potential energy versus time. Figure S6. Pre-fusion state: Electrostatic (ELE) and Lennard-Jones (LJ) energies versus time. Figures S7-S8. Post-fusion state: Cα RMSD values versus time. Figure S9. Post-fusion state: RMSF per residue. Figure S10. Post-fusion state: Average number of hydrogen bonds versus time. Figure S11. Post-fusion state: Potential energy versus time. Figure S12. Post-fusion state: Electrostatic (ELE) and Lennard-Jones (LJ) energies versus time. Figure S13. Buried surface area along the MD simulations for the wild-type and D936Y mutant post-fusion state. Author Contributions: R.O. conceived the study, participated in its design, carried out the analyses, and drafted the manuscript. A.R.S. performed the MD simulations. A.P. implemented the web application. A.V. participated in the bioinformatics analyses. L.C. participated in the study's design, in the analyses, and in the implementation of the web application. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available in the supplementary material.
6,714.4
2021-04-30T00:00:00.000
[ "Biology" ]
Constraint for a light charged Higgs boson and its neutral partners from top quark pairs at the LHC The charged Higgs boson plays an essential role in distinguishing between a wide variety of standard model extensions with multiple Higgs doublets, and has been searched for in various collider experiments. This paper expands our previous work to a broader Higgs mass space with discussions on subsequent issues. We study the prospect of a light charged Higgs boson, produced by top quark pairs at the Large Hadron Collider (LHC), and decaying into a $W$ boson (possibly off shell) and a pair of bottom quarks via on-shell production of an intermediate neutral Higgs boson. We reinterpret the cross sections of $WWbb\bar{b}\bar{b}$ final states measured by the ATLAS collaboration at LHC 13 TeV in the presence of the decay chain: $t \rightarrow H^+ b, H^+ \rightarrow W^+ H_i, H_i \rightarrow b \bar{b}$, and H.c., where $H_i$ is a neutral Higgs boson of variable mass that is lighter than the charged Higgs boson. We find improved agreements with the data and obtain limits on the total branching ratio of the aforementioned decay chain. The limits impose the strongest constraints on the parameter space of the type-I two-Higgs-doublet model for most Higgs masses sampled when $H_i$ is the $CP$-odd Higgs boson $A$. We also calculate potential constraints with pseudodata in high-luminosity runs of the LHC. I. INTRODUCTION While the Higgs boson, the unique fundamental scalar particle in the standard model (SM) of particle physics, has been discovered by the ATLAS and CMS experiments at the Large Hadron Collider (LHC) at CERN [1,2], the details of the scalar sector have not yet been fully revealed eleven years later, and the possibility of a larger and more complex scalar sector is still appealing for many reasons, e.g., supersymmetry [3]. The simplest but well-motivated extension of the SM scalar sector is the two-Higgs-doublet model (2HDM) [4,5], which features a pair of charged Higgs bosons and three neutral Higgs bosons. The charged Higgs boson is important for identifying various SM extensions in nature; direct searches for it have been carried out at LEP [6], the Tevatron [7], and now the LHC [8,9], while the signals can be affected by possible undiscovered neutral Higgs bosons. Hopefully, the improved sensitivity at LHC Run 3 will take us further in discovering or excluding the charged Higgs boson and its possible non-SM companions.
A light charged Higgs boson has a mass smaller than the mass difference between a top quark and a bottom quark, and can therefore be generated from the decay $t \rightarrow H^+ b$ (and H.c.), which benefits from the large production cross section of top quark pairs at the LHC. Signals of the light charged Higgs boson can be examined in various final states [4,10,11]. In our previous work [12], we studied the $W^\pm b\bar{b}$ final states, which can usually be generated following two patterns: (a) $H^+ \rightarrow t^{(*)}\bar{b} \rightarrow W^+ b\bar{b}$, and (b) $H^+ \rightarrow W^+ H_i \rightarrow W^+ b\bar{b}$, where $H_i$ can be any neutral Higgs boson in the model. These patterns are exactly included in the 2HDM [4], where $H_i$ can be the non-SM scalar $H_{\mathrm{non\text{-}SM}}$, which we refer to as $H$ hereafter, and the pseudoscalar $A$; the SM-like scalar $H_{\mathrm{SM}}$ cannot participate in such a decay at the alignment limit [10]. A few simplifications were made for pattern (b) in the 2HDM in our previous work [12], most significantly $m_{H^\pm} - m_A = 85$ GeV and $m_H > m_{H^\pm}$. These simplifications are lifted or discussed in further detail in this work. The $W^+ b\bar{b}$ channel has shown its power in exploring the parameter space of the 2HDM in many theoretical studies [10,[13][14][15][16][17][18]. In this paper, we utilize a measurement of inclusive and differential fiducial cross sections of final states composed of two $W$ bosons and four bottom quarks, performed by the ATLAS collaboration at LHC 13 TeV with an integrated luminosity of 36.1 fb$^{-1}$ [19]. We reinterpret the data in a two-dimensional Higgs mass space featuring the charged Higgs boson and a neutral Higgs boson $H_i$ with a mass ranging from the threshold of bottom quark pair production to the vicinity of the SM Higgs mass. In this context, after the production of a top quark pair, one of the top quarks may decay as $t \rightarrow H^+ b$, followed by $H^+ \rightarrow W^+ H_i \rightarrow W^+ b\bar{b}$, while the other follows the SM decay $t \rightarrow W^- \bar{b}$. In Section II, we introduce our data selection and the methodology of our calculation, and then we perform signal-only likelihood tests for the signal strength of the charged-Higgs physics, $B(t \rightarrow H^+ b, H^+ \rightarrow W^+ H_i, H_i \rightarrow b\bar{b})$. We find improved agreement with the LHC data and set upper limits on the signal strength. Treating the neutral Higgs mass as a variable gives a much more complete picture of the limit contours in the Higgs mass space. We also make an explicit comparison of the sensitivity of different data sets to the signal strength. In Section III, we show the general constraints on the possible Higgs mass hierarchies in type-I 2HDM as a reference for the Higgs mass space we study. We translate our signal constraints into strong constraints on the parameter $\tan\beta$ of type-I 2HDM with the mass hierarchy $m_H > m_{H^\pm} > m_A$, while paying attention to alternative decay channels contributing to the same final states. The relevant differences between the pseudoscalar $A$ and the scalar $H$ in type-I 2HDM are discussed, and we find scenarios where $m_H < m_{H^\pm}$ can share the same constraint result under certain conditions. We also discuss the potential of future high-luminosity data in both Sections II and III.
II. THEORY AND SIGNAL CONSTRAINTS The ATLAS collaboration measured $t\bar{t}$ production in association with additional b-jets, in final states including two $W$ bosons and four bottom quarks. The fiducial cross section measurements were performed in a di-lepton channel, where one of the $W$ bosons decays into an electron while the other decays into a muon, and a lepton-plus-jets channel, where one of the $W$ bosons decays into an electron or muon while the other decays into jets [19]. The $W$ bosons can decay into electrons and muons either directly or via intermediate tauons. The results have been unfolded to particle level, identifying final states with at least four b-jets or at least three b-jets (since some b quarks can be outside the experimental acceptance). Detailed definitions of the fiducial region can be found in Ref. [19] and are also implemented in the public Rivet [20] analysis routine. We note that there is another measurement of similar final states performed by the CMS collaboration [21]; however, it cannot be used in this work since it requires reconstruction of the top quarks following the SM decay mode. Theoretical predictions for the binned cross sections in the presence of a light charged Higgs boson and an even lighter neutral Higgs boson can be calculated as in Eq. (1), which is valid when the branching ratio of the non-SM decay mode $t \rightarrow H^+ b$ is small. $\sigma_{\mathrm{SM}}(X)$ denotes the SM cross section of the QCD production of $X$; $\epsilon^{\mathrm{bin}}_{\mathrm{SM}(H^+)}$ is the particle-level experimental efficiency for the prescribed kinematic bin of the SM (charged-Higgs) process, with SM branching ratios of the $W$ boson decays. Other SM processes contributing to the same final states are already subtracted from the experimental data. $B^{\mathrm{sig}}_{H^+}$, representing the signal strength of the charged-Higgs physics, is defined as the branching ratio $B(t \rightarrow H^+ b, H^+ \rightarrow W^+ H_i, H_i \rightarrow b\bar{b})$, summed over its s-channels (leading to a symmetry factor of 2 in Eq. (1)); the validity of the narrow-width approximation and the on-shellness of the neutral Higgs boson $H_i$ are assumed within the mass space we consider. In the following discussions and calculations, we take the pseudoscalar $A$ in the 2HDM as the main example of $H_i$. For type-I 2HDM at or close to the alignment limit, we confirm that the decay width of $H^+$ is at most $\sim 10^{-1}$ GeV, and the decay width of $A$ (at most $\sim 10^{-3}$ GeV) is much smaller than that of $W^+$, using the ScannerS-2 program [22,23]. For the scalar $H$ and type-II,X,Y 2HDMs, there are calculations indicating similar conclusions [24]. Apart from the process included in $B^{\mathrm{sig}}_{H^+}$, (a) the non-resonant production of $H^+$ and (b) the alternative decay mode $H^+ \rightarrow t^{(*)}\bar{b}$ can also contribute to the same final states, and we do not include them for simplicity. Contribution (a), for reference, is generally around 10% of the contribution from the resonant decay of a top quark pair at $m_{H^\pm} \approx 160$ GeV and even less at smaller $m_{H^\pm}$, in type-I,II 2HDMs [25]. Contribution (b) is assumed to be insignificant, and we will discuss the validity of this assumption for type-I 2HDM in Section III. We treat $B^{\mathrm{sig}}_{H^+}$ as an input variable to Eq. (1) and derive the efficiency $\epsilon^{\mathrm{bin}}_{H^+}$ from Monte Carlo (MC) simulations of the generic 2HDM.
The efficiency at particle level represents the size of the detected part, in a specific fiducial channel, of the overall normalized phase-space distribution, and can therefore be calculated as $\epsilon^{\mathrm{bin}}_{H^+} = \sigma^{\mathrm{bin}}_{\mathrm{fid}} / \sigma_{\mathrm{MC}}$, where $\sigma^{\mathrm{bin}}_{\mathrm{fid}}$ is the binned cross section falling into the fiducial region in the simulated process and $\sigma_{\mathrm{MC}}$ is the inclusive cross section of the simulated process. We set the decay widths of $H^+$ and $A$ in the MC simulations to values small enough to guarantee the on-shellness of the Higgs bosons, and therefore $\epsilon^{\mathrm{bin}}_{H^+}$ depends only on the masses of the Higgs bosons at leading order. We perform a survey of the inclusive cross sections and the various differential fiducial cross sections measured by ATLAS and select three data sets. The first consists of the inclusive fiducial cross sections in the di-lepton channel and the lepton-plus-jets channel with at least three or four b-jets (totalling four bins). The other two are the normalized distributions of the invariant mass of the two closest b-jets in angular distance, $m^{\Delta\min}_{bb}$, with at least three b-jets in the di-lepton channel and at least four b-jets in the lepton-plus-jets channel, respectively. Each normalized distribution was divided into five bins, and we drop the last bin so that the selected bins are independent. The light neutral Higgs boson produces a pair of bottom quarks with a small invariant mass, which makes these two bottom quarks tend to have a small angular distance. Therefore, the charged-Higgs physics generally enhances the differential cross sections at small $m^{\Delta\min}_{bb}$, and distributions of $m^{\Delta\min}_{bb}$ can be quite sensitive to this change. This intuitive conclusion could be violated if the neutral Higgs boson is not very light (roughly above 70 GeV). We generate event samples with MC simulations in MadGraph5_aMC@NLO 3.4.2 [26], followed by parton showering (PS) and hadronization with PYTHIA 8.306 [27], in the four-flavor number scheme (4FS), and analyze the events with the public routine of the ATLAS analysis in Rivet [20]. We use CT18 parton distribution functions (PDF) [28] and a top (bottom) quark pole mass of 172.5 (4.7) GeV in the simulations, and set the default renormalization and factorization scales to the sum of the transverse energy of all final states divided by two. For MC simulations of the charged-Higgs process in the generic 2HDM, we use the 2HDM NLO model from FeynRules [29]. The efficiency $\epsilon^{\mathrm{bin}}_{H^+}$ is calculated using event samples generated at leading order in QCD matched with PS. We set the total cross section of SM $t\bar{t}$ production to 838.5 pb at LHC 13 TeV, calculated with Top++ 2.0 [30,31] at next-to-next-to-leading order (NNLO) and next-to-next-to-leading logarithmic accuracy in QCD.
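A minimal sketch of the efficiency defined above, with purely illustrative numbers that are not taken from the paper:

import numpy as np

def particle_level_efficiency(binned_fiducial_xs, total_mc_xs):
    """Per-bin particle-level efficiency: eps_bin = sigma_fid^bin / sigma_MC.

    binned_fiducial_xs : fiducial cross sections falling into each kinematic
        bin of the Rivet analysis (same units as total_mc_xs).
    total_mc_xs : inclusive cross section of the simulated process.
    """
    return np.asarray(binned_fiducial_xs, dtype=float) / float(total_mc_xs)

# Illustrative call only:
# eps = particle_level_efficiency([0.12, 0.31, 0.22, 0.08], 840.0)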
We have used the procedure above to calculate SM predictions at next-to-leading order (NLO) in QCD and found that the result agrees well with the theoretical predictions in the ATLAS analysis [19]. In the remaining part of our study, we instead use the SM predictions of the ATLAS analysis, as described below, for their comprehensive estimation of uncertainties. For the inclusive fiducial cross sections, there are four predictions generally agreeing with each other, and we use the theoretical prediction from SHERPA 2.2 [32] at NLO+PS in the 4FS, with uncertainties obtained by varying the renormalization and factorization scales by factors of 0.5 and 2.0 and including PDF uncertainties from NNPDF3.0 NNLO PDFs [33]. For the normalized distributions of $m^{\Delta\min}_{bb}$, we take four different predictions from POWHEG [34][35][36]+PYTHIA8: a prediction in the 4FS for $t\bar{t}b\bar{b}$ production [37], and three predictions with different tunes of the programs [38] in the five-flavor number scheme for $t\bar{t}$ production [39] (additional b quarks are generated from PS). We take the mean of the four predictions as the prediction used, thus keeping the normalization, and the standard deviation of the four predictions as the uncertainty. The PDF uncertainties are negligible for normalized distributions and are thus not included. We have checked that evaluating the prediction and uncertainty in a different manner or with different MC results hardly impacts our final results. We use an interval of 10 GeV to sample the Higgs mass space specified in Eq. (3). For each Higgs mass point, the log-likelihood combining all selected data sets is expressed through the chi-square $\chi^2 = \sum_{i=1}^{N_{\mathrm{bin}}} (\sigma^i_{\mathrm{pre}} - \sigma^i_{\mathrm{exp}})^2 / [(\delta^i_{\mathrm{pre}})^2 + (\delta^i_{\mathrm{exp}})^2]$, where $\sigma^i_{\mathrm{pre}}$ is the theoretical prediction for the $i$-th bin calculated as in Eq. (1), with an error of $\delta^i_{\mathrm{pre}}$, and $\sigma^i_{\mathrm{exp}}$ is the central value of the measurement in the $i$-th bin, with its statistical and systematic errors combined into $\delta^i_{\mathrm{exp}}$. The data sets we select include $N_{\mathrm{bin}} = 12$ uncorrelated kinematic bins. Let $\chi^2_{\mathrm{best}}$ be the smallest $\chi^2$ obtained when varying $B^{\mathrm{sig}}_{H^+}$ at each Higgs mass point; we plot $\chi^2_{\mathrm{best}}$ contours on the $(m_{H^\pm}, m_A)$ plane in Fig. 1. The pure-SM value $\chi^2(B^{\mathrm{sig}}_{H^+} = 0) = 7.0$ is subtracted from the contours. The overall best fit among the sample points is found at $(m_{H^\pm} = 100$ GeV, $m_A = 20$ GeV$)$ with $B^{\mathrm{sig}}_{H^+} = 0.54\%$, where $\chi^2$ is lowered by 3.0 units compared to the SM case. We find generally moderate improvements in the description of the ATLAS data; the improvements are especially attributed to enhancements of the inclusive fiducial cross sections compared to the SM, as visualized in our previous work [12]. Based on the values of the $\chi^2$ function, we can use the CL$_s$ method [41] to deduce upper limits on $B^{\mathrm{sig}}_{H^+}$ for fixed $(m_{H^\pm}, m_A)$. The upper limits at a signal-only confidence level of $(1-\alpha')$ are calculated as in Eq. (6), where $B$ is the best-fitted $B^{\mathrm{sig}}_{H^+}$ at $(m_{H^\pm}, m_A)$ (corresponding to the central values of the observation), $\delta_B$ is an uncertainty estimated by increasing $B^{\mathrm{sig}}_{H^+}$ from $B$ until $\Delta\chi^2 = 1$ (corresponding to the combined uncertainties of the observation and the prediction), and $\Phi$ is the cumulative distribution function of the standard normal distribution. We note that only the two sets of normalized distributions of $m^{\Delta\min}_{bb}$ are directly included in deducing our final limits when calculating $\chi^2$ (thus $N_{\mathrm{bin}} = 8$), since the selected distributions have generally smaller relative uncertainties than the inclusive cross sections.
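A sketch of the statistical procedure just described, assuming a Gaussian chi-square over uncorrelated bins. The best-fit strength and its uncertainty follow the text; the closing limit expression is one standard Gaussian CLs construction and is an assumption standing in for the paper's Eq. (6).

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def chi2(B, predict, obs, err_obs, err_pre):
    """Gaussian chi-square over the selected uncorrelated bins.

    predict(B) returns the predicted cross section per bin for signal
    strength B; obs, err_obs, err_pre are arrays over the same bins.
    """
    pre = predict(B)
    return np.sum((pre - obs) ** 2 / (err_obs ** 2 + err_pre ** 2))

def cls_upper_limit(predict, obs, err_obs, err_pre, alpha=0.05, bmax=0.05):
    """95% CLs-style upper limit on B in a Gaussian approximation.

    bmax must be large enough for chi^2 to rise by more than one unit
    above its minimum within [0, bmax].
    """
    grid = np.linspace(0.0, bmax, 2001)
    chis = np.array([chi2(b, predict, obs, err_obs, err_pre) for b in grid])
    b_hat, chi_min = grid[np.argmin(chis)], chis.min()
    # delta_B: increase B from the best fit until chi^2 rises by one unit
    delta_b = brentq(lambda b: chi2(b, predict, obs, err_obs, err_pre) - chi_min - 1.0,
                     b_hat, bmax) - b_hat
    # Gaussian CLs upper limit (assumed form, not the paper's exact Eq. (6))
    return b_hat + delta_b * norm.ppf(1.0 - alpha * norm.cdf(b_hat / delta_b))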
Before introducing our final result for the CL$_s$ limits on $B^{\mathrm{sig}}_{H^+}$, we present a result of using CL$_s$ limits to quantify the sensitivity of different data sets to the charged-Higgs signal. The ATLAS analysis has presented 25 sets of relative differential cross section data [19], and we calculate 95% CL$_s$ upper limits on $B^{\mathrm{sig}}_{H^+}$ at all sample Higgs mass points using 23 of them, one by one. The two distributions of the number of b-jets are not included, as they depend on details of the model through their $\sigma(t\bar{t})$ normalization. We note that the ATLAS analysis has shown that various SM MC predictions in general describe the differential cross section data well within the experimental uncertainties [19]. To simplify the calculation, we set the central values and uncertainties of the SM-predicted distributions involved to the corresponding observed central values and to zero, respectively. We take the median of the $B^{\mathrm{sig}}_{H^+}$ limits calculated at all sample points as a reference value, which conversely represents a data set's sensitivity to $B^{\mathrm{sig}}_{H^+}$, and plot these reference limits in Fig. 2. The normalized distribution of $m^{\Delta\min}_{bb}$ clearly gives the best limit in the di-lepton channel and almost the best limit in the lepton-plus-jets channel, consolidating our previously mentioned conclusion that the normalized distributions of $m^{\Delta\min}_{bb}$ are the most sensitive to the charged-Higgs signal. We must note, however, that the normalized distributions of $m^{\Delta\min}_{bb}$ are most sensitive in an average sense; in the lepton-plus-jets channel with at least four b-jets, the normalized distribution of $m_{b_1 b_2}$ shows a much greater sensitivity for a relatively heavy light neutral Higgs boson, specifically for sample points with $m_{H^\pm} \leq 130$ GeV and $m_A \geq 70$ GeV. We expect to see stronger limits on $B^{\mathrm{sig}}_{H^+}$ especially in this region of Higgs masses in future multivariate analyses. We now plot the true 95% CL$_s$ upper limits on $B^{\mathrm{sig}}_{H^+}$ at the sample Higgs mass points in Fig. 3, and the corresponding contours in Fig. 4. The strictest limit found among the sample points is 0.26%, at the two diagonally adjacent points $(m_{H^\pm} = 140$ GeV, $m_A = 30$ GeV$)$ and $(m_{H^\pm} = 150$ GeV, $m_A = 40$ GeV$)$. The result expands our previous work [12] to the $(m_{H^\pm}, m_A)$ plane, showing the broader context of the previous $m_{H^\pm} - m_A = 85$ GeV result. We note again that the specific neutral Higgs boson $A$ is replaceable, and the result should also apply to scenarios where $A$ is replaced with any neutral Higgs boson that couples to fermions in a Yukawa way, e.g., the non-SM $CP$-even Higgs $H$ in the 2HDM, since the efficiency $\epsilon^{\mathrm{bin}}_{H^+}$ in Eq. (1) should hardly change when varying parameters other than the Higgs masses at leading order. We can take the view that, in future high-luminosity data, the central values may coincide with the pure-SM prediction. Then Eq. (6), whose first term becomes zero and whose second term receives most of its contributions from the kinematic bins most sensitive to $B^{\mathrm{sig}}_{H^+}$, approximates the potential upper limits on $B^{\mathrm{sig}}_{H^+}$ for high-luminosity data within the limits of the current systematic uncertainties. To simulate a high luminosity, we lower the statistical uncertainties and the SM theoretical uncertainties of the normalized $m^{\Delta\min}_{bb}$ distributions by 80% and 50%, respectively, making the combined non-systematic uncertainties generally around 10% of the corresponding systematic uncertainties for the $B^{\mathrm{sig}}_{H^+}$ values covered by the calculation. We plot the contours of the resulting 95% CL$_s$ upper limits on $B^{\mathrm{sig}}_{H^+}$ in Fig. 5.
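As a quick arithmetic cross-check of what such a reduction of the statistical uncertainties implies for the luminosity (statistical errors scale as 1/sqrt(L)):

# Reducing statistical errors by 80% is a factor-of-5 reduction, which
# requires ~25x the analysed 36.1 fb^-1 of data.
current_lumi = 36.1                         # fb^-1, the ATLAS data set used here
reduction = 0.80                            # fractional reduction of stat. errors
required = current_lumi / (1.0 - reduction) ** 2
print(required)                             # ~900 fb^-1, consistent with the estimate below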
In this high-luminosity scenario, the observed limits shown in Fig. 4 can be lowered by 32-45% at most sample Higgs mass points (less for $m_A = 110$ GeV). Lowering the statistical uncertainties by 80% intuitively requires an integrated luminosity of around 900 fb$^{-1}$, while the high-luminosity LHC program is expected to reach 3000 fb$^{-1}$.
Fig. 5. 95% CL$_s$ upper limits on $B^{\mathrm{sig}}_{H^+}$ that can potentially be reached with SM-like high-luminosity data in the future, on the $(m_{H^\pm}, m_A)$ plane. The statistical uncertainties are reduced to complete insignificance at around 900 fb$^{-1}$, where the systematic uncertainties prevent further lowering of the signal limits.
III. CONSTRAINTS ON TYPE-I 2HDM Constraints on the signal strength $B^{\mathrm{sig}}_{H^+}$ can be translated into constraints on the parameter space of various models at fixed Higgs masses. Here we discuss constraints on type-I 2HDM, which, among the common 2HDMs, is less constrained by direct searches at the LHC [15]. Before introducing the constraints on the parameter space of type-I 2HDM imposed by the $t\bar{t} \rightarrow W^+ b\bar{b}\, b\, W^- \bar{b}$ decay, we note that there are also various general theoretical and experimental results that imply constraints on the mass space of the model. We use the ScannerS-2 program [22,23] to perform a scan of the parameter space defined in Table I for type-I 2HDM. The scan implements constraints including: • Tree-level perturbative unitarity, boundedness from below, and absolute stability of the tree-level electroweak vacuum. The electroweak precision constraints and the flavor constraints require an electroweak global fit, which would change significantly if we adopted the new $W$ boson mass reported by the CDF collaboration in 2022 [52]. We use the results in Ref. [42] and Ref. [43]; the new $m_W$ value is improved from, while compatible with, the collaboration's previous data [53], and we choose not to reconsider the fit for it. Further details of the implementation of the constraints and of the scan can be found in Ref. [22]. We plot the constrained mass hierarchy of $H^+$, $A$, and $H$ for a light charged Higgs boson in Fig. 6, which is in agreement with results from other scans [10,43,54,55]. The constraint from the $T$ parameter shows a preference between two scenarios: (a) both $A$ and $H$ are heavier, or both lighter, than the charged Higgs boson $H^+$; and (b) one of $A$ and $H$ is heavier than $H^+$ while the other is lighter. Scenario (b), disfavored by the 2022 CDF data while likely allowed by previous data, describes the mass space studied in Section II. For type-I 2HDM with the mass hierarchy $m_H > m_{H^\pm} > m_A$ and near the alignment limit, as discussed following Eq. (2) and Eq. (3), in the parameter space we consider the signal strength $B^{\mathrm{sig}}_{H^+}$ can be calculated as the product of three branching ratios, $B(t \rightarrow H^+ b)$, $B(H^+ \rightarrow W^{+(*)} A)$, and $B(A \rightarrow b\bar{b})$. The last branching ratio depends only on the mass of the pseudoscalar $A$, as all significant decay modes of $A$ originate from the couplings of $A$ to fermion pairs, which are all proportional to the mass of the fermion and to $\cot\beta$ only. The product of the first two branching ratios can be roughly expressed from the related couplings as Eq. (7), where $C_1$ and $C_2$ are positive constants and $C_2$ increases as $m_{H^\pm}$ increases; typical values of these two constants multiplied by $B(A \rightarrow b\bar{b})^{-1}$ are shown in Table II. Eq. (7) approximates the trend with $\tan\beta$ quite well for $m_{H^\pm} - m_A \gtrsim 40$ GeV at $\tan\beta \gtrsim 1$. Therefore, we can deduce lower limits on $\tan\beta$ from the 95% CL$_s$ upper limits on $B^{\mathrm{sig}}_{H^+}$ for fixed masses of the Higgs bosons. We use the ScannerS-2 [22,23] program, which is interfaced with HDECAY 6.60 [57][58][59], to calculate the branching ratios. We plot the lower limits on $\tan\beta$ at the sample Higgs mass points in Fig. 7 and the corresponding contours in Fig. 8.
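Translating an upper limit on $B^{\mathrm{sig}}_{H^+}$ into a lower limit on $\tan\beta$ amounts to inverting a function of $\tan\beta$ that decreases monotonically in this regime. A minimal numerical sketch follows; the branching-ratio function and its illustrative $\cot^2\beta$ parametrization are assumptions standing in for tabulated ScannerS/HDECAY output, not the paper's Eq. (7).

import numpy as np
from scipy.optimize import brentq

def tanb_lower_limit(b_sig_limit, b_sig_of_tanb, lo=1.0, hi=60.0):
    """Lower limit on tan(beta) from an upper limit on the signal strength.

    b_sig_of_tanb(t) should return B(t -> H+ b) * B(H+ -> W+ A) * B(A -> bb)
    at fixed Higgs masses, e.g. interpolated from ScannerS/HDECAY output
    (hypothetical interface). Returns None if no limit can be set above lo.
    """
    f = lambda t: b_sig_of_tanb(t) - b_sig_limit
    if f(lo) < 0:            # signal already below the limit at tan(beta) = lo
        return None
    return brentq(f, lo, hi)

# Hypothetical example with a crude cot^2(beta)-type scaling of the first two
# branching ratios (placeholder constants, not the paper's C1, C2 values):
# b_sig = lambda t: 0.5 * (1.0 / t**2) / (1.0 + 0.3 / t**2) * 0.8
# print(tanb_lower_limit(0.0026, b_sig))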
The correlation of the $\tan\beta$ (or $\cot\beta$) limits with $m_A$ is attenuated at larger $m_{H^\pm}$ values. The strictest limit found among the sample points is 10.0, at $(m_{H^\pm} = 100$ GeV, $m_A = 50$ GeV$)$, and the limit generally weakens as the masses of the two Higgs bosons move away from this point in any direction within the mass space considered. We now note, however, that we have ignored the alternative decay mode $H^+ \rightarrow t^{(*)}\bar{b} \rightarrow W^+ b\bar{b}$, which can make the result of Section II unreliable here. We use two criteria to judge the significance of this alternative mode: (a) at the current limit value of $\tan\beta$, check whether the contribution of the alternative mode to the same final states is still negligible; (b) for points where $\tan\beta$ is not limited, check roughly whether the maximum of the actual signal strength (with the two decay modes summed) can be larger than the previously calculated limit on $B^{\mathrm{sig}}_{H^+}$. Points meeting the criteria are noted in Fig. 7 and linearly generalized in Fig. 8. We also note that, since too small a $\tan\beta$ can violate perturbative unitarity in the process studied in this work, our calculations do not include the region $\tan\beta < 1$, which is usually studied via other processes [15]. The result expands our previous work [12] to the $(m_{H^\pm}, m_A)$ plane, showing a significantly more constrained area at $m_{H^\pm} \leq 130$ GeV. The most recent study of the $\tan\beta$ constraints for light $H^+$ and $A$ at various $(m_{H^\pm}, m_A)$, to the best of our knowledge, is included in a search for $H^\pm \rightarrow W^\pm A \rightarrow W^\pm \mu\mu$, where the $W$ boson is on shell, with the ATLAS detector [56]. We compare our result (where $A \rightarrow b\bar{b}$) with this ATLAS result (where $A \rightarrow \mu\mu$) in Fig. 9. We find our constraints stronger than the constraints imposed by the $A \rightarrow \mu\mu$ mode at most Higgs mass points included in both. The potential $\tan\beta$ limits from high-luminosity pseudodata that prefer the SM within the current systematic uncertainties, translated from the potential signal limits in Fig. 5, are also compared in Fig. 9. At almost all sample points considered in this work, the potential limits on $\tan\beta$ are 21-38% higher than the observed limits shown in Fig. 7. We can also consider the scenario where the masses of the pseudoscalar $A$ and the scalar $H$ are swapped. While the current $B^{\mathrm{sig}}_{H^+}$ limits should still be applicable by simply replacing $A$ with $H$, as discussed in Section II, the $\tan\beta$ limits can no longer be translated in the same way. The $H$-version of $B^{\mathrm{sig}}_{H^+}$ can depend differently on $\tan\beta$ and significantly on $m_{12}^2$: unlike $A$, the non-SM scalar $H$ can decay into two photons through a charged-Higgs loop, and the related di-charged-Higgs coupling $c(HH^+H^-)$ depends linearly on the soft-breaking $Z_2$ parameter $m_{12}^2$ and does not decrease to zero as $\tan\beta \rightarrow \infty$, as the difermion couplings do [4,60], making this di-photon channel dominant when $\tan\beta$ or $m_{12}^2$ is large. The constraints on the parameter space of type-I 2HDM in this scenario would take the form of a completely different $\tan\beta$-$m_{12}^2$ distribution. We do not show this scenario here, as the constraints would be rather relaxed.
Another relevant scenario is the one where both $A$ and $H$ are lighter than the charged Higgs boson. The $H$-mediated contribution to $B^{\mathrm{sig}}_{H^+}$ is negligible in part of the parameter space, and in such cases the previous $\tan\beta$ limits, originally derived for $m_H > m_{H^\pm}$, can be reused regardless of the presence of $H$. The branching ratios of the charged Higgs boson to $W^\pm A$ and $W^\pm H$ are almost equal near the alignment limit, while in the subsequent decay, $H \rightarrow b\bar{b}$ can be suppressed by three competing channels: $H \rightarrow ZA$, $H \rightarrow AA$, and the di-photon channel described in the previous scenario. The first two channels open only when $m_H > m_A$, whereas the di-photon channel can open at any $m_H$ as long as the soft-breaking $Z_2$ parameter $m_{12}^2$ is large enough for the limit value of $\tan\beta$. As an example, we note an approximate criterion for when the suppression by the di-photon channel can occur: for $m_{H^\pm} = 100$-$170$ GeV, $m_H < m_A, m_{H_{\mathrm{SM}}}, m_{H^\pm}$, and $\tan\beta \sim 10^0$ or $10^1$, the suppression sets in above a minimum value of $m_{12}$ that depends on the limit value of $\tan\beta$. We illustrate this criterion at a moderately constrained point $(m_{H^\pm} = 140$ GeV, $m_A = 60$ GeV$)$: the limit on $\tan\beta$ is 6.1 for $m_H > m_{H^\pm}$, so the formula gives $m_{12} > 0.61$ TeV for $m_H = 60$ GeV. This range of $m_{12}$ actually corresponds to $B(H \rightarrow b\bar{b}) < 9.2\%$ if $\tan\beta$ is 6.1, while $B(A \rightarrow b\bar{b})$ is always 80% at this mass point. Therefore, the limit on $\tan\beta$ will be almost identical to the previous value of 6.1 in this range of $m_{12}$. We also note that, conversely to the decay of $H$ into $A$, the channel $A \rightarrow ZH$ opens when $m_H < m_A$, and in such cases the limit on $\tan\beta$ weakens and depends on $m_H$. A complete discussion of the $m_A, m_H < m_{H^\pm}$ scenario would involve too many free parameters, lying beyond our current method of reinterpreting the data. IV. SUMMARY We have studied the prospect of a light charged Higgs boson, produced from top quark pairs at the LHC and decaying into a $W$ boson and a pair of bottom quarks via an intermediate neutral Higgs boson. We set upper limits on the signal strength of this charged-Higgs channel with the ATLAS measurement at LHC 13 TeV of the $WWbb\bar{b}\bar{b}$ final states, in which the distributions of the invariant mass of the two closest b-jets show the greatest signal sensitivity. The 95% CL$_s$ upper limit on the branching ratio $B(t \rightarrow H^+ b, H^+ \rightarrow W^+ H_i, H_i \rightarrow b\bar{b})$, where $H_i$ represents the neutral Higgs boson that participates in the decay, varies from 0.26% to greater than 1.5% on the mass plane of a 100-160 GeV charged Higgs boson and a 10-110 GeV neutral Higgs boson. Other non-SM contributions to the same final states are not included, yet they are insignificant for most Higgs masses sampled if considered in the 2HDM. The limits are expected to be lowered by 32-45% for most Higgs masses sampled with future high-luminosity data, if the SM is preferred then.
The signal limits are translated into constraints on the parameter space of type-I 2HDM, for which we have especially discussed the current general constraints on the possible hierarchies of the Higgs masses. We discuss the parameter constraints for specific mass hierarchies, as we argue that the decay properties of the $CP$-odd Higgs boson $A$ and the $CP$-even Higgs boson $H$ are different. The 95% CL$_s$ lower limit on $\tan\beta$ when $m_H > m_{H^\pm} > m_A$ varies from 1 to 10; future high-luminosity data can potentially raise the limits by 21-38% for most Higgs masses sampled. It is worth expecting further increased precision, especially in the $B^{\mathrm{sig}}_{H^+}$-sensitive distributions of $m^{\Delta\min}_{bb}$, in future experimental data. The result demonstrates the power of the $W^\pm b\bar{b}$ final states of the charged Higgs boson in constraining the parameter space of type-I 2HDM or models with similar couplings. We encourage dedicated experimental searches for further improvements.
FIG. 1. Intuitive contours of $\chi^2_{\mathrm{best}}$ on the $(m_{H^\pm}, m_A)$ plane, minus the $\chi^2$ of the pure-SM prediction. Note that the best-fitted $B^{\mathrm{sig}}_{H^+}$ at each point varies with the point. Contours in this work are all plotted using Matplotlib [40], which implements a marching-squares algorithm to compute contour locations based on the sample points.
FIG. 2. Reference limits on $B^{\mathrm{sig}}_{H^+}$ for each data set of normalized distributions in the di-lepton channel with at least three b-jets and the lepton-plus-jets channel with at least four b-jets, respectively. The kinematic variables include: (1) the scalar sum of the transverse momenta $p_T$ of the lepton(s) and jets in the event ($H_T$) and that of only the jets ($H_T^{\mathrm{had}}$); (2) the $p_T$ of the $i$-th highest-$p_T$ b-jet ($p_T^{b_i}$); (3) the invariant mass, $p_T$, and angular distance of the first and second highest-$p_T$ b-jets ($m_{b_1 b_2}$, $p_{T, b_1 b_2}$, and $\Delta R_{b_1 b_2}$), and those of the two closest b-jets in angular distance ($m^{\Delta\min}_{bb}$, $p^{\Delta\min}_{T, bb}$, and $\Delta R^{\Delta\min}_{bb}$). A smaller limit value corresponds to a greater average sensitivity of a data set to the decay channel studied in this work.
FIG. 3. 95% CL$_s$ upper limits on $B^{\mathrm{sig}}_{H^+}$ on the $(m_{H^\pm}, m_A)$ plane. Each tile corresponds to a sample point located at the tile's center. The lighter grey tiles represent limits between 1% and 1.5%, and the darker grey tiles represent limits over 1.5%.
FIG. 6. Likely mass separations between the charged Higgs boson and the two non-SM neutral Higgs bosons for $m_{H^\pm} = 100$-$170$ GeV, in type-I 2HDM close to the alignment limit. $T_{\mathrm{fit}}$ and $\sigma_{\mathrm{fit}}$ in the upper (lower) plot are the central value and the uncertainty, respectively, of the fit of the $T$ parameter assuming $U = 0$ (type-I 2HDM predicts $|U|$ values no larger than 0.01 at all shown points), in Ref. [42] (Ref. [43]). All $m_H, m_A < m_{H^\pm}$ points in the lower plot have deviations exceeding $-2\sigma_{\mathrm{fit}}$, which is possible since the electroweak precision constraints form a multivariate normal distribution.
FIG. 7. 95% CL$_s$ lower limits on $\tan\beta$ of type-I 2HDM on the $(m_{H^\pm}, m_A)$ plane, translated from Fig. 3. Each tile corresponds to a sample point located at the tile's center. $\tan\beta$ is not limited at the grey-colored sample points, and limits at sample points above the red line are unreliable.
TABLE I.
Ranges of the parameter scan. Masses are in GeV. c(HVV) is the gauge coupling factor of the non-SM scalar H.
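Since the limits are reported as tiles on the (m_H±, m_A) plane and the contours are drawn with Matplotlib's marching-squares routine (see the footnote to FIG. 1), the presentation step can be sketched as follows. Only the mass ranges follow the text; the 10 GeV grid spacing and the limit values themselves are invented placeholders, not results of this analysis.

```python
# Sketch of the tile-and-contour presentation of upper limits on the
# (m_H+, m_A) plane. Limit values here are random placeholders, not results.
import numpy as np
import matplotlib.pyplot as plt

m_charged = np.arange(100, 161, 10)   # assumed 10 GeV spacing, GeV
m_neutral = np.arange(10, 111, 10)    # assumed 10 GeV spacing, GeV
X, Y = np.meshgrid(m_charged, m_neutral)

rng = np.random.default_rng(0)
limits = 0.26 + rng.random(X.shape) * 1.3   # placeholder limits in percent

fig, ax = plt.subplots()
# One tile per sample point, centred on the point (pcolormesh draws the tiles).
mesh = ax.pcolormesh(X, Y, limits, shading="nearest", cmap="Greys")
# Marching-squares contours at chosen limit values.
cs = ax.contour(X, Y, limits, levels=[0.5, 1.0, 1.5], colors="k")
ax.clabel(cs, fmt="%.1f%%")
ax.set_xlabel(r"$m_{H^\pm}$ [GeV]")
ax.set_ylabel(r"$m_A$ [GeV]")
fig.colorbar(mesh, label="95% CLs upper limit on signal branching ratio [%]")
plt.savefig("limit_plane_sketch.png", dpi=150)
```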
7,914.2
2023-04-16T00:00:00.000
[ "Physics" ]
From TshwaneLex to TshwanePedia: Creating and Flexibly Maintaining The addition of a restricted number of features to the dictionary (compilation) soft- ware TshwaneLex suffices to turn this application into a tool for the creation and maintenance of encyclopaedias. This article gives a brief overview of those extra features, using the online encyclo- paedia of the James Randi Educational Foundation (JREF) as case study. In South Africa, the dictionary (compilation) software TshwaneLex is wellknown.Development of the application started in Pretoria in mid-2002, and already one year later a first release was in use at the Sesotho sa Leboa National Lexicography Unit (NLU).Since then, all members of the eleven NLUs have come into contact with TshwaneLex, either through training sessions organised by the Pan South African Language Board (PanSALB) and/or simply as a result of the fact that they use TshwaneLex on a daily basis in their respective units.Several commercial dictionary publishers in South Africa, including Oxford University Press and Pharos Dictionaries, also use or are in the process of acquiring TshwaneLex.Reports of the first South African products compiled and placed online with TshwaneLex may be found in Lexikos 13 (De Schryver 2003: 10-12) and Lexikos 14 (De Schryver et al. 2004: 56-57, 66). Over the years, TshwaneLex has also been well-received at all major international lexicography conferences, including TAMA 2003 (Johannesburg), AFRI-LEX 2003(Windhoek), DWS 2003 (Brighton) AFRILEX 2005 (Bloemfontein).At each of those meetings, the then-latest features of TshwaneLex were introduced, features about which one can read more in the proceedings of each of those conferences. Today, there are TshwaneLex users in the four corners of the world: from Papua New-Guinea and China in the East, to the United States in the West, from Estonia and Ireland in the North, to South Africa in the South.The dictionary projects are either government-sponsored (e.g. at the Royal National Academy of Medicine in Spain, or at the Research Centre of African Languages and Literatures in Congo), commercial (e.g. at Van Dale Lexicografie in the Netherlands, or at Macmillan in Botswana), or private (with users in Japan, Macao, Afghanistan, Albania, Slovenia, the Czech Republic, Germany, Luxembourg, France, the United Kingdom, Kenya, etc.). Clearly, in order to cover such a wide variety of projects and languages, each with its own unique dictionary structure and needing its own script(s), TshwaneLex had to be a truly off-the-shelf application.To attain this, the software was built around three core concepts: user-friendliness, language-independency, and full customisability.User-friendliness is achieved by means of close cooperation between the developers of the software and numerous beta testers around the world.The language-independent nature of the application is realised thanks to full Unicode support on all levels, which also allows for the simultaneous use of various left-to-right and right-to-left scripts.Customisability is brought about by, among others, a powerful Document Type Definition (DTD) editor and linked styles system.This third aspect, customisability, turned out to be so powerful that it led to two adaptations of the basic Tshwa-neLex code: TshwanePedia for the production of encyclopaedias, and Tshwa-neTerm for the management of terminology.In this article we will be concerned with the former, and in a subsequent one (cf.Joffe and De Schryver 2005a) we will look into the latter. 
From TshwaneLex to TshwanePedia One of the most important aspects we felt had to be in TshwaneLex was a high degree of built-in customisability, as each dictionary project has its own struc-ture and styles, or "style guide".To this end, we built functionality into Tshwa-neLex to allow end-users to customise the DTD.The DTD defines the structure of articles in the dictionary, and the fields that appear in a specific dictionary.Tied in with the DTD is the styles system, which allows one to customise the entire formatting for all fields (e.g.bold/italics, Times New Roman/Arial, as well as common punctuation to appear before, after or between fields).An indepth (technical) discussion of the multilayered TshwaneLex DTD editor may be found in Joffe and De Schryver (2005). What is important here is that this customisability allows for the creation of other types of reference works with TshwaneLex, not just 'dictionaries'.For example, several TshwaneLex users have (ab)used the software for the creation of bibliographies, address databases, and even diaries. When the James Randi Educational Foundation (JREF) approached us to place their 'Encyclopedia of Claims, Frauds, and Hoaxes of the Occult and Supernatural' (Randi 1995) online, we realised that TshwaneLex was indeed flexible enough to handle such a project.At the same time, however, we seized the opportunity to add a string of additional features to turn TshwaneLex into TshwanePedia.The extra features, although predominantly useful for the compilation of encyclopaedias, have been 'fed back' into TshwaneLex, thus becoming available for dictionary compilation as well.Three issues will be focused upon, viz.'window layout', 'multimedia' and 'export' features. The first difference one notices when comparing a typical dictionary with a typical encyclopaedia, is that encyclopaedia entries are generally much longer than dictionary articles.Additionally, whereas the data of a single dictionary article is normally broken up into many different chunks, with each chunk being placed in a separate and carefully thought-out field in the DTD, encyclopaedia entries are more straightforward.Although it remains important for the compilers of an encyclopaedia to be able to see the 'structure' of the entries they are compiling (in the Tree View), more (horizontal) space is thus often needed for the various input boxes.For that reason, a so-called optional 'Wide Tools window layout' was implemented, which is accessible with a single 'hotkey'.Addendum 1 shows a screenshot of the TshwaneLex interface with the wide view enabled.(Note that when the wide view is not enabled, the entire right side is taken up by a preview of the encyclopaedia entries.) Secondly, encyclopaedias also typically contain far more illustrations throughout.A new (multimedia) data type 'Image file' was added to that intent.In the DTD for the encyclopaedia shown in Addendum 1, images (and their captions) may be added following any paragraph.All images are stored in a central place, and whenever compilers want to add a new image, they can simply use the 'Browse …' button to select a stored image.This has been taken one step further.Given that the corpus of the future is the Web, TshwaneLex already had a hotkey to launch a Google Web search for the lemma sign one is working on, the idea being that one can simply select/adapt corpus lines from the Web.This functionality has been extended to the images, with another hotkey now also launching a Google Images search. 
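The DTD-plus-styles idea described above, with fields defined by the DTD and per-field formatting and surrounding punctuation defined by the styles system, can be illustrated with a small sketch. This is not TshwaneLex code; the field names and style options are invented purely to show how structure and formatting can be kept separate.

```python
# Toy illustration of a DTD-like field structure with a separate styles layer.
# Field names and style options are hypothetical, not TshwaneLex's own.

ENTRY = {
    "lemma": "dowsing",
    "definition": "The claimed ability to locate water or minerals with a rod.",
    "image_caption": "A forked dowsing rod.",
}

# Styles: HTML markup plus punctuation placed before/after each field.
STYLES = {
    "lemma":         {"tag": "b",    "before": "",  "after": "  "},
    "definition":    {"tag": "span", "before": "",  "after": " "},
    "image_caption": {"tag": "i",    "before": "[", "after": "]"},
}

def render(entry, styles):
    """Assemble one formatted entry from its fields and the styles layer."""
    parts = []
    for field, value in entry.items():
        s = styles.get(field, {"tag": "span", "before": "", "after": " "})
        parts.append(f'{s["before"]}<{s["tag"]}>{value}</{s["tag"]}>{s["after"]}')
    return "".join(parts).strip()

print(render(ENTRY, STYLES))
```

Keeping the styles in a separate table means the same entry data can be reformatted for a different "style guide" without touching the entries themselves, which is the point of the customisability discussed above.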
The third extra feature concerns flexibility of the export, especially with online encyclopaedias in mind.TshwaneLex already provided several methods for placing reference works online.The online dictionaries described in Lexikos 13 (De Schryver 2003: 10-12) and Lexikos 14 (De Schryver et al. 2004: 56-57, 66), for example, were placed on the Web with the 'TshwaneLex online software module'.This is a customisable set of PHP scripts that provide functions for creating a search interface where the user can enter words, to perform searches on a TshwaneLex file stored in a MySQL database, and to generate HTML output.In order to decrease the load on the web server, one may rather wish to generate 'static' output, where the reference work is placed online as a pre-generated file or set of files.In this regard the 'Export HTML' features were extended, with options to create one file per alphabetical category or even one file per encyclopaedia entry, in addition to one single large file.In the screenshot shown below, for instance, the output will be generated as one file per entry, with the data for each of those entries being 'dropped' into a template file. Placing the JREF encyclopaedia online The first edition of James Randi's encyclopaedia was published as a hardcopy in 1995, by St. Martin's Press in New York.In 1996 the not-for-profit James Randi Educational Foundation (JREF) was founded "to promote critical thinking by reaching out to the public and media with reliable information about paranormal and supernatural ideas so widespread in our society today" (JREF 1999(JREF -2005)).When the JREF website was launched in 1999, the idea rose to link the entire contents of James Randi's encyclopaedia to the site.Various attempts produced mixed results over the years, and in mid-2005 TshwaneDJe HLT (the company which created TshwaneLex) agreed to undertake the task.The encyclopaedia was received as a set of WordPerfect files, and these were parsed and then imported into TshwaneLex.The material was proofread, corrected and extended, and a TshwaneLex plug-in was created to transform the (implicit) cross-references into hyperlinks.In the process, the three adaptations mentioned above were made to the software.The encyclopaedia pages, one per alphabetical category, were uploaded to the JREF site on July 28, 2005, and James Randi 'announced' this one day later in his weekly column. In just four days' time, from August 1 to 4, no less than sixty 'bloggers' referred to and commented on the encyclopaedia -an overwhelming response.Today's bloggers clearly complement the feedback strands used so far in our research on (online) dictionary use: "a well thought out log file has been unobtrusively keeping track of all aspects of dictionary use, while an online feedback form has allowed for a more traditional and open way of receiving feedback" (De Schryver and Joffe 2004: 188).One blogger pointed out that large HTML files are cumbersome (= 'implicit feedback'); in the update that went live on August 9, 2005, seven hundred pages were automatically exported and uploaded, one per entry, instead of one per alphabetical category (= 'reaction').See Addendum 2 for an example of the online encyclopaedia in this regard. As one can see, creating and subsequently flexibly maintaining an online encyclopaedia has now become available at every compiler's fingertips thanks to TshwanePedia (just as was already the case for online dictionaries thanks to TshwaneLex).
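The "static" export option described above, where each encyclopaedia entry is dropped into a template file and written out as its own HTML page (or grouped into one page per alphabetical category), can be sketched as follows. The template placeholder, file naming, and entry data are all assumptions for illustration; the actual TshwaneLex export module is not reproduced here.

```python
# Sketch of a static HTML export: one file per entry, or one per first letter.
# Template placeholder and file layout are assumptions, not TshwaneLex's format.
from collections import defaultdict
from pathlib import Path

TEMPLATE = "<html><body><h1>{title}</h1><div>{body}</div></body></html>"

ENTRIES = {  # hypothetical encyclopaedia data
    "Astrology": "<p>Belief that celestial positions influence events.</p>",
    "Aura": "<p>A claimed field of energy surrounding the body.</p>",
    "Dowsing": "<p>The claimed ability to locate water with a rod.</p>",
}

def export(entries, out_dir, per_entry=True):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    if per_entry:                       # one pre-generated page per entry
        for title, body in entries.items():
            page = TEMPLATE.format(title=title, body=body)
            (out / f"{title.lower()}.html").write_text(page, encoding="utf-8")
    else:                               # one page per alphabetical category
        groups = defaultdict(list)
        for title, body in entries.items():
            groups[title[0].upper()].append(f"<h2>{title}</h2><div>{body}</div>")
        for letter, blocks in groups.items():
            page = TEMPLATE.format(title=letter, body="".join(blocks))
            (out / f"{letter}.html").write_text(page, encoding="utf-8")

export(ENTRIES, "static_site", per_entry=True)
```

Pre-generating the pages in this way shifts all rendering work to export time, which is exactly the server-load argument made above for the per-entry update of the JREF site.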
2,255.4
2010-02-19T00:00:00.000
[ "Education", "Computer Science" ]
Mast Cells in the Mammalian Testis and Epididymis—Animal Models and Detection Methods Mast cells (MCs) are an evolutionarily well-conserved cell type, mediating and modulating allergic responses in innate immunity and tissue remodeling after chronic inflammation. Among other tissues, they inhabit both the testis and epididymis. In the testis, MCs usually appear in the interstitial compartment in humans, but not in other standard experimental models, like rats and mice. MCs seem to be responsible for testicular tissue fibrosis in different causes of infertility. Although experimental animal models reproduce the effect on MC activation or penetration into the interstitial tissue seen in humans to some extent, there is inconsistency in the available literature regarding experimental design, animal strain, and detection methods used. This comprehensive review offers an insight into the literature on MCs in mammalian testes and epididymides. We aimed to find the most suitable model for research on MCs and offer recommendations for future experimental designs. When using in vivo animal models, tunica albuginea incorporation and standard histological assessment need to be included. Domesticated boar strains kept in modified controlled conditions exhibit the highest similarity to the MC distribution in the human testis. 3D testicular models are promising but need further fine-tuning to become a valid model for MC investigation.

Introduction

Mast cells (MCs) have a crucial role in promoting hypersensitivity reactions and reactions to parasitic diseases. They are essential in developing autoimmune diseases, promoting acute and chronic inflammatory responses [1,2], and are recognized as critical regulators of immune modulation, capable of suppressing allergic reactions and chronic inflammation [3]. The mast cell precursor population originates in the yolk sac [4], while in adult tissues, MC precursors reside in the bone marrow and migrate to tissues where they further differentiate and serve as sentinel cells under the influence of intrinsic and external stimuli [5]. Mast cell hematopoietic progenitors express CD34+ on their surface, and both KIT (type III receptor tyrosine kinase, CD117) and interleukin (IL) 3 initiate their differentiation in the bone marrow [6,7]. c-kit, which encodes KIT (CD117), is essential in regulating all aspects of MC biology besides differentiation: survival, proliferation, secretory functions, and migration. Unlike MCs, most hematopoietic cells lose their KIT expression in the process of differentiation. The stem cell factor (SCF) functions as its specific ligand and is also known by several other names, including steel factor and MC growth factor.

Figure 1. (1) The CD34+ hematopoietic stem cell is the MC precursor differentiating into MC progenitors in the bone marrow. They reach the tissues MCs reside in and differentiate locally. (2) Under various stimuli, MCs degranulate, and the secreted mediators affect surrounding cells. (3) MCs are crucial in the pathophysiology of asthma, gastrointestinal disorders, allergy, cardiovascular disease, vasodilatation, hemostasis, and cancer. (4) MCs can be activated by immunoglobulin (Ig)E-dependent and IgE-independent pathways. IgE-dependent stimulation starts with pre-exposure to an antigen, which sensitizes the MC. The second exposure links the IgE and high-affinity IgE receptor (FcεRI) with the antigen and causes degranulation. The IgE-independent pathway does not require sensitization. Various mediators (neuropeptide Y, substance P, complement fragments, polypeptides, cytokines, toxins) can directly activate or degranulate MCs.
MCs are mononuclear, granulated cells of the immune system that have an oval or irregular shape. Due to the presence of acidic histamine, the abundant granules that overlay the centrally positioned nucleus stain metachromatically with alkaline dyes [12]. Intact mast cells have tightly packed granules; they are spindle-shaped, unlike spreading MCs, which have fewer granules, but both stain purple-red with toluidine blue. On the other hand, degranulated cells are pale pink with a prominent nucleus and no longer stain metachromatically [13]. MC activation occurs as a response to autoreactive T-cell stimuli, immunoglobulin E, complement, cytokines, neuropeptides, physical trauma, or sunlight [2]. In the granules of MCs, histamine, heparin, chymase, tryptase, cathepsin G, carboxypeptidase A, and tumor necrosis factor-alpha (TNFα) can be found pre-synthesized and may be released into the surrounding tissue right after MC activation [14]. Consequently, degranulation exerts immediate effects on the surrounding tissue.

Allergies

The functional characterization of MCs is complex due to their distribution, but also to their dual behavior in the organism, since they can act simultaneously as "sensors" [22] and effective "warriors". Two major routes of MC activation are known: immunoglobulin E (IgE)-dependent and IgE-independent pathways (Figure 1). The IgE-dependent pathway is considered the main route of MC physiological activation in host defense against parasitic infections and in the initiation of type I allergic reactions [23], and it requires sensitization to an allergen. On the other hand, IgE-independent pathways have also been proven to serve pivotal roles in the pathophysiology of allergic and pseudoallergic responses; they include MC activation by inflammatory mediators, complement fragments, cytokines, and the neuropeptide substance P through specific G-protein-coupled receptors (GPCRs) [3]. When activated by IgE-induced signaling through the canonical high-affinity IgE receptor (FcεRI) [23], MCs respond through an active degranulation process, characterized by a fast release of various intracellularly stored mediators. The inducers and tissue-specific supporters of the MC active response remain incompletely characterized, despite recently proposed candidates. One of them is interleukin IL-33, a constitutively expressed IL-1 family member, which has the above-mentioned dual role of activating and supporting MCs, with a significant emphasis on their inflammatory response [22]. IL-10, too, has a dual role that may contribute to negative feedback regulation in the context of inflammation-related pathologies, in which IL-10 promotes the transient expansion of MCs but then terminates the inflammatory milieu by inducing MC apoptosis [24]. IgE-independent signaling pathways have been related to MC activity in immediate hypersensitivity reactions after the discovery that a diverse range of peptides such as neuropeptide Y or substance P [25], nerve growth factor (NGF), calcitonin gene-related peptide (CGRP), pituitary adenylate cyclase-activating peptide, and platelet-activating factor (PAF)-4 [26] can activate human MCs through members of the G protein-coupled receptor family (GPCR) called the Mas-related G protein-coupled subfamily of receptors (MRGPRs). Notably, the MC membrane receptor MRGPRX2 has been identified as a cause of pseudo-allergic drug reactions [26].
Other than body-produced peptides, MRGPRX2 was shown to bind diverse externally delivered agonists such as insect venom components and many drugs [27]. Therefore, MRGPRX2 inhibitors are expected to be tested in MC-related medical conditions with few effective therapeutic agents, such as postoperative pain, migraine, and drug-induced acute pseudo-allergic reactions [28].

Contribution of Mast Cells to the Pathology of the Mammalian Testis and Epididymis

MCs that reach and reside in the testis may be (a) regular MCs, which are quiescent and function physiologically, or (b) pathological MCs, activated after residing in the testis or arriving de novo in relation to the pathological process (Figure 2). They are typically found in the connective tissue of the testis' tunica albuginea or the epididymis in most mammals. Unlike rodents, human testes contain MCs in the interstitial tissue under physiological conditions [29][30][31]. Why this difference exists, and why MCs migrate to the interstitium or merely degranulate in some experimental animal models, is still not elucidated. MCs contribute to the immune privilege of the testis and the homeostasis it maintains [32] through their general role in vascular permeabilization and immunomodulation, but they also have suggested roles in spermatogenesis, supported by the existence of MC-spermatozoa interaction through the binding of tryptase and proteinase-activated receptor-2 (PAR-2) [33,34].

Figure 2. MCs have a role in vascular permeabilization, testicular immune privilege, and immunomodulation. If the testis is affected by infection, inflammation, environmental factors, tumors, cryptorchidism, or testicular torsion, MCs increase in number or degranulate and may contribute to the severity of fibrosis, and even to germ cell loss and tubular wall thickening. SC - Sertoli cell, GC - germ cell, LC - Leydig cell, MP - macrophage.

Several pathological conditions are related to the MC active response in the mammalian male reproductive system, such as infection or inflammation, testicular torsion, immunological factors, cryptorchidism, environmental factors, tumors, epididymis dysfunction, and excurrent duct obstruction, and every one of them is a possible cause of sub- or infertility [16]. For instance, testicular fibrosis, one of the most severe infertility diagnoses, could be related to a long-term MC pro-inflammatory response usually followed by fibrogenic actions. Fibrosis occurs as the effect of extensive scarring and overgrowth after fibroblast activation into fibrotic-phenotype myofibroblasts, which secrete collagen and fibronectin.
MC fibrogenic activity is established through secretion of tryptase, chymase, histamine, TGF-β1, IL-13, IL-9, CCL2, PDGF, and glycosaminoglycan FGF-2 from their granules, although some, such as chymase or metalloproteinases, could have an anti-fibrotic effect, reviewed in Zhang and Kurashima [3]. The specific pathways of MC regulation in testicular pathologies remain uncharacterized. One of the reasons could be that human testicular and epididymal pathologic conditions are primarily investigated in an already developed form, which decreases the possibility to investigate the MC-caused damage mechanism or their behavior in the activation phase. Suitable animal models give more mechanistic insight into phases of disease progression. Evolutionarily Conserved Mast Cells Mammalian mast cells have exquisite evolutionary conservation. Some data suggest they (or their earlier forms) appeared about 450-500 million years ago in a common ancestor humans share with hagfish, lamprey, and sharks, even before adaptive immunity or chorda development [35]. The same morphology and histochemical appearance were found in the sea squirt (Ciona intestinalis) test cells, which already show some similarities with human mast cells, such as prostaglandin D 2 production. The latter contain granules that store histamine and heparin-serine protease complexes. When test cells are activated, they produce prostaglandin D 2 like MCs and are considered their counterparts in C. intestinalis [36]. While birds have MCs residing in the epididymis and no reported MCs in the testis [36][37][38], amphibians, with their representative, frogs (Rana esculenta), are a standard model of testicular MC investigation. The testes of frogs have been investigated at the light and electron microscope level and showed scarce MCs residing in the testicular interstitium, just like in reptiles (lizard, Podarcis s. sicula and crocodile, Caiman crocodilus) [39][40][41][42][43]. However, the seasonal changes during the annual reproductive cycle in testicular MC degranulation and number in the frog and lizard (a peak in early winter and late spring) are not a feature easily compared to human tissues. Mast Cell Detection Methods in the Mammalian Testis and Epididymis Despite the proven existence of MC in the male reproductive system, during this literature review, noticeable incoherency of methods used to locate MCs was found, including the fixation and staining method and tissue sampling (Supplementary Table S1). Most authors clearly state that the measurements, histopathological or molecular (real-time PCR, quantitative PCR, high-performance liquid chromatography (HPLC)), were carried out in whole testes (connective tissue of tunica albuginea and interstitium) [44][45][46][47][48][49][50][51], some studies analyzed MCs only in the interstitial compartment [52,53], while in a few studies it was not specifically reported [54][55][56][57]. This could lead to a decrease in the consistency of the results between studies since most MCs reside in the connective tissue of the tunica albuginea in most animal species [58]. The most considerable influence of fixation on MC detection is related to MCTs (mucosal, tryptase-only), which require fixation in non-aldehyde solutions (Carnoy) and cannot be detected with formalin fixation, like MCTCs (connective tissue, tryptase, chymase, and carboxypeptidase) can, which are not sensitive to formalin [59,60]. 
The previous findings are not an insurmountable problem in the testicular detection of MCs, as in the testis, almost only connective-tissue MCs are found. Parallel MC counting was performed from Bouin-Hollane's fluid-fixed, paraffin-embedded and 2% phosphate-buffered glutaraldehyde-fixed, Epon-embedded specimens, both stained with toluidine blue dye [61] to obtain a correct measurement of possible MC total volume increase per testis, while the volume of MCs per testis may be variable due to cell number and single-cell volume. Average MC volume was different in the differently embedded sample groups. Another fixative comparison was performed regarding epididymides, fixed in either Schaffer solution (containing formalin) or BLA (basic lead acetate) to qualitatively distinguish MCs primarily found in the connective tissue or mucosa [62]. Regarding the MC tissue visualization, toluidine blue on paraffin-or resin-embedded tissues is still the most commonly used, while historically one of the oldest MC detection methods, alone or in combination with another method-Giemsa, alcian blue, safranin, aldehyde fuscin, or immunohistochemistry. Toluidine blue is a metachromatic (a pH-dependent dye that stains cell elements a different color from the dye), staining heparin-containing granules purple or red [2,63]. As a simple, non-sensitive chemical, it can be applied to tissues after various methods of fixation and embedding [44,46,47,[49][50][51]53,55,57,58,61,62,. However, its major disadvantage is the inability to distinguish immature from mature MCs, which could be done by alcian blue-safranin staining [45,48,67,72,93]. Moreover, alcian blue-safranin can help distinguish connective tissue MCs from mucosal MCs, although it may not be necessary for the testicular MC analysis, where most, if not all, MCs are connective tissue-type [83]. Immunohistochemical markers detecting MCs in the testis include specific MC proteases (carboxypeptidase, chymase, and tryptase) [30,94], but also KIT (CD117) [95], which also stains Leydig cells, seminiferous epithelium, and the sperm acrosome [96]. One group of authors used 5-hydroxytryptamine (5-HT) or 5-HT receptor subtypes as a marker of MCs [97], analyzed by immunohistochemistry, although it has been shown that only 40% of alcian blue-positive MCs stain with 5-HT [83]. In rat tissues, an antibody against rat mast cell protease 1 (RMCP1) was used next to toluidine blue dye [49,64]. Several other, less specific markers are used in immunohistochemistry for MC detection but were not applied in testis investigations to our knowledge, such as FcεRIα (α-chain of the high-affinity IgE receptor Fc region) [97]. Further detailed summary on MC markers, in general, may be found in reviews regarding staining [12,98] and oriented on detection of MCs by flow cytometry [99,100]. Mast Cells in Mammalian Testes and Epididymides MCs most commonly reside in the connective tissue in the testicular tunica albuginea (TA) and epididymis. In human testes, mast cells are abundant both in the subcapsular connective tissue of the TA and the interstitial tissue between the seminiferous tubules. MCs in humans appear in the testes already in the fetal period; their number increases during infancy, decreases in childhood, and again increases at the onset of puberty [29,30] (Figure 3). During development, MCs appear in the rat testes on postnatal day (PND) 30, in, or under tunica albuginea, and increase in number, especially in old age (18-24 months) [31]. 
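The point made above about parallel counting, namely that the total MC volume per testis depends on both the number of cells and the average single-cell volume, and that the latter differed between embedding methods, can be made explicit with a short calculation. All numbers below are invented for illustration; they are not measurements from the cited studies.

```python
# Illustration: why raw MC counts alone can mislead when average single-cell
# volume differs between fixation/embedding protocols. Numbers are invented.

samples = {
    # protocol: (MC count per testis, mean single-cell volume in um^3)
    "Bouin-fixed, paraffin-embedded":        (12_000, 450.0),
    "glutaraldehyde-fixed, Epon-embedded":   (12_000, 610.0),
}

for protocol, (count, mean_cell_volume) in samples.items():
    total_volume_um3 = count * mean_cell_volume
    print(f"{protocol}: {count} MCs x {mean_cell_volume} um^3 "
          f"= {total_volume_um3 / 1e6:.2f} x 10^6 um^3 total MC volume")
```

With identical counts, the two protocols still yield different total MC volumes per testis, which is why the comparison described above had to be done in parallel on both kinds of specimens.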
Rodents (Rat, Mouse, Hamster, Other)

Literature data on the presence of mast cells in rat (Rattus norvegicus) testes are somewhat inconsistent. Mainly Wistar and Sprague-Dawley strains were used in the studies and are systematized in Supplementary Table S1. Only one study compared the results of MC analyses between rat strains [67] and showed a significant difference in the results. There is even a report showing no evidence of mast cells in the untreated rat testis [54], but without a detailed description of whether there was an occasional MC in the TA or the author implied finding no MCs in the testis proper, primarily composed of the seminiferous tubules and interstitium [101]. Due to the abundance of MCs found around subcapsular blood vessels, it has been emphasized that the number of MCs could have been underestimated if the samples used were not whole-mounted testicular capsules alongside the testes [46]. When mentioned, data regarding MCs in the rat epididymis are consistent, repeatedly confirming MCs in noticeable numbers in the connective tissue around the epididymal tubules in all parts of the epididymis (head, body, and tail) [49,51,54,62,65,68,70,97]. Majeed reported finding MCs in the mouse (Mus musculus) epididymis but not the testis; however, as in the study analyzing rat testes, without specifying whether there might be some MCs in the TA [55]. Several authors report no MCs in the regular mouse testicular interstitium [75,76,78,84], analyzed in the neonatal, prepubertal, and adult animal, respectively [75]. The Syrian (golden) hamster (Mesocricetus auratus) is a seasonal breeder, also having MCs positioned in the connective tissue of the TA [83,102], with only an occasional MC in the intertubular area. Under a long photoperiod (14:10 h light/dark), the MC number gradually increased from PND 23-90 (sexual maturation) and decreased during a short photoperiod (6:18 h light/dark) [83].

Domestic (Sus scrofa domestica) and Wild Boar (Sus scrofa ferus)

MCs in the testis of the domestic boar have a similar spatial distribution to the human testis. They inhabit both the TA and the interstitial tissue in a lower number [58,79,85,87].
Concerning the postnatal developmental phases of microminipigs, MCs appeared in the TA at birth and gradually inhabited the interstitium (interlobular area, rete testis, peritubular areas) from 1.5 months of age onward, even before the animals reach sexual maturity at 4.5 months [86]. No significant differences were found in MC location and appearance (elongated, with small cytoplasmic granules) between domestic and wild boar [58].

Non-Human Primates

MCs of rhesus monkey testes can be identified from the infantile period (earliest reported 100 PND) and increase in number until adulthood (6-8 years), with a significant increase at the peripubertal stage (3-4 years) [103]. Data obtained by analyzing the common marmoset monkey (Callithrix jacchus) testis show no MC markers (tryptase, chymase) detected with real-time PCR, but detection of MCs by immunohistochemistry is mentioned [104]. References with data on MC localization in the mammalian testes or epididymides are further systematized in Supplementary Table S1, comparing MC location, fixation, detection method, and animal strain, if applicable, together with MC localizations in testes and epididymides of animals mentioned in one or a small number of studies, like the testis or epididymis of bull, deer, ram, cat, dog, hare, and some other animals.

Experimental Models Investigating Mast Cells in Mammalian Testes

The reasons behind MC presence in the normal testis are still not completely elucidated, but disturbance of MC homeostasis is found in certain pathological conditions. In human testes, the increased abundance of interstitial MCs is thought to lead to the disruption of spermatogenesis [105] and testicular histology [106] and consequently to male infertility [107]. The activation of inflammatory mediators and immune cells was found to precede the depletion of germ cells in many forms of infertility (e.g., cryptorchidism, Klinefelter's syndrome). In experiments on mast cell activation due to pathological changes, rats were the most commonly used animal models, followed by mice and, sporadically, hamsters and boars. Several of these studies include mechanistic data on MC degranulation or distribution. Table 1 contains the reviewed animal studies that include MCs as a primary investigation goal or a secondary finding. Physiological changes, like seasonal testicular involution or the effects of long and short photoperiods, are briefly discussed but were not included in Table 1.

Gonadal Effects of Medications

Alkylating agents, usually administered as oncological treatment [108], cause various testicular alterations, such as germ cell loss and seminiferous tubule histology deterioration, and affect MCs [57,84,109]. The effect they have on germ cells differs based on age.
If administered in an adult animal, the germ cell loss and seminiferous tubule histology deterioration are transient, but if a young, prepubertal animal receives treatment, the recovery is not possible. One example is cyclophosphamide, causing an increase in MC number and other testicular alterations that the zinc oxide nanoparticles concomitant treatment prevented [57]. The mouse model demonstrated that the changes caused by cyclophosphamide were mainly due to oxidative stress. An antioxidant, ethyl pyruvate, showed a significant reduction of the MC number elevation [84], almost to the control levels, after treatment with cyclophosphamide. Administration of a second alkylating agent, ethylene dimethane sulphonate (EDS), which disrupts Leydig cells, has led to numerous MCs in the peritubular area in adult rat testes after Leydig cell destruction. MCs disappeared once a new Leydig cell population was differentiated [65], thus implying a novel role of MC in the induction of differentiation after histological injury. It seems that EDS did not directly affect MCs. Another proof comes from the findings that after EDS treatment of adult rats, no differences in MC number in the TA or testicular fluid were found [46]. A detailed study by Gaytan et al. elucidated the origin of MCs populating the testicular interstitium after treatment with EDS, gonadotropin-releasing hormone (GnRH) antagonist, and hypophysiotomy. Mitotic MCs exist in EDS-and GnRH antagonist-treated group testes before differentiated MCs (detected by toluidine blue and granules quality). They relate the accumulation of MCs to the local proliferation and differentiation of MC precursors [61]-blood-borne and derived from hematopoietic stem cells [110]. Moreover, inflammatory reactions may not necessarily cause the accumulation of MCs, as an inflammatory reaction would have caused the migration of other cell types (e.g., leukocytes). In contrast, GnRH antagonists and estrogen did not cause apoptosis, necrosis, or other inflammatory reactions [61]. The study has also highlighted certain relations between MCs and interstitial Leydig cells, suggesting the possibility of their common regulatory pathways. Support for this theory comes from further studies with EDS and testosterone treatment, where MC appearance in the EDS-treated testicular interstitium could also be facilitated with prolonged post-EDS testosterone administration for up to 2 months, while oxytocin treatment did not affect MC number [66]. Leydig cell destruction by EDS treatment led to a significant increase in interstitial MC number. However, the rate of MC proliferation was lower in the group additionally treated with testosterone implants (used for Leydig cell recovery), showing there are two separate phases of MC proliferation in the testes, regulated differentially. The authors suggest that mast cells are (in)directly regulated by Leydig cells [53]. Treating young newborn rats with EDS revealed a significant increase in MC number in the testes, together with their invasion in the interstitium [69]. Cl 2 MDP (dichloromethylene diphosphonate) has been used as an immuno-modulating anti-osteoclastic drug to treat hypercalcemia associated with cancer but exhibits severe macrophage cytotoxicity. 
As a result of macrophage depletion and consecutive inhibition of Leydig cell number increase during postnatal development after dichloromethylene diphosphonate-containing liposomes (Cl 2 MDP-lp) injection, proliferating MC number was increased in the testicular interstitium after treatment [71], again showing that Leydig cells and MCs share regulatory factors. Some antiviral compounds also affected MC numbers, such as acyclovir, a common drug used for Herplex simplex virus types 1 and 2 treatment. While known to be gonadotoxic [111,112], it additionally causes an increase of peritubular and interstitial MCs in testes of adult male rats in a dose-dependent manner [50]. Recent studies relate MC activation to the environmental xenobiotics: increased MC markers Cd13, Cd33, and Cd38 in the testicular tissue in male offspring of female rats simultaneously exposed to the phytoestrogen genistein and the antiandrogenic plasticizer di-(2-ethyhexyl) phthalate during gestation [56]. These studies opened many questions and investigation possibilities on drugs affecting MC activation. Mast Cell Antagonists A limited number of studies on animal models analyzed the role of MC antagonists. In humans, common MC blockers that modulate allergic conditions include antihistamine drugs or mast cell stabilizers. Some of them are ketotifen, tranilast, fexofenadine, and ebastine; however, only the first has been analyzed in a rat model of undescended testes. The possible reason for the rarity of studies could be related to the long and common usage of MC stabilizers in human medicine, although not for infertility treatment or prevention as the main indication. Also, there is a difference in ketotifen metabolism between rats and men [113]. Ketotifen has been experimentally used in men as a treatment for oligo-and astenozoospermia and improved sperm quality and quantity [114][115][116]. Acikgoz et al. showed that in the experimental unilateral undescended testis model, a significant increase in interstitial MC number in both the descended and undescended testes was found, except with a milder change in subcapsular scrotal MC number. Ketotifen administration reduced those numbers significantly in rats of different developmental stages (prepubertal, pubertal, and adult rats) and showed a promising effect on fertility preservation [73]. Moreover, ketotifen administration reduces MC number and damage in the testicular tissue, both after autoimmune orchitis and testicular torsion (contralateral testis) [117]. Ketotifen administration after testicular damage caused by wide needle puncture also revealed its contribution to reparation and regeneration of the testis by reducing non-functioning tubule number and increasing the number of normal spermatogonia. Here, MC inactivation by ketotifen did not prevent the destructive processes of damaged testicular tissue but still significantly and positively affected the testis' regenerative capacity [74]. Other MC antagonists were used in human studies analyzing testicular changes only and were comprehensively reviewed in Haidl et al. [16]. Experimental Autoimmune Orchitis (EAO) Experimental autoimmune orchitis (EAO) study represents a combination of physical and immunological influence on the testes, causing both degranulation and an increase in the number of MCs, with their localization in the interstitium, and severe germ cell depletion, even aspermatogenesis, and interstitial damage. 
EAO showed a significant subcapsular and interstitial increase in MC number in rats [47,67] and mice [77]. In addition, EAO led to significant MC degranulation, and the MCs were found in the proximity to protease-activated receptor-2 (PAR 2 )-positive cells, suggesting that PAR 2 , expressed by peritubular cells, is activated by tryptase from the MCs [34,118]. In humans, spermatogonia were PAR 2 -positive cells located basally in the seminiferous epithelium [119], although, in rats, only spermatid acrosomes were PAR 2 -positive within the seminiferous tubules [47]. This could further explain the interspecies differences related to the investigation of MCs in the testis. Notably, a later study by Lustig et al. showed that these results of EAO were possibly strain-dependent: the increase in MC number (mainly in the tunica albuginea) was two-fold in Sprague-Dawley rats, and five-fold in Wistar rats 80 days after EAO, compared to the control group [67]. Stress Evidence is emerging to support the role of stress in MC changes in number and maturity rather than migration. Spermatic cord torsion (with or without subsequent detorsion) injury experiments in rats showed an increase in MC number [120] although no MC migration to the interstitium in the experimental group [64], but significant MC degranulation, except in the mouse model [75]. MC antagonists or vasoactive intestinal peptide (VIP) could prevent MC degranulation caused by testicular torsion [48]. Experiments with testicular torsion on mice caused MC invasion to the interstitium of the contralateral testis postoperatively [75]. Germinal epithelium sloughing, seminiferous tubule atrophy, and interstitial edema were common in histological analyses in these experiments. Stress caused by immobilization and low temperatures caused maturation and degranulation of MC in the testicular interstitium. In comparison, β-endorphin caused a less pronounced effect on the same specimens, while VIP significantly decreased the number of mature mast cells and inhibited degranulation [72]. Hormones Experimental studies analyzing MCs concerning hormones are mostly done on rats. A series of studies by Gaytan et al. reported experimental treatments of rats on PND 1 with estrogen and findings of an increased number of MCs in the testicular interstitium and even in the lamina propria of the seminiferous tubules [44,46,70]. Notably, the same study demonstrated a maturation process of MCs in the interstitium, starting from the appearance to the fully maturated form. Recruitment of precursors and proliferation of immature MCs (recognized by mitotic figures) happens simultaneously in prepubertal MC number increase [45]. These studies showed that MCs invade the testis proper in their mature form while immature MCs proliferate and mature in the testicular interstitium after an induction signal. Estrogen treatment on PND 1 also caused an increase in the number of MCs in wholemounted tunica assessment. On the contrary, treatment with testosterone on PND 1 did not affect the MC number, which implies the specific role of estrogen in regulating MC proliferation in the male reproductive system [70]. Genetically Altered Animals Although many experimental models show a change in MC number or localization together with germ cell depletion, only genetically altered animals may show a more specific effect that MC function has on testicular germ cells. Several methods of genetic alterations targeting MCs are available to date. 
The standard models used for several decades are mice with mutations located in the white spotting (W) locus (i.e., c-kit), which exhibit reduced c-kit tyrosine kinase-dependent signaling and profound mast cell deficiency. c-kit mutations such as Kit W /Kit W−v (point mutation in the kinase domain of the receptor) [121] exhibit severe abnormalities (e.g., severe anemia and sterility) [21], while others do not (e.g., KitW−sh/Kit/W−sh bearing spontaneous W-sash (W sh ) inversion mutation affecting c-Kit transcriptional regulatory elements) [122]. These differences could affect the conclusions on MC's role in testicular homeostasis. For example, sterility is most probably caused directly, as KIT is also expressed in germ cells [17,123]. More recently, several strains of mice with c-kit-independent constitutive MC deficiency have been described for either the entire MC compartment or specific subtypes (MMC and CTMCs) [124]. In the context of fertility, c-kit-independent MC-deficient models have shown impairment in embryonic development. Here we discuss other genetically altered animals, where MCs have a distinct localization, number increase, or specific effect on the testis. Transgenic male mice expressing human P450 aromatase cDNA under the control of the ubiquitin C promoter (AROM+) presented infertility as a major phenotype. They also had an increase in the number of activated mast cells in the interstitial spaces of the older mice, compared to wild-type, where no MCs were found [76], together with interstitial fibrosis. Another model for examining postnatal Leydig cell differentiation is anti-Müllerian hormone (AMH) over-expressing mice (Mt-hAMH mice). AMH has an inhibitory effect on the regulation of postnatal Leydig cell differentiation, and MCs are activated by the consequential hormone level and spermatogenesis disruption. Their testes are deficient in Leydig cells and have many MCs in the interstitial compartment compared to controls (C57BL/6 mice) [125]. The only example of MC appearance within the seminiferous tubule, among primary spermatocytes near the basement membrane, is observed in retinoid-related orphan nuclear receptor alpha (RORα)-deficient mice, demonstrating disruption of Sertoli-germ cell junctions and showing the necessity for RORα protein in the regulation of testicular structure [78]. Cryptorchidism Like the human testis, the unilateral cryptorchid testes of boar contain scarce MC in the interstitium, but their number significantly increases in the bilateral cryptorchid testes [29,87,126]. The already mentioned unilateral rat cryptorchidism model showed a mild increase in scrotal testis MC number and a high increase in the abdominal testes [127], which ketotifen, an MC antagonist, reduced [73]. No data on the effect of any MC antagonist on the pig testes were found in the available literature, although pigs have a more similar testicular MC distribution compared to humans than rats for their normal MC appearance in the interstitium. Experimental Models Investigating Mast Cells in Mammalian Epididymides Ethanol. Experimental observations of MCs in the epididymis mainly include ethanol intake. Ethanol-preferring rats showed an increase in the total number of degranulated MCs in the epididymis, but no such effect was observed in the testis [49]. In prepubertal rats, an increase in MC number and degranulation in the caudal and initial segment of the epididymis after ethanol consumption was observed [51]. Antioxidants seem to have a protective effect in such experiments. 
Alternating intake of an antioxidant Bauchinia forficata alcoholic extract compared to only ethanol intake for 15 days relieved the MC degranulation level in the epididymal head [68]. Hormones. Contrary to the testis, neonatal estrogenisation did not cause significant MC number change in the rat epididymis; instead, it was related to the increased volume. Similarly, testosterone administration on PND 1 did not affect epididymal MCs in the prepubertal testis [70]. Inflammation. Inoculating C. trahomatis to the vas deferens caused pyogranulomatous inflammation, abscesses, and spermatic granulomas in the rat epididymis. MCs are typical in moderate to severe interstitial inflammation, next to lymphocytes, plasma cells, and neutrophils [128] Discussion Although significant progress is achieved in studies about MCs' role in male infertility, unknown elements in the cascade of mediators in the complex pathophysiology of male infertility, which MCs significantly influence, call for further detailed studies in real-time conditions. There is limited access to human testicular tissues prior to histologically recognizable infertility. Hence, there is a great interest in finding an animal or in vitro model, which could be used in experiments analyzing the impact of various stimuli (chemical, biological, physical) on MC activation. Available data regarding MCs in the testis abounds exquisite reviews on humans [16,33,130] and a comprehensive review on MCs in the nonmammalian vertebrates [42], including the presence of MCs in the testes of birds, frogs, and lizards. A comprehensive review on the mammalian testicular and epididymal MC has not been found written from the perspective of method and result comparison in the available literature and may have a significant impact in drawing the attention of future authors to crucial problems in MC analysis. The evolutionary conservation provides the possibility to use the same detection method for MC analysis in the reproductive system of several mammalian species. The functions and granules contents are almost identical, while different histological features in the testicular architecture between mammals (e.g., seminiferous wall thickness) and other biological differences, such as the subtle blood-testis barrier variations between the species [131][132][133][134], could be some of the causes of variable testicular MC effect on testicular structures between humans and other vertebrates. Molecular mechanisms of MC activation in infertility may be analyzed by several methods used generally in MC investigations. On the other hand, male infertility studies require histological assessment due to the characteristics of MC distribution and migration within the testicular tissue. As suggested by Mayerhofer et al. in 2018. and Haidl et al. in 2011, the anatomical proximity of the MCs to the testicular structures, especially seminiferous tubules, are significant in the pathology of human infertility. Logically, the closer the MCs are to germ cells, the more direct an effect they can have on fertility via secreted mediators. Due to these characteristics, it is necessary to consider the tubular wall thickness and conditions of the blood-testis barrier when discussing and analyzing the effect of MC on germ cells and fertility. Among the animals used in MC research, domestic boars seem to have MC distribution more similar to the human testis than rodents [58,79,86,87]. If used in experiments, the breed [135] and exposure to light should be taken into consideration [136]. 
Wild boars are indeed not a proposed model in MC investigation, for both impractical sample collection and seasonality in testis function [137], whereas the effect of seasons on fertility became evolutionarily ameliorated in domestic pigs. They do not show such distinct changes in sperm production and quality, especially if the light exposure resembles the conditions of increasing photoperiods [136]. Hamsters, as typical seasonal breeders, may not be the most suitable model animals for MC investigation-the apoptotic and proliferative activity and the testicular involution are not features easily translated on human testicular investigations [83,102]. With respect to the difference in animal facility conditions and practicality, pigs are optimal as an animal model in testing chemicals that could aid human infertility. Although human MCs have been divided into predominantly tryptase (MCT), chymase (MCC) or both (MCTC) and rodent MCs into connective (CTMC) or mucosal tissue MCs (MMC), with a high level of similarities between human and rodent MCs, it has been noted that perhaps an organ-specific classification should take place [138]. The previously mentioned review does not include specific MC markers in testicular tissues, but there may be subtle differences in testicular and epididymal MC expressed markers compared to MCs residing in other organs, and not just the known differences between mucosal or connective tissue MCs. Pre-detection methods are crucial in (immuno)histochemical MC analysis, for tissue sampling, fixation, and staining may significantly alter the results. Whenever possible, whole testes should be fixated (including the tunica albuginea) while several fixations and staining methods should be tested, at least at the beginning of the study, to evaluate the most accurate method in the data collection. Toluidine blue should be included in the analyses, being the most used detection method, for better result comparison, and MC tryptase shows a limited expression in rodents [73]. Detection methods used in MC analyses also need systematization and guidelines regarding testicular studies. In general, studies on MCs in the testes still do not include a high diversity in detection methods and possible markers (Supplementary Table S1). For example, flow cytometry analysis of testicular MCs has not been performed to the best of our knowledge. In most cases, a histological assessment was used with a limited number of immunohistochemical markers. Nonetheless, any variation of histological analysis gives valuable data on MC interactions with other testicular cells (such as a change in location, degranulation, or shape), and other methods (real-time PCR, quantitative PCR, fluorescenceactivated cell sorting (FACS)) cannot obtain that. A phenomenon called "phantom mast cells" occurs after extensive degranulation of MCs, which remain present but undetectable by toluidine or any other staining that detects granules [139]. In order to avoid falselynegative results for this reason, especially in experiments analyzing MC degranulation, other detection methods should be used, like antibodies against MC tryptase, that will detect residual protease or KIT (CD 117) that does not bind to granules of MCs [140]. Regarding the in vitro model, other testicular cell types, especially Leydig cells, need to be involved in the in vitro investigations, as shown in many studies, where MCs and Leydig cells directly affect one another and share common regulatory factors [45,66,71]. 
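Following the warning above about "phantom mast cells", a simple cross-check is to compare, section by section, counts obtained with a granule stain (e.g., toluidine blue) against counts obtained with a granule-independent marker (anti-tryptase or KIT). The sketch below flags sections where the granule-based count falls well below the marker-based count, which may indicate extensively degranulated cells; the counts and the 20% threshold are arbitrary illustrations, not data from the cited studies.

```python
# Flag possible "phantom" (extensively degranulated) mast cells by comparing
# granule-stain counts with granule-independent immunostain counts per section.
# All counts and the 20% discrepancy threshold are illustrative assumptions.

sections = [
    # (section id, toluidine blue count, anti-tryptase or KIT count)
    ("control-1", 41, 43),
    ("control-2", 38, 40),
    ("torsion-1", 12, 35),
    ("torsion-2", 15, 39),
]

THRESHOLD = 0.20  # flag if granule-based count is >20% below the marker count

for section_id, tb_count, ihc_count in sections:
    if ihc_count == 0:
        continue
    deficit = (ihc_count - tb_count) / ihc_count
    flag = "possible phantom MCs" if deficit > THRESHOLD else "consistent"
    print(f"{section_id}: toluidine={tb_count}, marker={ihc_count}, "
          f"deficit={deficit:.0%} -> {flag}")
```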
Despite the enormous achievements in in vitro testicular models [141,142], MCs are still not included in the 2D or 3D testicular in vitro models. Nonetheless, significant progress has been made, for example, a study from 2020 showed a 3D co-culture model including Sertoli, Leydig, endothelial, myoid cells, and macrophages, detected by their respective specific markers [143]. There are additional obstacles to overcome prior to including MCs in one of the 3D model variations [141]. However, in the abundance of investigation on animal models and a relative scarcity of human material, the 3D models are the promising future in clarifying the MC role in male infertility. Studies including MCs should follow some general guidelines because of the specific localization, interspecies differences, and activity in the testicular and epididymal pathology. • Histological assessment, including toluidine blue stain, should always be included in studies analyzing testicular and epididymal mast cells, as a standard method that stains all mast cell subtypes, regardless of protein content. • Depending on the effect and antibody used, a few fixation methods should be optimized at the beginning of the study due to the mast cell subtype specificity. • When investigating animal models, whole testes should be used, including the tunica albuginea, for in most mammals, mast cells reside right underneath. When found in the interstitium, the seminiferous tubule wall thickness should be commented upon. • With respect to practicality, domestic boars kept under non-variable conditions are proposed to resemble human testicular mast cell distribution better than rodents. • No exclusively testis-or epididymis-specific mast cell markers have been found yet, although the characterization of other organ-specific mast cell markers is known. • 3D in vitro models are promising, although they still need significant development in order to incorporate mast cells and the tunica albuginea, if possible. Further efforts need to be made to develop a suitable human-origin testicular cell line combination. Samples Both adult Wistar strain rat (3-month-old) and human (25-year-old) samples were obtained from archive collections. Normal, disease-free samples were chosen for both specimens. Serial sections (4 µm) were cut for immunohistochemistry on a Leica microtome.
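As a rough computational counterpart to the guideline that histological assessment with toluidine blue should always be included, the sketch below counts metachromatically stained (purple) objects in a scanned section by simple color thresholding and connected-component labelling. The file name, color thresholds, and size cut-off are placeholders; any real analysis would need calibration against manual counts by a trained observer.

```python
# Rough sketch: count purple (metachromatic) objects in a scanned toluidine
# blue section. Thresholds, size cut-off, and file name are placeholders.
import numpy as np
from PIL import Image
from scipy import ndimage

img = np.asarray(Image.open("testis_toluidine_section.png").convert("RGB"), dtype=float)
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Crude "purple" rule: red and blue channels clearly above green (assumed thresholds).
mask = (r > g + 30) & (b > g + 30)

# Connected components; drop specks below an assumed minimum area (in pixels).
labels, n = ndimage.label(mask)
areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
MIN_AREA_PX = 50
candidate_mast_cells = int(np.sum(areas >= MIN_AREA_PX))

print(f"candidate metachromatic objects: {candidate_mast_cells}")
```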
9,679.6
2022-02-25T00:00:00.000
[ "Medicine", "Biology" ]
Replacement of Anterior Composite Resin Restorations Using Conservative Ceramics for Occlusal and Periodontal Rehabilitation: An 18-Month Clinical Follow-Up This case report describes a patient with discolored and fractured composite resin restorations on the anterior teeth in whom substitution was indicated. After wax-up and mock-up, the composite was removed and replaced with minimally invasive ceramic laminates. An established and predictable protocol was performed using resin cement. Minimally invasive ceramic restorations are increasingly being used to replace composite restorations. This treatment improves the occlusal and periodontal aspects during the planning and restorative phases, such as anterior guides, and laterality can be restored easily with ceramic laminates. In addition, the surface smoothness and contour of ceramic restorations do not affect the health of the surrounding periodontal tissues. Here we present the outcome after 18 months of clinical follow-up in a patient in whom composite resin restorations in the anterior teeth were replaced with minimally invasive ceramic laminates. Introduction Every patient wants a functional, healthy, and esthetically appealing smile. Nevertheless, restorative cosmetic dentistry should be used conservatively. With improvements in dental ceramics and use of adhesive systems, conservative ceramic laminates are now considered by both dentists and patients to be one of the most viable treatment options in cosmetic dentistry [1]. Minimally invasive ceramic laminates can be indicated when the patient presents with tooth wear or an extensive diastema affecting an anterior composite or the natural teeth. In such cases, little or no preparation of the tooth is necessary. Further, the longevity of adhesion to the enamel has been well established [2]. Failed, discolored, or fractured anterior restorations and damaged teeth have a negative impact on the smile [1]. Therefore, rehabilitation should include reestablishment of occlusion, such as anterior and lateral guidance. Before replacement of a restoration, occlusal guidance can be tested with a mock-up for diagnosis and a temporary restoration [3]. This planning strategy can be used to complement the esthetic intervention and improve occlusal aspects [4]. Gingival health should also be evaluated when esthetic procedures are considered. This is an esthetic consideration that can significantly influence the final result of restoration. Indirect restorations of ceramics, such as veneers, can result in better periodontal health when compared with composite resin and also present an adequate emergence profile [5]. Various materials are available for minimally invasive ceramic restoration. Reinforced ceramic lithium disilicate is commonly used because of its optical and physical properties [6]. The improved mechanical properties of these materials can help dentists and laboratory technicians achieve clinical success. This case report describes the procedure used to replace a composite resin restoration with minimally invasive laminates in a patient seeking improved occlusal and periodontal 2 Case Reports in Dentistry smile esthetics, as well as the outcome after 18 months of clinical follow-up. Case Report A young woman presented at our dental specialties clinic complaining of staining and fractures of the composite resin restorations in her upper anterior teeth. A history and physical examination revealed dental fluorosis and small gaps. 
Anterior disclusion and laterality were absent (Figures 1 and 2). Gingival inflammation was observed near the borders of the subgingival resin. This was clinically verified based on her symptoms of gingival bleeding and accumulation of plaque. Replacement of the resin restorations by minimally invasive ceramic laminates was proposed to the patient. Initially, orientation of tooth brushing and dental prophylaxis were performed using an ultrasonic instrument. This was followed by scaling and root planning. After 7 days, a wax model was developed for diagnosis ( Figure 3). A gypsum model was obtained from a silicon impression (Express6 XT; 3M ESPE, St Paul, MN, USA), and a wax-up of this model was performed. A silicon impression was then developed from the wax-up. In a subsequent session, the color was selected before removal of the composites to prevent dehydration of the substrate. A Protemp 4 (3M ESPE) was inserted into the silicon from the wax-up, and a mock-up was made to assess the size and form of the wax model ( Figure 4). This mock-up was also used to evaluate the esthetic length of the tooth and its relationship to the shape of the lower lip and the size of the spaces between the teeth. At that time, the anterior disclusion and laterality guides were evaluated, as well as the emergence profile of the cervical margins of the teeth. The restorations were removed using Sof-Lex6 discs (3M ESPE). After removal of the resin from each tooth, the vestibular space needed for the ceramic was estimated using a wax wear guide ( Figure 5). A small cervical demarcation was made in the tooth to establish the completion of indirect restoration. To complete the preparation, the teeth were finished and polished with diamond points using a speed multiplier (W & H Dentalwerk, Bürmoos, Austria; Figure 6). A Pro-Retract 0000 wire retractor (FGM, Joinville, Brazil) was inserted and a silicone addition impression (Express XT) was performed. Provisional restorations were made using bisacryl resin (Protemp 4). Provisional restorations made from bis-acryl resin do not need to be cemented. However, the excess gingiva must be removed. Dies were carefully made to allow for correct construction of the emergence profile. An IPS e.max press (Ivoclar Vivadent, Tamboré, Brazil) was used to make the ceramic laminates ( Figure 7). The cervical and proximal finish was performed using stones and ceramic rubber (Zzag, Curitiba, Brazil). After a week, the ceramics were tried in the mouth using a try-in clear paste (Nexus6 3, Kerr Dental Corporation, Orange, CA, USA). At this time, we could already see an improvement in the appearance of the gingiva after removal of the resins that were causing inflammation (Figure 8). After obtaining approval from the patient, the restorations were etched with hydrofluoric acid for 20 seconds (Condac Porcelain 10%, FGM). Each inner surface was washed for 20 seconds and dried using a triple syringe. Silane (Kerr Dental) was applied as the bonding agent, and 60 seconds were allowed for drying. Adhesive (OptiBond S, Kerr Dental) was applied and polymerization was activated. We then performed modified tooth isolation, with prophylaxis and etching using phosphoric acid for 30 seconds; the surface was then washed and dried. OptiBond S adhesive was also applied to the teeth and polymerization was activated. Clear cement (Nexus 3) was applied to the inner surfaces of the restorations and placed into position. The excess cement was removed using a microbrush and flossing. 
Each restoration was cured for 120 seconds using a Radii-cal curing light (SDI, Sao Paulo, Brazil). All materials were applied according to the manufacturer's instructions. An occlusal adjustment with ceramic rubber was prepared and checked against the anterior disclusion and laterality guides (Figure 9). The final smile in this patient is shown in Figure 10 and the outcome at 18 months can be seen in Figure 11. After 18 months, there was no clinical evidence of gingival bleeding or accumulation of plaque. Discussion A number of restorative approaches could have been used in this patient, including direct composite resins and minimally invasive ceramic laminates. The success of resin depends on the skill of the operator and the esthetic wishes of the patient. However, long-term clinical outline, color stability, durability, and occlusion are critical for the anterior teeth. Currently, porcelain veneers afford predictable and successful restoration, with an estimated approximate survival of 10 years [7]. In the present case, there was a good amount of sound structure and no significant color change, so ceramic laminates were a suitable option. After being informed about the advantages and disadvantages of each type of restoration, the patient opted for conservative ceramic veneers of minimum thickness. A pressed glass-ceramic laminate was used in this patient. For esthetic veneers, ceramics reinforced by leucite, such as VITAPM5 9 (VITA Zahnfabrik, Bad Säckingen, Germany), and those reinforced by lithium disilicate, such as IPS e.max, are good examples and are commonly indicated because of their optical and mechanical [6] properties. Leucite and lithium disilicate particles are added to the base glass composition of these ceramics to improve their resistance to fracture without impairing their optical properties. It is also important to note that the mechanical properties of these materials depend on the shape and volume of the crystals, among other factors [8,9]. These properties provide sufficient strength to withstand anterior and lateral disclusion when compared with direct composite resins such as those that had initially been used in this patient. Due to the relatively low refractive index of leucite and lithium disilicate, even with a relatively high crystalline content, these materials may be considered translucent and esthetic [1]. Consequently, optical effects, such as opalescence, color, and opacity, are excellent for restoring translucency to the incisal edge, as seen in our patient. Finally, these materials are biocompatible restorative materials that improve periodontal health in the long term due to their surface smoothness. Esthetic treatments should not be performed without appropriate restoration planning. Gurel et al. [4] demonstrated the use of mock-up and temporary techniques for preoperative evaluation and as a diagnostic aid for the final result, as in the present case, where the mock-up was used to facilitate communication with the patient in the diagnostic phase. It was also used to demonstrate the esthetic design of the new emergence profile to improve the incisal reestablishment with the lips and the anterior and lateral disclusion guides. Consequently, the guide used for mockup can also be used for the manufacture of a temporary restoration. Biomechanical and occlusal principles that can help optimize the conservative treatment of worn teeth should be selected. According to Abduo et al. 
[10], during full excursion, canine-guided occlusion tends to be more frequently observed; with aging, the prevalence of canine-guided occlusion tends to be reduced, and that of group function occlusion increases. In this case, the patient was very young and the composite resin in her anterior teeth had almost certainly worn down. Therefore, esthetic restoration was performed with ceramics. Moreover, ceramic materials perform better in terms of discoloration, integrity of the margins, minor fractures, and cracking when compared with composite resin. Gingival health should always be evaluated. Periodontal health before restorative treatment is important [11], and pretreatment with prophylaxis and scaling will improve the impressions taken of the teeth. Well-adapted temporaries without excess material will facilitate the cementing process, thus improving periodontal health in the long term [5]. This can be considered to be an ideal periodontal condition to prepare the teeth and obtain impressions in the absence of gingival bleeding and accumulation Case Reports in Dentistry 7 of plaque. This will also improve the prognosis of the treatment. The preparation was restricted to the enamel and was therefore conservative, favoring adhesive cementation [12,13]. Retention of the restoration is also helped by use of hydrofluoric acid to condition the inside surfaces and by use of silane coupling agents. A photopolymerizable resin cement should be used for thin restorations, because these are translucent [1,10]. This also shortens the treatment time. In addition, for esthetic reasons, this system includes a try-in paste, so the final result is more predictable [1]. The ultimate success of functional and esthetic treatment depends on the patient being well informed and motivated to maintain oral health. Cooperation on the part of the patient and periodic intervention by the dentist are essential for longterm success of the restoration [1,6,7]. After 18 months of clinical follow-up, the restorations in our patient proved to be adequate from both the functional and esthetic points of view, with maintenance of occlusion and periodontal health. Conclusion Replacement of resin composites by minimally invasive ceramic laminates can rehabilitate the teeth in a safe and esthetically pleasing manner. When carried out appropriately, occlusion and periodontal health can also be reestablished.
2,716
2016-07-31T00:00:00.000
[ "Medicine", "Materials Science" ]
Bayesian eikonal tomography using Gaussian processes Eikonal tomography has become a popular methodology for deriving phase velocity maps from surface wave phase delay measurements. Its high efficiency makes it popular for handling datasets deriving from large-N arrays, in particular in the ambient-noise tomography setting. However, the results of eikonal tomography are crucially dependent on the way in which phase delay measurements are interpolated, a point which has not been thoroughly investigated. In this work, I provide a rigorous formulation for eikonal tomography using Gaussian processes (GPs) to interpolate phase delay measurements, including uncertainties. GPs allow the posterior phase delay gradient to be analytically derived. From the phase delay gradient, an excellent approximate solution for phase velocities can be obtained using the saddlepoint method. The result is a fully Bayesian result for phase velocities of surface waves, incorporating the nonlinear wavefront bending inherent in eikonal tomography, with no sampling required. The results of this analysis imply that the uncertainties reported for eikonal tomography are often underestimated. Non-technical summary Eikonal tomography is an imaging method that uses slight variations between seismic waves trapped at the surface of the Earth to infer information about the properties beneath the surface. To perform the best possible eikonal tomography, we need to be able to predict between measurements of these variations at different seismic recording stations as well as we can. Furthermore, end-users of seismic tomography require information about the uncertainty of the images. In this paper, I perform this prediction using Gaussian processes (GPs), a method with particularly nice mathematical properties. The GP prediction results in robust uncertainty measurements for our imaging problem without many of the computational difficulties associated with other uncertainty quantification methods. Introduction Surface wave tomography is a cornerstone imaging technique for the investigation of the crust and upper mantle. However, due to the significant non-planarity of scattered surface waves, interpretation of surface wave data is not straightforward (e.g., Wielandt). Despite this issue, the increasing proliferation of dense seismic arrays, combined with the advent of ambient-noise correlation methods, has motivated intense study into surface wave tomographic techniques.
To ameliorate the great cost of nonlinear ray tracing for large inverse problems, a large part of this study has focused on methods that derive surface wave properties from only local information contained in the wavefield. Beginning with a wavefield perturbation approach (e.g., Friederich et al.; Friederich and Wielandt; Pollitz), theoretical efforts in local surface wave inversion have since concentrated on direct measurement of wavefield derivatives (e.g., Lin et al.; Lin and Ritzwoller; de Ridder and Biondi; de Ridder and Maddison). Likely owing to its simplicity, the most popular extant method is eikonal tomography (Lin et al.), which relies on the determination of the wavefield phase gradient across an entire local or regional array. For a single surface wave mode propagating with phase velocity C_p, frequency ω, phase delay τ and amplitude A, the Helmholtz equation implies that (Tromp and Dahlen)

$$\frac{1}{C_p^2(x)} = |\nabla \tau(x)|^2 - \frac{\nabla^2 A(x)}{A(x)\,\omega^2}. \qquad (1)$$

Simplifying this relationship under the assumption that the frequency of the wave is large compared to perturbations in the wave amplitude gives us the eikonal equation:

$$\frac{1}{C_p^2(x)} = |\nabla \tau(x)|^2. \qquad (2)$$

Eikonal tomography uses Equation (2) to directly infer local phase velocity from the local phase gradient. A distinction compared to local gradiometry is that calculation of the phase gradient is performed simultaneously for all desired locations by fitting a delay curve across an array, rather than by local analysis of sub-arrays (e.g., Langston). The assumption that the wavefront is smooth relative to frequency is strong, but the difficulty associated with measuring wavefront curvature accurately has ensured that eikonal tomography remains a central technique in array analysis. Application of eikonal tomography in practice has typically resulted in images comparable to other tomographic methods and to Helmholtz tomography (which uses Equation (1) directly), especially when results are averaged azimuthally (Bodin and Maupin; Lin et al.; Lehujeur and Chevrot). In this work, I employ Gaussian process theory (Rasmussen and Williams) to derive semi-analytic closed-form approximations for the posterior distribution of eikonal-equation-based phase velocity measurements using the saddlepoint method (Butler). In this case, semi-analytic means that the posterior approximations have a single parameter that must be solved using constrained minimization techniques; no Monte Carlo methods need be used. As a result, the approximate posterior can be calculated very quickly. As an intermediate result, I derive fully analytic posteriors for the gradient of phase delay. The delay gradient posteriors can be sampled using standard multivariate normal random number generators, which provides an efficient way to compute arbitrary statistics of the GP posterior when the semi-analytic approximations are difficult to obtain.
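To make the eikonal relation in Equation (2) concrete, here is a small, self-contained sketch (Python with entirely made-up numbers; it is not the author's Pluto notebook) that recovers a local phase velocity map from a gridded phase-delay field by taking the reciprocal of the magnitude of its gradient.

```python
import numpy as np

# Synthetic medium: slowness varies smoothly around a 0.4 s/km background.
x = np.linspace(0.0, 100.0, 201)                        # km
y = np.linspace(0.0, 100.0, 201)                        # km
X, Y = np.meshgrid(x, y, indexing="ij")
slowness = 0.4 + 0.02 * np.sin(2 * np.pi * X / 50.0)    # s/km

# Crude stand-in for a measured delay surface: a plane wave entering at x = 0
# and accumulating delay as it crosses the medium in the +x direction.
dx = x[1] - x[0]
tau = np.cumsum(slowness, axis=0) * dx                  # s

# Eikonal tomography (Equation 2): C_p = 1 / |grad tau|.
dtau_dx, dtau_dy = np.gradient(tau, dx, dx)
C_p = 1.0 / np.sqrt(dtau_dx**2 + dtau_dy**2)

print("true velocity range (km/s):     ", (1 / slowness).min(), (1 / slowness).max())
print("recovered velocity range (km/s):", C_p.min(), C_p.max())
```

In practice the delay gradient is of course not obtained by finite differences of a dense grid, which is precisely the smoothing problem the paper addresses with GPs.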
Eikonal tomography from derivatives of Gaussian processes The least well-defined problem in eikonal tomography is how to go from point measurements of phase delay to the phase delay gradient map (Lin et al.). It is in this process that the practitioner has the greatest control over the resulting phase velocity map; intuitively, we can immediately see that over-smoothing the map will result in a measurement of C_p that is too large; conversely, maps that are too rough will result in a C_p that is too small. Past studies have typically employed splines, either in tension (e.g., Lin et al.; Lin and Ritzwoller) or smoothing (Chevrot and Lehujeur), to perform prediction. The spline framework is a robust general interpolation or smoothing method; however, in its basic formulation it gives a single maximum-likelihood estimate of the prediction, with no associated uncertainty information. This study aims to place the problem of estimating an optimal phase gradient map on a robust Bayesian footing, where all assumptions are explicit, adjustable, and optimizable in the face of the data. Here, the problem of predicting phase delay measurements is posed as a Gaussian process (GP) regression; we will see that this framework meets the desiderata for estimating phase gradients. GPs are a particular framework for defining distributions over function spaces (Rasmussen and Williams). GPs have the property that any finite collection of points sampled from them will have a multivariate Gaussian distribution. A GP is defined by a mean function f(x) and covariance function k(x, x'), which generate the mean and covariance matrix of a finite collection of points drawn from the GP. In the context of regression, this leads to a powerful result: if we assume a GP prior for an unknown function, and we then observe data with a Gaussian likelihood, the posterior distribution for the unknown function will also be a GP. Thus, GPs fully generalize finite linear regression and Gaussian inverse problems to the function space setting (Valentine and Sambridge). As differentiation is a linear operation, derivatives of GPs are again also GPs. We will use these properties to derive closed-form posterior distributions for the derivatives of observed data under a GP prior. While the motivating example is eikonal tomography, these techniques are applicable to regression problems generally. Derivatives of GPs have long been used in the dynamical control community (e.g., Solak et al.; Rasmussen). Closer in spirit to seismology, GP derivatives have also been applied to the identification of geodetic transients (Hines and Hetland). The presentation described here is generalized from McHutchon. In general, bold font refers to one-dimensional collections of data and capitals to matrices. Bold-font capitals are therefore collections of n data in d coordinates and will have dimensions n × d. Coordinates (i.e., x) may be vector quantities but will not be bold-font. To begin, assume that there are measurements (X, y) of the observed phase delay y at points X. Assume that the data y are noisy; for the purposes of exposition this is taken to be identically distributed Gaussian noise η with the distribution N(0, σ), but arbitrary multivariate Gaussian noise distributions are also easily handled by GP theory. This implies that there is an unknown true phase delay field τ(x) with

$$y = \tau(X) + \eta. \qquad (3)$$
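As a quick illustration of the finite-collection property described above (again a sketch with invented parameters, not material from the paper), drawing random functions from a zero-mean GP prior with a squared-exponential covariance amounts to sampling a multivariate normal vector whose covariance matrix is the kernel evaluated on the chosen points.

```python
import numpy as np

# Draw sample functions from a zero-mean GP prior on a 1-D grid.
# The kernel amplitude and length scale below are arbitrary illustration values.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)

rho, ell = 1.0, 1.5
K = rho**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)

# Any finite collection of GP values is multivariate normal with covariance K;
# the small jitter keeps the Cholesky factorisation numerically stable.
L = np.linalg.cholesky(K + 1e-10 * np.eye(len(x)))
samples = L @ rng.standard_normal((len(x), 3))   # three prior draws, one per column
print(samples.shape)                              # (200, 3)
```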
The objective of eikonal tomography is to know the field τ(x) so that we can differentiate it and get C_p. I assume that

$$\tau(x) = \tau_0(x) + f(x),$$

where f is a zero-mean GP and τ_0(x) is a reference phase delay field, for example for a laterally homogeneous medium:

$$f(x) \sim \mathcal{GP}\big(0,\, k(x, x')\big),$$

where k(x, x') is the assumed covariance function. For the examples in this work, I will use a squared-exponential kernel with independent length scales in each dimension for the covariance function:

$$k(x, x') = \rho^2 \exp\!\left(-\sum_d \frac{(x_d - x'_d)^2}{2 l_d^2}\right).$$

This covariance function promotes very smooth fields (it is infinitely differentiable), and provides a degree of flexibility due to the independent length scales. I also assume that τ_0(x) = s_0|x| for a fixed reference slowness s_0. Let K_{XX'} be the matrix of evaluating k with rows given by X and columns by X'. The fundamental idea of GP regression is that, given this problem setup, the observed data y and the predicted data τ(X_*) have the joint multivariate Gaussian distribution

$$\begin{pmatrix} y \\ \tau(X_*) \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} \tau_0(X) \\ \tau_0(X_*) \end{pmatrix},\; \begin{pmatrix} K_{XX} + \sigma^2 I & K_{XX_*} \\ K_{X_*X} & K_{X_*X_*} \end{pmatrix} \right).$$

By conditioning τ(X_*) on the observed data y we have (Rasmussen and Williams)

$$\tau(X_*)\,|\,y \sim \mathcal{N}\!\Big( \tau_0(X_*) + K_{X_*X}(K_{XX} + \sigma^2 I)^{-1}\big(y - \tau_0(X)\big),\;\; K_{X_*X_*} - K_{X_*X}(K_{XX} + \sigma^2 I)^{-1} K_{XX_*} \Big).$$

Note that data error models with Gaussian covariance just require replacing σ²I with C_D. Figure 1 shows an example application of GP regression for obtaining τ(x)|y, with comparison to the approach based on regression using splines (e.g., Lin et al.; Lin and Ritzwoller); in this case, using smoothing splines (e.g., Chevrot and Lehujeur). This example emulates a typical local surface wave application, using 100 data points uniformly distributed within the inversion region with 0.2 s of added Gaussian noise. The synthetic phase delay field is strongly perturbed away from the reference model to highlight differences between the two methods. The GP mean and standard deviation are given analytically, and show substantial differences with the smoothing spline fit; here, the spline smoothing parameter is automatically set by the FITPACK routine (Dierckx). In comparison to the GP, the spline performs less well, especially in areas of data gaps. I can now calculate expectation values for the derivatives; note that from now on I implicitly condition on y but will not write it out for ease of notation, unless it seems particularly germane to do so. Figure 2 Cross-sections through the GP reconstruction, showing the true phase delay (black), GP mean (orange) and standard deviation (grey). The GP reconstruction is overlaid with the noisy observed delay values. The GP posterior closely follows the true phase delay curve, with substantially higher uncertainty near the edges of the domain, even before extrapolation.
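The joint and conditional distributions above translate almost line-for-line into code. The sketch below is a minimal NumPy implementation of the regression step with an invented station layout, noise level, and hyperparameters; it is meant only to illustrate the structure of the calculation, not to reproduce the paper's results.

```python
import numpy as np

def sqexp_kernel(XA, XB, rho=1.0, lengths=(10.0, 10.0)):
    """k(x, x') = rho^2 * exp(-0.5 * sum_d ((x_d - x'_d) / l_d)^2)."""
    diff = XA[:, None, :] - XB[None, :, :]
    return rho**2 * np.exp(-0.5 * np.sum((diff / np.asarray(lengths))**2, axis=-1))

def gp_posterior(X, y, Xstar, tau0, sigma=0.2, **kern):
    """Posterior mean and covariance of tau(Xstar) given noisy delays y at X."""
    Kxx = sqexp_kernel(X, X, **kern) + sigma**2 * np.eye(len(X))
    Ksx = sqexp_kernel(Xstar, X, **kern)
    Kss = sqexp_kernel(Xstar, Xstar, **kern)
    alpha = np.linalg.solve(Kxx, y - tau0(X))
    mean = tau0(Xstar) + Ksx @ alpha
    cov = Kss - Ksx @ np.linalg.solve(Kxx, Ksx.T)
    return mean, cov

# Reference delay for a laterally homogeneous medium, tau0(x) = s0 * |x|.
s0 = 0.4
tau0 = lambda pts: s0 * np.linalg.norm(pts, axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(100, 2))             # station locations (km), invented
y = tau0(X) + rng.normal(0, 0.2, size=len(X))      # noisy delays (s), invented
Xstar = np.array([[50.0, 50.0], [90.0, 10.0]])
mean, cov = gp_posterior(X, y, Xstar, tau0, sigma=0.2, rho=1.0, lengths=(20.0, 20.0))
print(mean, np.sqrt(np.diag(cov)))
```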
Since differentiation is a linear operation, and linear operations acting on normal distributions result in normal distributions, the components of ∇τ must also be normally distributed, and are completely specified by their mean and covariance. The collection of means for component i is immediately given by recognizing that, as the expectation operator is also linear, it commutes with the derivative operator:

$$\mathbb{E}\!\left[\frac{\partial \tau(X_*)}{\partial x_i}\,\Big|\,y\right] = \frac{\partial \tau_0(X_*)}{\partial x_i} + \frac{\partial K_{X_*X}}{\partial x_i}\,(K_{XX} + \sigma^2 I)^{-1}\big(y - \tau_0(X)\big).$$

Note that the mean values of the derivatives are calculated independently for each dimension; however, as we will see, they do have covariance between output points and between dimensions. For the covariance, consider n × n blocks of the covariance matrix of size nd × nd, where d is the dimension and n is the number of output points. Note that I choose to order the hierarchy of the covariance matrix first by derivative coordinate, and second by data point index, as it makes the notation more convenient. As the covariance is bilinear,

$$\mathrm{Cov}\!\left(\frac{\partial \tau(X_*)}{\partial x_i},\, \frac{\partial \tau(X'_*)}{\partial x'_j}\right) = \frac{\partial^2}{\partial x_i\,\partial x'_j}\,\mathrm{Cov}\big(\tau(X_*),\, \tau(X'_*)\big),$$

where I introduce the dummy variable x' to represent the second argument in the covariance (X_* = X'_*, but we want to formally differentiate with respect to the second slot only when using x'). Continuing on,

$$\mathrm{Cov}\!\left(\frac{\partial \tau(X_*)}{\partial x_i},\, \frac{\partial \tau(X'_*)}{\partial x'_j}\right) = \frac{\partial^2 K_{X_*X'_*}}{\partial x_i\,\partial x'_j} - \frac{\partial K_{X_*X}}{\partial x_i}\,(K_{XX} + \sigma^2 I)^{-1}\,\frac{\partial K_{XX'_*}}{\partial x'_j}.$$

So that I can compress the notation somewhat, let us define K̃_XX = K_XX + σ²I and Δy = y − τ_0(X). For the 2D case investigated here (noting that higher dimensions immediately generalize), the conditional posterior is a multivariate Gaussian with mean and covariance given by the two expressions above:

$$\nabla\tau(X_*)\,|\,y \sim \mathcal{N}\!\left( \nabla\tau_0(X_*) + \frac{\partial K_{X_*X}}{\partial x}\,\tilde{K}_{XX}^{-1}\,\Delta y,\;\; \frac{\partial^2 K_{X_*X'_*}}{\partial x\,\partial x'} - \frac{\partial K_{X_*X}}{\partial x}\,\tilde{K}_{XX}^{-1}\,\frac{\partial K_{XX'_*}}{\partial x'} \right),$$

which is an exact distribution for the derivatives evaluated at X_*. Figure 3 shows the mean and covariance structure for the derivatives at two test points calculated using the above theory, compared to the true derivative of the phase delay, and finite-difference estimates computed using random draws of the GP estimate of the phase delay (i.e., Monte-Carlo finite-difference derivatives). Both the analytic and Monte-Carlo results closely agree with each other and with the true values for the derivatives. In Figure 4, I use the multivariate normal posterior for the derivatives to generate samples of the posterior for the squared slowness and compare them against the predictions from the smoothing spline. The GP posterior is in this case more accurate than the spline result, and also delivers uncertainty information. Unfortunately, it turns out that this is as far as it is possible to go with exact distributions, as the velocity is a nonlinear function of the gradients in eikonal tomography. Thankfully, however, there is well-developed theory for approximating quadratic forms of normal random variables, and as 1/C_p² = ‖∇τ‖², which is a quadratic form of a normal random variable, it may be possible to try for a good approximation to the velocity. Before deriving one, however, there are two important issues to investigate: setting hyperparameters, and closed forms for the expectation value of velocity. Finding good values for GP hyperparameters The hyperparameters of the GP may be optimized by maximizing the log marginal likelihood of the observations, where the marginalization is performed over the unknown function values τ(X) (Rasmussen and Williams). This gives the type-II maximum likelihood estimate; the hyperparameters have a point estimate, whereas the function values have a full posterior distribution given that point estimate. The log marginal likelihood for GP regression is given by

$$\log p(y\,|\,\theta) = -\tfrac{1}{2}\,\Delta y^{T}\,\tilde{K}_{XX}(\theta)^{-1}\,\Delta y - \tfrac{1}{2}\log\big|\tilde{K}_{XX}(\theta)\big| - \tfrac{n}{2}\log 2\pi,$$

where the covariance matrix K̃_XX(θ) is treated as a function of the hyperparameters θ, and n is the number of data. Intuitively, the log marginal likelihood parsimoniously balances data misfit (the first term) with the level of uncertainty (the second term). For a 2D squared-exponential kernel with independent length scales, independent Gaussian data noise, and a laterally homogeneous medium as a reference model, the hyperparameters are θ = (ρ, l_1, l_2, σ, s_0).
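Continuing the sketch above (and reusing `sqexp_kernel`, `tau0`, `s0`, `X`, and `y` from it), the derivative formulas can be implemented by differentiating the squared-exponential kernel analytically. The evaluation point `xs` and all hyperparameter values are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def grad_posterior(X, y, xs, sigma=0.2, rho=1.0, lengths=(20.0, 20.0)):
    """Posterior mean and covariance of grad tau at a single point xs."""
    l2 = np.asarray(lengths) ** 2
    Ktilde = sqexp_kernel(X, X, rho, lengths) + sigma**2 * np.eye(len(X))
    k_s = sqexp_kernel(xs[None, :], X, rho, lengths)[0]       # k(xs, x_j), shape (n,)
    # d k(xs, x_j) / d xs_i = -(xs_i - x_{j,i}) / l_i^2 * k(xs, x_j)
    dk = -((xs[None, :] - X) / l2).T * k_s                     # shape (d, n)
    alpha = np.linalg.solve(Ktilde, y - tau0(X))
    mean = s0 * xs / np.linalg.norm(xs) + dk @ alpha           # grad tau0 = s0 * x / |x|
    prior = rho**2 * np.diag(1.0 / l2)                         # prior gradient covariance at coincident points
    cov = prior - dk @ np.linalg.solve(Ktilde, dk.T)
    return mean, cov

mu_g, Sigma_g = grad_posterior(X, y, np.array([50.0, 50.0]))
print("posterior mean gradient:", mu_g)
print("posterior gradient covariance:\n", Sigma_g)
print("plug-in phase velocity (km/s):", 1.0 / np.linalg.norm(mu_g))
```

The hyperparameters used here are fixed by hand; in the paper they are instead tuned by maximising the log marginal likelihood given above.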
A special exact case for eikonal tomography: The expectation value of squared slowness given normally distributed derivatives Consider without loss of generality a 2D case. The squared slowness is given by

$$S^2 = \frac{1}{C_p^2} = \tau_{x_1}^2 + \tau_{x_2}^2.$$

Assume the phase gradient is given by a multivariate Gaussian random variable that describes the joint distribution of the two derivatives τ_{x1}, τ_{x2}, and let S² be the random variable describing the distribution of slowness squared. This is, for example, the distribution that arises for the derivatives at a single point conditioned on observations under GP regression as described above. Then

$$\nabla\tau \sim \mathcal{N}(\mu, \Sigma), \qquad S^2 = \nabla\tau^{T}\,\nabla\tau.$$

As the slowness squared is a scalar, I can take the trace to proceed as follows, following Kendrick:

$$\mathbb{E}[S^2] = \mathbb{E}\big[\mathrm{tr}(\nabla\tau\,\nabla\tau^{T})\big] = \mathrm{tr}\big(\mathbb{E}[\nabla\tau\,\nabla\tau^{T}]\big) = \mathrm{tr}\big(\Sigma + \mu\mu^{T}\big) = \mathrm{tr}(\Sigma) + \mu^{T}\mu.$$

Figure 4 Comparison of the true squared slowness against results calculated using a squared-exponential Gaussian process with tuned hyperparameters. The GP mean and standard deviation are calculated by drawing 100,000 predicted travel time gradients. The spline squared slowness has been calculated using 5th-order centred finite differences. The GP result has a mean closer to the truth, and additionally adds uncertainty information, when compared to the smoothing spline. The colouring of the difference plots is arranged according to the usual seismic convention of blue being fast and red being slow; in this case blue means that the predicted slowness is smaller compared to the truth and vice versa; note that this induces a colour flip compared to Figure 1. It is instructive to note that the expectation value of squared slowness is strictly greater than the sum-of-squares of the mean derivatives, so that velocities are "biased" lower after accounting for errors. Note that this is true for any calculation that assumes the derivatives have a Gaussian distribution, not just the Gaussian process framework analysed here (a brief numerical check of this identity is sketched below). Approximation of the posterior using the saddlepoint method The analytic results obtained for the derivative ∇τ have already given us a great deal. Any expectation value that depends on these derivatives (in particular, moments of the phase velocity) can be calculated using the Monte-Carlo method, i.e., by drawing many random samples of ∇τ and then calculating the desired statistics on this random sample. Because it is possible to draw directly from the posterior of ∇τ given above, every sample can be used and is independent (unlike in Markov-Chain Monte-Carlo). As such, these expectation values will usually converge quickly. However, there are cases where it is still useful to have approximations of the posterior that can be even more quickly calculated; for instance if the eikonal-tomography-derived phase velocities are being used in a joint inverse problem, or if accurate statistics for extreme values need to be calculated. A frequently used simple approximation would be to use Laplace's method directly on the posterior distribution for ||∇τ||² or C_p. The approximate posterior under this technique is the best-fitting Gaussian distribution. However, looking at Figure 5, it is clear that neither distribution is close to Gaussian, and may not in fact have a clear mode to fit.
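Here is the brief numerical check promised above: for any Gaussian gradient, the Monte-Carlo average of the squared norm should match tr(Σ) + μᵀμ. The mean vector and covariance matrix below are invented for illustration and are not values from the paper.

```python
import numpy as np

# Check E[||g||^2] = tr(Sigma) + mu^T mu for a Gaussian gradient g ~ N(mu, Sigma).
rng = np.random.default_rng(1)
mu = np.array([0.30, 0.12])                      # assumed mean delay gradient (s/km)
Sigma = np.array([[4e-4, 1e-4],
                  [1e-4, 9e-4]])                 # assumed gradient covariance

samples = rng.multivariate_normal(mu, Sigma, size=200_000)
mc = np.mean(np.sum(samples**2, axis=1))
exact = np.trace(Sigma) + mu @ mu
print(f"Monte Carlo: {mc:.6f}   closed form: {exact:.6f}")
# The expectation exceeds ||mu||^2, so velocities inferred from noisy gradients
# are biased low relative to 1/||mu||, as noted in the text.
```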
Instead of approximating the posterior directly, I use the saddlepoint approximation. The saddlepoint approximation for the distribution of random variables was originally proposed by Daniels, with Butler giving a thorough account of the basic method. Very roughly, the idea is to examine the cumulant generating function (CGF)

$$K_X(s) = \log \int_{\mathcal{X}} e^{sx} f(x)\,dx,$$

where f(x) is the probability distribution of X and 𝒳 is its domain of support. The existence of the CGF requires that there is some interval a < 0 < b such that the above integral converges. Applying Laplace's approximation for this integral and rearranging terms,

$$\hat{f}(x) = \frac{1}{\sqrt{2\pi K''(\hat{s})}}\,\exp\big(K(\hat{s}) - \hat{s}x\big),$$

where ŝ is the solution of K'(s) = x; ŝ is a saddlepoint of the corresponding integrand, hence the name "saddlepoint approximation". If the application requires it, f̂(x) then typically has to be normalized to integrate to unity so that it is a true probability distribution, giving us

$$\bar{f}(x) = \frac{\hat{f}(x)}{\int \hat{f}(u)\,du}.$$

If the application only requires the PDF up to proportionality (as is often the case), then the above normalization is not required, and the saddlepoint approximation requires no integration whatsoever. Butler shows that this optimization problem is well posed and gives a unique real solution for ŝ, if s is constrained to be inside the interval that contains 0 for which K(s) converges. Serendipitously, this low-order method often provides extremely good approximations to the PDF, as the CGF K contains the full information about the distribution of X. For sums of random variables (such as ||∇τ||²), it is almost always easier to construct the CGF K analytically rather than the PDF f, as

$$K_{X+Y}(s) = K_X(s) + K_Y(s) \quad \text{whereas} \quad f_{X+Y} = f_X * f_Y,$$

where X and Y are (independent) random variables and * is the convolution operator. Therefore, when using the saddlepoint approximation to obtain the PDF, multiple potentially slowly converging convolution integrals are converted into a simple root-finding problem with a unique solution. Let us now apply this concept to deriving the PDFs of ||∇τ||² and C_p from our closed-form posteriors for the phase delay derivatives ∇τ. To do this, my goal is to write the distribution of ||∇τ||² in a form for which I can determine the CGF K_{||∇τ||²}, and then use the saddlepoint approximation to obtain the posterior PDF f̂_{||∇τ||²}, from which I can also obtain the posterior PDF f̂_{C_p} using a change-of-variables formula.
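To see the recipe in action on a case with a known answer (an illustration of my own, not an example from the paper), the sketch below applies the saddlepoint approximation to a sum of k independent unit-exponential variables, whose CGF is K(s) = -k log(1 - s), and compares it with the exact Gamma(k, 1) density.

```python
import numpy as np
from scipy.special import gammaln

k = 5  # number of unit-exponential terms in the sum (illustrative choice)

def K(s):   return -k * np.log1p(-s)       # CGF, valid for s < 1
def dK(s):  return k / (1.0 - s)
def d2K(s): return k / (1.0 - s)**2

def saddlepoint_pdf(x):
    """Unnormalised saddlepoint density; K'(s) = x has the closed-form root below."""
    s_hat = 1.0 - k / x
    return np.exp(K(s_hat) - s_hat * x) / np.sqrt(2 * np.pi * d2K(s_hat))

def exact_pdf(x):
    return np.exp((k - 1) * np.log(x) - x - gammaln(k))   # Gamma(k, 1) density

for x in (2.0, 5.0, 10.0):
    print(x, saddlepoint_pdf(x), exact_pdf(x))
# The ratio is a constant (a Stirling-series factor), so after normalisation the
# saddlepoint approximation reproduces the Gamma density exactly in this case.
```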
For simplicity, I approximate the posterior for a single point x_* given data (X, y). I have shown that ∇τ(x_*)|y ∼ N(µ, Σ) for a d-dimensional mean vector µ and a d × d covariance matrix Σ. Therefore,

$$\nabla\tau(x_*)\,|\,y = \mu + Q\Lambda^{1/2} h,$$

where QΛQᵀ = Σ is an eigenvalue decomposition of Σ and h is a d-dimensional standard normal variable h ∼ N(0, I). Q contains the normalized eigenvectors as its columns and Λ is a diagonal matrix of corresponding eigenvalues. Assuming that the phase delay measurements are taken in different locations, all of the terms in Λ are positive, as then Σ, as a non-degenerate covariance matrix, is positive definite. I can then write

$$\|\nabla\tau(x_*)\|^2 = \sum_i \lambda_i \big(h_i + \tilde{\mu}_i\big)^2,$$

where μ̃ = Λ^{−1/2} Qᵀ µ. The eigenvalues collected in Λ are labelled λ_i, with corresponding components of μ̃ labelled μ̃_i. This quadratic form can be written as a sum over non-central chi-squared distributions (Imhof; Butler and Paolella). The degree of freedom of each non-central chi-squared corresponds to the multiplicity of the eigenvalues of Σ, which will for our purposes always be distinct, giving

$$\|\nabla\tau(x_*)\|^2 = \sum_i \lambda_i\, \chi'^2_1\!\big(\tilde{\mu}_i^2\big).$$

Because of the summation property of the CGF, the CGF of ||∇τ(x_*)||² is then (Butler and Paolella)

$$K(s) = \sum_i \left[ \frac{\tilde{\mu}_i^2 \lambda_i s}{1 - 2\lambda_i s} - \frac{1}{2}\log\big(1 - 2\lambda_i s\big) \right],$$

and the derivatives are given by

$$K'(s) = \sum_i \left[ \frac{\lambda_i}{1 - 2\lambda_i s} + \frac{\tilde{\mu}_i^2 \lambda_i}{(1 - 2\lambda_i s)^2} \right], \qquad K''(s) = \sum_i \left[ \frac{2\lambda_i^2}{(1 - 2\lambda_i s)^2} + \frac{4\tilde{\mu}_i^2 \lambda_i^2}{(1 - 2\lambda_i s)^3} \right].$$

The domain of convergence in which the root of K'(s) = x is sought is the largest open interval containing zero for which 1 − 2λ_max s > 0, where λ_max is the largest eigenvalue of Σ. Applying the saddlepoint approximation given the above K gives us the saddlepoint distribution f̂_{||∇τ(x_*)||²}(x) for the squared slowness, which can be normalized to give

$$\bar{f}_{\|\nabla\tau(x_*)\|^2}(x) = \frac{\hat{f}_{\|\nabla\tau(x_*)\|^2}(x)}{\int_0^{\infty} \hat{f}_{\|\nabla\tau(x_*)\|^2}(u)\,du}.$$

The transformation between squared slowness and phase velocity is given by g(x) = 1/√x, which is a monotone decreasing function. The appropriate Jacobian transformation rule to obtain the approximate PDF of phase velocity is then (Kadane)

$$\hat{f}_{C_p}(c) = \hat{f}_{\|\nabla\tau(x_*)\|^2}\!\left(\frac{1}{c^2}\right) \frac{2}{c^3}.$$

The approximate distributions f̂_{||∇τ(x_*)||²}(x) and f̂_{C_p}(c) are plotted against a histogram of Monte-Carlo draws of the squared slowness and phase velocity using the analytic derivatives in Figure 5, showing that the saddlepoint approximations are a close fit. Higher-order saddlepoint approximation terms and approximations for the cumulative distribution function (CDF) are collected in Butler. The saddlepoint method can be further applied to the joint distribution function of two points to derive the approximate spatial covariance (Al-Naffouri et al.). Because the underlying posterior distribution for the derivatives is given by a GP, the covariance completely describes the spatial behaviour of the velocity distribution, and so the ability to calculate the distribution for any two arbitrary points is sufficient to fully characterize the posterior. However, the resulting root-finding problem will be two-dimensional rather than one-dimensional and is substantially more complicated than the forms derived here, so it is left for future work.
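The specialised CGF above can be turned into a working density evaluator in a few lines. The sketch below uses an invented gradient mean and covariance (not values from the paper); the bracket passed to the root finder enforces the convergence interval s < 1/(2λ_max) discussed in the text.

```python
import numpy as np
from scipy.optimize import brentq

mu = np.array([0.30, 0.12])                     # assumed posterior mean gradient (s/km)
Sigma = np.array([[4e-4, 1e-4],
                  [1e-4, 9e-4]])                # assumed posterior gradient covariance

lam, Q = np.linalg.eigh(Sigma)                  # Sigma = Q diag(lam) Q^T
delta2 = (Q.T @ mu)**2 / lam                    # non-centrality parameters mu_tilde_i^2

def K(s):   return np.sum(s * lam * delta2 / (1 - 2 * s * lam) - 0.5 * np.log1p(-2 * s * lam))
def dK(s):  return np.sum(lam / (1 - 2 * s * lam) + lam * delta2 / (1 - 2 * s * lam)**2)
def d2K(s): return np.sum(2 * lam**2 / (1 - 2 * s * lam)**2 + 4 * lam**2 * delta2 / (1 - 2 * s * lam)**3)

s_max = 1.0 / (2 * lam.max())                   # CGF converges for s < s_max

def saddlepoint_pdf(x):
    """Unnormalised saddlepoint density of the squared slowness at x."""
    s_hat = brentq(lambda s: dK(s) - x, -1e6, s_max - 1e-9)
    return np.exp(K(s_hat) - s_hat * x) / np.sqrt(2 * np.pi * d2K(s_hat))

def velocity_pdf(c):
    """Density of C_p = 1/sqrt(x) via the change-of-variables rule (up to normalisation)."""
    return saddlepoint_pdf(1.0 / c**2) * 2.0 / c**3

print(velocity_pdf(1.0 / np.linalg.norm(mu)))   # density near the plug-in velocity
```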
Implications for sample statistics Most eikonal tomography applications report per-station per-frequency error statistics by computing the standard error in the mean phase velocity over multiple sources. Studies typically appeal to the central limit theorem to justify the use of the sample standard error formula and sample mean for quantifying the data distribution. The reported standard errors are then used to weight data in further inversions; a typical use case is to perform 1D Bayesian inversion beneath each station using the mean values and the reported error. Previous methods do not optimally smooth the phase delay regression that underlies eikonal tomography, potentially producing biased results, and do not produce uncertainty estimates for each source. However, uncertainties reported in studies using these methods are often extremely low, amounting to a few percent of the estimated phase velocity. In our GP framework, Monte Carlo sampling can be used to directly estimate the distribution of sample statistics such as the mean over multiple sources. As a motivation, observe that both the empirical distribution for phase velocity and its saddlepoint approximation are heavy-tailed in Figure 5. This is a point relatively close to the edge, which can result in a distribution that is far from Gaussian. Taking this point, I then draw 4^n samples of velocity for n = 0 … 6, calculate the sample mean and median, and then repeat 100,000 times to find the distribution of the sample statistics. Figure 6 shows the results. The sample mean converges only slowly to a normal distribution, and is still broad even with 4^6 samples. In comparison, the sample median is well-behaved and converges quickly as the sample size increases. For both sample statistics, the distribution for small numbers of samples is unsurprisingly quite similar to the underlying velocity distribution, and is consequently heavy-tailed; this should be taken into consideration for applications such as fitting azimuthal anisotropy profiles to eikonal tomography results, where many azimuth bins near the edges of arrays will often have few contributing sources. Future work In this study, I present the simplest possible implementation of a GP framework for eikonal tomography with analytic derivatives of phase delay. The flexibility of GP modelling offers several opportunities for future improvements that should result in more robust inversions. The first of these is that multi-frequency eikonal inversion is naturally handled by GP modelling by assuming a space-frequency covariance function. The simplest model would use a separable function k((x, f), (x', f')) = k_x(x, x') k_f(f, f'). A smooth frequency covariance k_f(f, f') would reduce the impact of missing data in particular frequency bins, which can be an issue due to spectral holes in surface wave trains. Secondly, the squared-exponential kernel used in this study could be further improved to better represent the behaviour of true seismic wavefields; for instance, the problem could be recast in radial coordinates with a radial-azimuthal kernel as studied in Padonou and Roustant. Due to the natural cylindrical symmetry of wave propagation, this may allow us to reduce the uncertainty in the eikonal tomography results. In particular, this kernel choice would be appropriate in use cases such as ambient-noise tomography where the seismic source is inside the array, resulting in highly non-planar wavefronts.
A third option would be to use the GP framework for smoothing the underlying full wavefield records before processing them for phase delay measurements, or for other gradient-based techniques such as wavefield gradiometry (e.g., Langston; de Ridder and Biondi; de Ridder and Maddison) or full Helmholtz tomography (Lin and Ritzwoller). These applications would potentially require extending the GP derivative theory to higher order, but again noting that derivatives are linear, the resulting distributions for higher-order spatial terms will also be GPs. The GP framework is especially well suited towards the inclusion of strain measurements in joint wavefield reconstruction (e.g., Muir and Zhan), as the appropriate covariance kernels can be calculated using the results derived above; this is an enticing prospect considering the proliferation of distributed acoustic sensing (DAS) strain sensors (Zhan). GP-based techniques have also been used in geodesy to investigate transient strain rates (e.g., Hines and Hetland), and the saddlepoint approximation techniques investigated here could offer a way to more accurate quantification of strain invariants arising from geodetic analysis. Finally, as the number of phase delay measurements increases across stations and frequency bins, the size of the data covariance matrix K increases. For n measurements, the cost of inverting this matrix scales like O(n³), so very large collections of measurements pose a challenge for GP-based inversion. Due to the popularity of GPs in machine learning research, there is a wide range of sparse GP approximations that produce almost identical results and still result in analytic derivatives once the sparsity structure is determined (e.g., Titsias; Lindgren et al.; Wilson and Nickisch). Employing these methods would allow efficient upscaling of the methodology presented here to multi-frequency inversion of USArray-scale datasets. Conclusions This study derives an analytic posterior distribution for phase delay derivatives, and then derives approximate posteriors for phase velocity using the saddlepoint approximation applied to the eikonal equation. The result is a fully Bayesian eikonal tomography that requires no MCMC sampling to characterize the posterior. As such, computations are easily implemented and highly efficient. Using the GP framework as a basis, I investigated two important effects that impact the interpretation of eikonal tomography results, namely the effect of the inclusion of data uncertainty on the expectation value of velocity and the behaviour of sample statistics, both of which suggest that the uncertainty in eikonal tomography results is greater than previously assessed. The GP framework presents a fully interpretable way forward to improve eikonal tomography in the future, with many opportunities for future work due to the flexible and robust nature of GP modelling. Data and code availability I have included the Pluto notebook used to generate the results in the submission. This notebook will be uploaded to Zenodo after acceptance so that the assigned DOI corresponds to the final version used for the publication. Figure 1 compares the GP reconstruction with the true values of the phase delay map. The GP mean closely fits the true values, although the level of uncertainty becomes quite substantial near the edges of the domain.
Figure 1 Comparison of the GP posterior (showing mean and point-wise standard deviation) of the phase delay with a smoothing-spline based solution for an example phase delay data set with 100 randomly distributed points and 0.2 s Gaussian noise. There are notable differences in the estimated phase delay, especially where there are gaps in the data coverage. The colouring of the difference plots is arranged according to the usual seismic convention of blue being fast and red being slow; in this case blue means that the predicted arrival is fast compared to the truth and vice versa. Figure 3 Corner plot showing the covariance of derivatives at two test points, and their individual histograms. The test points are τ_1 at (0.1875, 0.3875), and τ_2 at (5.5, 3.5). Black crosses and lines show the true value of the derivatives. Orange lines show the analytical GP-based solutions derived in this paper, with ellipses drawn at the 95% credible level and crosses showing the mean. Grey circles and histograms show finite-difference (FD) based derivatives using Monte-Carlo samples of the GP posterior for phase delay, and red crosses and ellipses show the mean and estimated covariance at 95% confidence from the FD draws. Figure 5 Comparison of the empirical CDF and PDF (grey) for the squared slowness and phase velocity for the point at (0.1875, 0.3875) with the saddlepoint (SP) approximation (orange). For the PDF, the true value is also shown in black and the median, 25th and 75th percentiles of the empirical PDF are shown in purple. The empirical distributions are truncated between 0.01 and 10 for plotting purposes. Figure 6 Comparison of the distribution of sample means and sample medians for the phase velocity at (0.1875, 0.3875). The mean or median is calculated by drawing 4^n samples for n = 0 … 6. This process is repeated 100,000 times to obtain the distributions of sample means and medians. The sample mean converges to a normal distribution slowly.
7,357
2023-02-10T00:00:00.000
[ "Geology" ]
Feasibility of Aspergillus keratitidis InaCC1016 for synthetic dyes removal in dyes wastewater treatment Several industries produce waste that can not be degraded naturally or toxic to a living organism, i.e., dyes waste. Fungi were considered as the best candidates for dyes waste treatment among other microorganisms because of fungi more resistance in the lack of nutrient conditions. Besides, their biomass can also function as an adsorbent that was able to absorb dyes so that it is more effectively applied. This study aims to evaluate the feasibility Aspergillus keratitidis to degrade Congo Red (CR) and Methylene Blue (MB) in the solid and liquid state. Dyes decolorization in the solid-state was observed based on clear zone produced, and in the liquid state, decolorization was determined spectrophotometrically. A. keratitidis was able to decolorize synthetic dyes in both media, solid and liquid state. CR was more effective dyes to be removed by A. keratitidis than MB. This fungus able to decolorize about 96% of 200 ppm CR within seven days and 63% of 100 ppm MB within ten days. Moreover, MB was more toxic dyes than CR, which inhibited A. keratitidis growth. A. keratitis was suggested involved lignolytic enzyme on dyes decolorization due to it can degrade lignin compound, but it needs a further study to prove it. Based on our knowledge, this is the first report about a potential study of A. keratitidis in dyes decolorization and lignin degradation activity. Introduction Nowadays, the environmental pollution issue is being a concern of people in over the world. The increase in population over the past few decades has boosted industrial growth to grow faster to meet the demand. Industrial activities have a detrimental effect on environmental sustainability. Several industries produce waste that can not be degraded naturally or toxic to a living organism. For example, textile industries produced wastewater containing synthetic dyes that mostly were hardly degraded naturally, toxic, and carcinogenic [1] [2]. The textile industry is the largest supplier of wastewater because this industry consumes large amounts of water and around 17-20% of the total water consumed will be discharged in the form of waste [3][4]. When these waste released into the environment, it can hamper not only the aquatic environment but also human health [5][6] [7]. Many dye wastewater treatment technologies have been developed which involve oxidation methods (photocatalytic oxidation, ozone, H2O2, Fenton process) and physical methods (adsorption and filtration) [3]. However, the existing methods have not been effectively utilized because of their high costs, need complicated equipment, and produce new pollutant [8]. Biological method (fungi, algae and, Microorganism source and growth Fungus isolate was isolated from wood decay in tropical rain forest located in East Kalimantan, Indonesia. Fungus isolate was identified and deposited in the Indonesian Culture Collection (InaCC) with accessing number InaCCF1016. The fungus was grown in the Potatoes Dextrose Agar (PDA) and incubated at 30 o C for seven days before decolorization assay. Molecular identification of fungus isolate Fungal mycelia collected from 72 h Potatoes Dextrose Broth (PDB) were used to DNA extraction. The extraction of fungal DNA was performed using the nucleon PHYTOpure (Amersham LIFE SCIENCE) nucleon reagent. 
Strain identification was carried out by PCR amplification in ITS using ITS 4: (5′-TCCTCCGCTTATTGATATGC-3′) and ITS 5: (5′-GGAAGTAAAAGTCGTAACAAGG-3′) [10] [11]. DNA sequencing results were analyzed using the ChromasPro version 1.7.5 program (Technelysium Pty Ltd, Australia). Sequence alignment was conducted between the new sequences and the closest sequences to the search results for Basic Local Alignment Search Tool (BLAST) on the National Center for Biotechnology Information (NCBI) website (https://www.ncbi.nlm.nih.gov/) using MUSCLE (Edgar 2004 ) in the MEGA 7 program [12]. Phylogenetic trees were constructed by Neighbor-Joining (NJ) analysis [13] using the MEGA 7 software program and Tamura 3-G+C parameter model as the best evolutionary model [14]. Bootstrap (BS) analysis was performed based on 1000 replications [15]. About 22 ITS rDNA sequences obtained from BLAST search results were used as ingroup, and Trichoderma viride (AM498467) was chosen as outgroup. Synthetic dyes decolorization assay Dyes decolorization by A. keratitidis was firstly examined in the solid-state. Congo red (CR) and Methylene Blue (MB) was chosen as a model of synthetic dye. The media was used containing 80% (w/v) PDB, 2% (w/v) agar, and 200 ppm of dyes. One disc (8 mm) of mycelia carried out from 7 days incubation culture was placed on the center of solid media, and then, the culture was incubated at 30 C for seven days. The diameter of mycelia and clear zone produced was measured, and the decolorization index was calculated following equation [16]: Decolorization index (%) = 100% (1) Where DD was decolorization zone diameter and MD was mycelia diameter of fungus on media containing dyes. The decolorization of synthetic dyes also performed on the 50 ml liquid media that has the same formula as the solid media without agar addition. Two discs of mycelia taken from 7 days incubation fungus culture were added into the 300 ml Erlenmeyer flask, and the culture was incubated in a shaker incubator at 110 rpm, 30 o C for ten days. The uninoculated media was used as a control, and the experiment was set with three replicates. Culture media was withdrawn from the flask at 3, 7, and 10 days time interval, and the CR decolorization was monitored spectrophotometrically at wavelength 480 nm, and the MB was at 660 nm. The effect of dyes initial concentration on dyes decolorization rate also was performed on 100 and 200 ppm for CR and 50 and 100 ppm for MB. Biomass of fungus was also collected in the final time of incubation through a paper filter, and biomass was dried at 60 o C for 24 h. Biomass production was expressed as g dry weight of mycelia was produced in one L of media (g L -1 ). Lignin degradation assay The ability of fungus isolate to degrade lignin was observed using two methods of confirmation. First, fungus isolate was grown on solid media consisted of minimal salts media MSM supplemented with lignin extract as a sole carbon source (MSM-L) and 2% (w/v) agar. MSM consisted of 4.5 g / l K2HPO4; 0.53 g / l KH2PO4; 0.5 g / l CaCl2 2H2O; 0.5 g / l MgSO4.7H2O; 5 g / l NH4NO3; 0.001 g / l CuSO4.5H2O; 0.001 g / l FeSO4.7H2O; 0.001 g / l MnSO4.H2O; ZnSO4.7H2O. Final concentration of lignin extract in MSM was 0.25 % (v/w). Lignin extract was prepared from the empty fruit oil palm waste. The extraction process followed a method that was used by Barapatre et al. [17]. One disc of fungus mycelia carried out from 7 days culture incubation was inoculated on the center of solid-state media. 
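For reference, the two decolorization measures described above can be expressed as simple calculations. Note that Equation (1) is garbled in this copy; a clear-zone-to-mycelium diameter ratio is consistent with the index values reported later (4.04 and 1.78), and the percent decolorization from absorbance is the conventional control-relative formula, so both should be read as plausible reconstructions rather than the authors' exact definitions. All numbers in the sketch are invented.

```python
def decolorization_index(clear_zone_mm: float, mycelium_mm: float) -> float:
    """Plate-assay index: decolorization (clear-zone) diameter over mycelium diameter."""
    return clear_zone_mm / mycelium_mm

def percent_decolorization(a_control: float, a_sample: float) -> float:
    """Percent loss of dye absorbance relative to the uninoculated control."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical plate measurements and a hypothetical CR time course at 480 nm.
print(decolorization_index(clear_zone_mm=32.0, mycelium_mm=8.0))
a0 = 1.45                                   # uninoculated control absorbance (invented)
for day, a in {3: 0.62, 7: 0.03, 10: 0.16}.items():
    print(f"day {day}: {percent_decolorization(a0, a):.1f}% decolorized")
```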
The culture was incubated at 30 o C for seven days, and the diameter of mycelia was measured on the final day incubation. The fungal grow index was used as an indicator of fungal ability to degrade lignin. As a control, the fungal isolate was grown on rich media, PDA. Fungal growth index was calculated using the following equation: Where Фs was a diameter of fungus mycelia on minimal media (MSM-L) and Фc was a diameter of fungus mycelia on control media (PDA) The second observation was performing in a liquid state containing MSM with the addition of black liquor as a sole carbon source. Lignin was a major compound in black liquor. Two discs of mycelia taken from 7 days incubation fungus culture were added into the 250 ml Erlenmeyer flask containing 50 ml sterile MSM-BL. The culture was incubated in a static condition at room temperature (28-30 o C) for 21 days. The degradation of lignin was observed on the change of the absorption of supernatant media at 280 nm. This wavelength was specific for lignin compound absorption. The liquid media without inoculant was used as a control. The culture media was observed within time interval 3, 7, 10, 14, and 21 days. All of the experiment conducted in this study was set up in 3 replication. Fungal identification (molecular phylogenetic analysis) The phylogenetic tree originating from the NJ analysis consists of 1 InaCC F1016 sequence and 22 closest sequences derived from BLAST results in GenBank (Figure 1). The phylogeny tree shows that the InaCC F1016 sequence is in a clade with Aspergillus keratitidis (KY980627, KY980626, and KY980647) with a 99% bootstrap value. The position of the InaCC F1016 sequence on the phylogenetic tree is one clade with three sequences of Aspergillus keratitidis BLAST search results with a bootstrap value of 99%. This results showed that InaCC F1016 is Aspergillus keratitidis. Aspergillus keratitidis belong to subgenus Polypaecilium that was first described as Sagmonella keratitidis isolated from corneal scraping by Hsieh et al. [18] and then it was also found as the most frequently isolated species from house dust as reported by Tanney et al. [19]. This fungi species was considered as xerophiles microorganism than can growth on low water activity environment [19]. Based on our observation, there is a limited report regarding the functional characterization of this fungus. This fungus culture has been deposited in the Indonesian Culture Collection, LIPI with the catalog number InaCCF1106. Synthetic dyes decolorization assay Feasibility of Aspergillus keratitidis to decolorize Congo Red, and Methylene Blue was investigated in two conditions, solid-state and liquid state. To our knowledge, this is the first time report for A. keratitidis decolorization analysis. Our finding showed that A. keratitidis was able to decolorize 200 ppm CR and MB on solid-state, which was presented as a decolorization index number. The decolorization index (DI) was used as a tool to reflect the ability of fungus on dyes decolorization. More highest the ID number, better the ability in decolorize dyes. Decolorization can be observed as a clear zone surrounding the fungus mycelia. This finding showed that A. keratitidis has a better ability to decolorize CR than MB, 4.04, and 1.78, respectively (Table 1). Based on our observation, CR and MB decolorization by A. keratitidis was high compared to other reports. Fomitopsis rosea was able to decolorize CR and MB with DI number about 1.27 and 1.40, respectively [16]. 
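Similarly, the growth index and the A280-based lignin degradation described above reduce to control-relative ratios. The growth-index equation is also garbled in this copy, so the formulas and all numbers below are illustrative reconstructions rather than values or definitions taken directly from the study.

```python
def growth_index(phi_s_mm: float, phi_c_mm: float) -> float:
    """Mycelium diameter on lignin minimal medium (MSM-L) relative to the PDA control, as %."""
    return phi_s_mm / phi_c_mm * 100.0

def lignin_degradation(a280_control: float, a280_sample: float) -> float:
    """Relative lignin loss inferred from the drop in A280 versus the uninoculated control, as %."""
    return (a280_control - a280_sample) / a280_control * 100.0

# Hypothetical inputs chosen only to land near the magnitudes discussed in the text.
print(growth_index(phi_s_mm=56.0, phi_c_mm=62.0))          # ~90%
print(lignin_degradation(a280_control=2.10, a280_sample=1.18))  # ~44%
```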
Fusarium solani decolorized MB in solid media with DI about 0.8 [20]. Decolorization of CR and MB in liquid medium was monitored within ten days, and the results showed that A. keratitidis more rapidly decolorized CR than MB (Figure 2). Aspergillus keratitidis able to decolorize 100 ppm CR about 98% within seven days. However, the decolorization decreased to 89% at ten days because the medium turn into yellowish that disturbed the measurement. The change in color medium might be due to the fungus produces a pigment or enzymatic reaction that produced color [16]. Besides, our finding showed that A. keratitidis hardly decolorized MB. It can be seen in Figure 2 that only about 63% of MB can be decolorized within ten days. CR and MB belong to different group of dyes. CR is a dye that was classified as an azo dye group that contains an azo group, -N = N-, as part of the structure, while MB was a heterocyclic group. The difference in the chemical structure of these two compounds was thought to affect the decolorization process of dyes by fungi [4]. Hsueh and Chen [21] reported that azo dyes with different properties of a substituent on the aromatic ring could affect the efficiency of biodecolorization. Many studies were reported regarding MB and CR decolorization. The decolorization of dyes by fungi is strain-dependent. Lyra et al. [22] studied the degradation of MB dan CR by white-rot fungi strains. The result showed that Pycnoporus sanguineus was more effective to decolorize CR than MB, that is 72.6% and 54.5% respectively. In other hands, Datronia caperata has the best ability to decolorize MB than CR, that is 74.3% and 20.3% respectively. Other fungus strain has a good ability in both dyes such as Pycnoporus cinnabarinus was able to decolorize about 90% of 100 ppm CR and about 80% of 100 ppm MB within 20 days [16]. Figure 3. Effect of dye initial concentration on decolorization of CR by A. keratitidis In this study, the effect of the dyes initial concentration on decolorization of CR and MB by A. keratitidis was also examined. The decolorization of CR become slower in the first three days when the initial concentration of CR was increased to 200 ppm ( Figure 3). However, high concentration of CR did not inhibit the decolorization of CR in the future incubation. The decolorization of 200 ppm CR reached to 80% in 7 days of incubation, and there was no significant difference of decolorization with 100 ppm CR in the final day of incubation. This finding can be explained that fungus biomass and dyes concentration ratio affects decolorization efficiency. The fungus biomass was remained constant while the dyes concentration increased, so it needs more time to decolorize all of the dye compounds compare to the lower one [23]. In another side, the effect of dye concentration on MB degradation was performed in lower concentration, that was 50 ppm. We tend to examine the degradation of MB in low concentration due to MB was hardly decolorized than CR. Figure 4 showed that the decolorization of MB become faster while the concentration of MB was low. In the seven days incubation, 50 ppm MB was successfully decolorized until 50%, two-fold higher than 100 ppm MB in the same day. However, in the final day incubation, there was no significant difference in decolorization percentage between two concentration of MB. Figure 4. Effect of dye initial concentration on decolorization of MB by A. keratitidis The toxicity effect of dyes on A. 
keratitidis growth was investigated by measuring biomass production in the liquid medium. The results are shown in Table 2. Fungal biomass production was lower in medium containing MB than in medium containing CR at the same dye concentration. Increasing the dye concentration from 100 to 200 ppm did not affect fungal growth in CR. The opposite was found for MB, where increasing the dye concentration had a negative effect on fungal growth: biomass production decreased when the MB concentration was increased from 50 ppm to 100 ppm. High dye concentrations inhibit the metabolic processes of microorganisms and thus affect fungal growth. This finding also indicates that MB is more toxic than CR, since a toxic compound inhibits fungal growth [23]. Lignin degradation assay The ability of fungi to decolorize dyes is often associated with their ability to degrade lignin, so this study also examined the ability of A. keratitidis to degrade lignin; no study of the ligninolytic activity of A. keratitidis has been reported so far. A. keratitidis was cultured on minimal salt medium containing lignin as the sole carbon source (MSM-L) to evaluate its ability to degrade lignin. Our results show that A. keratitidis is able to degrade lignin, as demonstrated by its ability to grow in a medium that provides lignin as the only carbon source. Growth on this medium was quantified as a growth index (Table 3), which compares fungal growth on the lignin medium (MSM-L) with that on the control medium (PDA). A growth index of 100% implies that the fungus grows in the minimal medium as well as in the control medium. Because A. keratitidis has a growth index of 90.3%, it can grow in the minimal medium containing lignin, though not quite as well as in the nutrient-rich control medium. Nevertheless, these results indicate that A. keratitidis can degrade lignin and utilize it as a carbon source for growth. Carbon is one of the main components required by microorganisms, serving as an energy source. Before it can be used as an energy source, lignin must first be degraded into simple compounds by ligninolytic enzymes such as laccase, lignin peroxidase, and manganese peroxidase [24]. Taken together, this evidence indicates that A. keratitidis can degrade lignin. In addition to solid media, lignin degradation was confirmed in liquid media; in liquid media the amount of lignin degraded can be quantified, whereas in solid media it cannot. Lignin degradation was followed through the change in absorbance at 280 nm, a wavelength specific for lignin compounds [25,26]. The results show a significant decrease in absorbance at 280 nm (Figure 5), implying a decrease in lignin content due to degradation by A. keratitidis. A. keratitidis degraded 36% and 44% of the lignin relative to the control after 7 and 14 days of incubation, respectively. This second line of evidence reinforces the conclusion that A. keratitidis can degrade lignin. The ability of ligninolytic fungi to degrade dyes depends on the ligninolytic enzymes they produce [9]. Ligninolytic enzymes produced by lignin-degrading fungi, such as laccase, LiP, and MnP, are non-specific [27] and can degrade other compounds with structures similar to lignin. Lignin is an aromatic polymer, while most synthetic dyes also contain aromatic rings.
This structural similarity with dyestuffs is the basis for the ability of ligninolytic fungi to degrade synthetic dye compounds [28]. Further study is needed to understand the decolorization mechanism of A. keratitidis. Conclusion A. keratitidis was able to decolorize synthetic dyes in both solid and liquid media. CR was removed more effectively by A. keratitidis than MB. Moreover, MB was more toxic than CR and inhibited A. keratitidis growth. Ligninolytic enzymes are suggested to be involved in dye decolorization by A. keratitidis, since the fungus can degrade lignin, but further study is needed to confirm this.
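For reference, the removal percentages quoted in the dye-decolorization and lignin-degradation assays are typically computed from absorbance readings against the uninoculated control, as in the minimal sketch below. The exact expression used by the authors is not stated and is assumed here; the example A280 readings are hypothetical values chosen only to reproduce the reported 36% and 44% lignin removal.

# Minimal sketch of removal percentages from absorbance readings.
# removal(%) = (A_control - A_sample) / A_control * 100 is the standard
# approach and is assumed here; example readings are hypothetical.

def removal_percent(a_control: float, a_sample: float) -> float:
    """Percent removal relative to the uninoculated control at the same time point."""
    return (a_control - a_sample) / a_control * 100.0

if __name__ == "__main__":
    # Hypothetical A280 readings of the MSM-BL supernatant: day -> (control, inoculated)
    a280 = {7: (1.52, 0.97), 14: (1.50, 0.84)}
    for day in sorted(a280):
        ctrl, smp = a280[day]
        print(f"day {day:2d}: lignin removal ~ {removal_percent(ctrl, smp):.0f} %")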
4,092.2
2020-02-22T00:00:00.000
[ "Engineering" ]
Application of the path optimization method to a discrete spin system The path optimization method, which is proposed to control the sign problem in quantum field theories with continuous degrees of freedom by machine learning, is applied to a spin model with discrete degrees of freedom. The path optimization method is applied by replacing the spins with dynamical variables via the Hubbard-Stratonovich transformation, and the sum with the integral. The one-dimensional (Lenz-)Ising model with a complex coupling constant is used as a laboratory for the sign problem in the spin model. The average phase factor is enhanced by the path optimization method, indicating that the method can weaken the sign problem. Our result reproduces the analytic values with controlled statistical errors. I. INTRODUCTION To understand the non-perturbative properties of quantum field theories and spin models, the Monte Carlo (MC) method plays an important and crucial role.In the MC calculation, expectation values are evaluated with the Boltzmann weight.However, the Boltzmann weight is a complex value in some cases even if the partition function is still real.This problem is called the sign problem.A typical example is quantum chromodynamics with a finite quark chemical potential (µ), reviewed in Refs.[1,2].Another example of a discrete spin system is the Hubbard model away from the half-filling [3]. The sign problem can be milder by the path optimization method or the sign-optimized manifold [4][5][6], which has a close relationship with the Lefschetz thimble method [7].Both are the so-called complexified dynamical variable approaches based on Cauchy's integral theorem, which ensures the independence of the expectation value via modification of the integral path as long as the integrand is an entire function with no contribution at infinity.If we have an integral representation of the partition function, such as quantum field theories with continuous degrees of freedom, a sign-problemreduced integral path could be on the complexified dynamical variable plane.The path optimization method utilizes machine learning to determine the optimized integration path.The path optimization works well for several models: a simple Gaussian model [4], the 1 + 1 dimensional complex λφ 4 theory [8], the Polyakov-loop extended Nambu-Jona-Lasinio model [9,10], the 1+1 and 2+1 dimensional Thirring model [5,11], the 0 + 1 dimensional Bose gas [6], the 0 + 1 dimensional QCD [12], the two-dimensional U(1) gauge theory with complexified coupling constant [13][14][15], the 2+1 dimensional XY model [16].It is also employed for error reduction of observables [17,18].The recent progress of the complexified dynamical variable approach is reviewed in Ref. [19]. For the spin models, on the other hand, we have a sum in the partition function, instead of the integral.We cannot directly apply the complexified dynamical variable approach.A solution is the Hubbard-Stratonovich transformation.It converts the sum to the integral using the auxiliary field.We demonstrate it in the onedimensional classical (Lenz-)Ising model with a complex coupling constant.Since we have the analytic result, we can judge the correctness of results by the path optimization method.Another famous example is the Hubbard model away from half-filling [3].In this case, the complexified dynamical variable approach is feasible; for example, see the review [20]. 
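The displayed equations of the formulation section are not reproduced in this extracted text. For orientation, the standard textbook forms that the discussion relies on are sketched below in the paper's notation; signs and normalizations follow the usual conventions and may differ in detail from the paper's own equations.

% Standard forms assumed by the discussion (illustrative; not the paper's numbering).
\begin{align*}
  H &= -J \sum_{i=1}^{N} \sigma_i \sigma_{i+1} - h \sum_{i=1}^{N} \sigma_i ,
      \qquad \sigma_i = \pm 1,\quad \sigma_{N+1} = \sigma_1 ,\\
  H &= -\tfrac{1}{2}\, s^{\mathsf T} K s - h \sum_{i} \sigma_i
      \qquad \text{($K$ the symmetric connectivity matrix)} ,\\
  e^{\frac{1}{2} s^{\mathsf T} K s}
    &\propto \int_{\mathbb{R}^N} d^N\phi\;
      e^{-\frac{1}{2}\phi^{\mathsf T} K^{-1} \phi + \phi^{\mathsf T} s}
      \qquad \text{(Hubbard--Stratonovich, $K$ positive definite)} .
\end{align*}

The positive-definiteness requirement in the last line is the reason for the constant shift of K introduced in the formulation section.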
In this paper, we apply the path optimization method [4,8] to the Ising model with the complex coupling constant.The dynamical variables are replaced by the Hubbard-Stratonovich transformation as in Refs.[21,22].We also introduce parallel tempering to the path optimization method [13] toward control of the global sign problem, as first applied in the Lefschetz thimble method [23,24]. This paper is organized as follows.In Sec.II, we explain the formulation of the Ising model with the complex coupling constant, the Hubbard-Stratonovich transformation, and the path optimization method.The numerical setup and results are shown in Sec.III and Sec.IV, respectively.Section V is devoted to a summary. II. FORMULATION We employ the one-dimensional classical Ising model with a complex coupling constant as a laboratory to investigate the sign problem in spin models.The sign problem is induced by the imaginary part of the external field.We first explain the integral representation of the Ising model through the Hubbard-Stratonovich transformation.We then explain the application of the path optimization method to the model. A. Ising model with complex coupling constant The Hamiltonian of the classical one-dimensional (Lenz-)Ising model with an external magnetic field [25,26] is given by where J is a coupling constant for the nearest-neighbor spins, h is strength of the external magnetic field, and ; this is the one-dimensional Ising chain.We impose the periodic boundary condition, σ 0 = σ N and σ N +1 = σ 1 .The Hamiltonian H can be represented in matrix style as where K is the symmetric connectivity matrix, s is the spin matrix defined as s = (σ and . This representation can be also applied to higher-dimensional systems by using a suitably constructed symmetric connectivity matrix.The coefficient 1/2 in Eq. ( 2) is introduced to avoid double counting of the nearest-neighbor interaction when we make K symmetric. The sign problem arises from the imaginary part of J.The realistic Ising model does not have such an imaginary part, but is sometimes introduced for analysis of the Lee-Yang zeros [27] or the Fisher zeros [28].Such an imaginary part also naturally arises when we consider the QCD-like Potts model [29][30][31], as discussed in Appendix A The partition function of the Ising model is where the sum takes over all possible states.The inverse temperature β can be absorbed into H by replacing J ′ = βJ and h ′ = βh. B. Hubbard-Stratonovich transformation With the expression (2), we can use the Hubbard-Stratonovich transformation as in Ref. [22]; where the normalization constant N is defined by It should be noted that the eigenvalue of K must be positive for the Hubbard-Stratonovich transformation.We thus put a constant shift for K as where I is the unit matrix and the constant C takes the same sign as that of J.The C-independence of the physical result is confirmed in Ref. [22].If we set C > n where n is the maximum number of nearest neighbors of one site, K is positive definite; n = 2 for the one-dimensional Ising model.The final form of the partition function becomes where N ′ includes a contribution of N and C, which is irrelevant in the evaluation of the expectation values.We can consider H ′ as the effective Hamiltonian in molecular dynamics.The expectation value of magnetization for the single spin is obtained as The analytic result of the magnetization [26] is known as where λ ± are the eigenvalues of the transfer matrix of the model, C. 
Path optimization method The path optimization method [4][5][6] is proposed as a complex dynamical variable approach for the path integral formulation to control the sign problem via machine learning.Although the path optimization method does not need initial teacher data, the effectiveness of the modified path can be automatically evaluated in the learning part. In the path optimization method, we first complexify the dynamical variable v ∈ R N as where v R , v I ∈ R N .This procedure means modification of the integral path on the complexified dynamical variable plane.There are several ways to express the modified integral path minimizing the sign problem.We use the representation constructed by the neural network; the input is v = v R , and the output is v I .The actual procedure is as follows: The output layer is where L is the total number of layers.The hidden layer is composed of where Weight w and bias b are the parameters of the neural network optimized by the back-propagation method with the appropriate cost function.The activation function is the hyperbolic tangent, f (•) = tanh(•).In this work, we basically employ the following cost function where J (v R ) is Jacobian, S(v ′ ) represents the action composed of v ′ .θ 0 means the phase of the partition function.Since we do not know the exact value of θ 0 , we estimate it iteratively in the learning process.We can employ other cost functions; we introduce a penalty term as shown later. The actual procedure is as follows: 1. Generate configurations on the original path 2. Update the neural network parameters using the generated configurations 3. Regenerate configurations on the modified path 4. Repeat 2 and 3 to obtain a converged result Since the Boltzmann weight is still complex even on the modified path, phase reweighting is required for the probability: where O represents an observable such as magnetization. The left-hand side in Eq. ( 18) is the correct expectation value of O, while • • • pq is the phase-quenched expectation value, and Z pq is the partition function with the corresponding Boltzmann weight.The denominator of Eq. ( 18) is the so-called average phase factor (APF).If APF is exactly 1, the sign problem completely disappears.The sign problem becomes serious when the APF approaches 0. Note that the path optimization method and other sign-optimized manifold approaches usually require a Jacobian calculation to modify the integral path, which requires a high numerical cost, O(N 3 ).In this work, we consider the simple model, and thus do not introduce the reduction technique of the Jacobian calculation, but we need it for more complicated models and theories.One of the possible ways is that we completely neglect the Jacobian calculation in the learning part; this is a most drastic reduction technique because the Jacobian is completely neglected except in the evaluation part of the expectation values.In Ref. [15], such a drastic approximation is shown to work at least in the 1 + 1 dimensional U (1) gauge theory.Another treatment of the reduction of the Jacobian calculation, for example, is discussed by using the affine coupling layer in Ref. [32].No Jacobian calculation is required in the configuration generation of the worldvolume Hybrid Monte Carlo method by use of the flow equations [33,34]. D. 
Parallel tempering Since the path optimization method makes the phases of the Boltzmann weight in the partition function localize, we may encounter the global sign problem even if it seems to be absent on the original integral path.The global sign problem arises if there are some relevant contributions on the integral path which are separated by the energy barrier in the molecular dynamics.To treat the global sign problem, we consider the parallel tempering method [35][36][37], as adopted to the Lefshetz thimble method [23] and the path optimization method [13]. In this study, we introduce replicas as follows, instead of varying the temperature.We modify the integral path using the path optimization method, and we have Eq.(11).We then make replicas as where r = 1, • • • , N r and N r means the total number of replicas.The region between the original path and the modified path is divided into N r slices as replicas. The exchange probability between the rth-replica and (r + 1)th-replica is set as where E. Improvements We introduce the following three improvements to the path optimization method.Improvements are in part based on knowledge obtained in the machine learning community. First, we add the penalty term to the cost function (16), similar to the L 2 normalization, where λ is the strength of the term.The penalty term prohibits too large separations of the integral path from the original path in the training.Second, we mix the previous and regenerated configurations as 50 : 50 in the training part to make the training speed moderate; we only use the regenerated configurations in the evaluation of the expectation value.If the regenerated configurations are significantly changed compared with the previous configurations, it may violate the stability of training. Finally, we introduce the scheduler, Exponen-tialLR [38], to ensure stable training.The scheduler decreases the learning rate as the training progresses, and thus the change of the parameters in the neural network becomes mild.If the model approaches good minima, a decrease in the learning rate leads to better determination of the parameters. III. NUMERICAL SETUP We consider N = 4 spins in the one-dimensional Ising model.Our numerical codes are implemented in the framework of PyTorch [39].For evaluation of the expectation values, we generate N conf = 1000 configurations after thermalization by HMC.The trajectory length is 1 with a step size of 0.2.For the number of replicas, we employ N r = 10.The statistical error is estimated using the Jackknife method with bin size 50.Measurements are performed at each 100 trajectories.We set Re J = 1.0 and h = 0.1 ∈ R. The shift value C in K in (6) is 2 + 10 −5 . In the training part, we use the batch training [40] with the batch size 32.The number of hidden layers is L = 2 and each layer contains 64 units.We employ AdamW [41] as an optimizer.We set the strength of the penalty term λ = 1.0.The decay rate on the scheduler is γ = 0.9.In the following, we show the results with the three improvements explained in Sec.II E. The results are evaluated after the 30th training.If no significant improvement is achieved in the early stage of training, we reset the initial values of the neural network. After finishing the training, we regenerate the configurations and estimate APF and the magnetization. IV. 
NUMERICAL RESULTS Figure 1 shows the real and imaginary parts of APF for T = 0.8, 1.0 and 1.2 with Im J = 0.5 for each learning step, where each learning step contains batch training and MC update.The results in the figures are obtained without improvements explained in Sec.II E. In learning steps, the training is almost stable with large APF, but sometimes shows a sudden drop; see Appendix B for the distribution of the phase of the Boltzmann weight in the training.This problem may be solved with a large number of replicas because the bias of sampling in HMC is relaxed.We may also need a deeper neural network or a network based on physical knowledge of the model and/or theory to enhance the expressive power of the neural network.We will keep them in our future work. Figure 2 shows the real and imaginary parts of APF for T = 0.8, 1.0, and 1.2 with Im J = 0.5 for each learning step.Comparison of Fig. 1 with Fig. 2 suggests that training becomes more stable than that without the im- provements.Enhancement of APF is also observed in Fig. 3, which shows the real part of magnetization and the real and imaginary parts of APF at fixed T = 1.0 with Im J = 0.25 ∼ 1.0 for each learning step. Figure 4 shows the magnetization on the original and modified paths.Here, we consider Im J = 0.25 ∼ 1.0 with T = 0.8, 1.0 and 1.2.On the original path, the statistical errors are large due to the small APF, at least in the present number of configurations.In some regions, the error becomes very small, but the results do not reproduce the analytic result; this indicates that the HMC on the original path does not sample all relevant configurations.On the modified path constructed by the path optimization method with some improvements, we can see that the statistical errors are well reduced. V. SUMMARY In this paper, we have applied the path optimization method [4][5][6] to the (Lenz-)Ising model with a complex coupling constant, which is prepared as a laboratory to investigate the sign problem in spin models with the discretized degrees of freedom.The sum of spins is transformed into an integral using the Hubbard-Stratonovich transformation [21,22], which allows us to modify the in- tegral path on the real dynamical variable plane to that on the complex dynamical variable plane.We found that the path optimization method works in the spin model, at least in the Ising-type model.The average phase factor is enhanced on the modified integral path compared to that on the original integral path with improvements: the parallel tempering, the penalty term in the cost function, the mixed configurations in the training part, and the scheduler.On the original path, the statistical error of the magnetization can be huge, or can be underestimated even with 1000 configurations indicating lack of all relevant contributions in sampling, due to the sign problem.On the modified path by the path optimization, the expectation value of the magnetization reproduces the exact result with a well-reduced statistical error. It should be noted that the same procedure can work also in the gauge theory case if we can rewrite the sum for spins in the path integral.However, we should be careful with the gauge symmetry because it is hard to enhance the average phase factor without adequate treatment of the gauge symmetry.We may need the suitable gauge fixing [13], the gauge-invariant input [14], or the gauge-covariant network [15,42] for the path optimization method. 
Since the one-dimensional Ising model does not have a phase transition, it is interesting to apply the present method to the spin model which shows a phase transition, such as the higher dimensional Ising model and also the Potts model.While we only use machine learning to represent the integral path, we can also use it to accelerate the sampling of configurations near the phase transition point [43].We will report on these issues elsewhere.energy becomes where s is a complex vector with 2N -components consisting of Φ x and Φx , A is the 2N × 2N symmetric matrix, and where N is the number of sites.We impose the periodic boundary condition for this model. To keep the first term in Eq. (A5) real, we sum up a possible combination of Φ and Φ.The partition function is then given by where the sum takes over all possible states of the Potts spins and β is absorbed into H.Since the expression (A5) is similar to that of the Ising model , we can use the same formulation.Therefore, we can use the hybrid Monte Carlo method for the QCD-like Potts model if the effective Hamiltonian is real.The Hamiltonian becomes complex at finite µ, which causes the sign problem.It should be noted that the expectation value of the energy must be positive. Since degrees of freedom in the present model can be expressed by continuous dynamical variables, the path optimization method can be applied to the QCD-like Potts model. FIG. 1 . FIG.1.The magnetization and APF with Im J = 0.5 at T = 0.8, 1.0 and 1.2.Here, we do not introduce three improvements.The left panel shows the real part of the magnetization and the right panel shows APF.The circle and square symbols in the right panel are the results of the real and imaginary parts of APF, respectively. FIG. 2 . FIG. 2. The magnetization and APF with Im J = 0.5 at T = 0.8, 1.0 and 1.2.The three improvements are included in the training.The left panel shows the real part of the magnetization and the right panel shows APF.The circle and square symbols in the right panel are the results of the real and imaginary parts of APF, respectively. FIG. 3 . FIG. 3. The magnetization and APF with Im J = 0.25, 0.5, 0.75 and 1.0 at T = 1.0.The three improvements are included in the training.The left panel shows the real part of the magnetization and the right panel shows APF.The dotted line in the left panel denotes the analytic result.The circle and square symbols in the right panel are the results of the real and imaginary parts of APF, respectively. FIG. 4 . FIG. 4. The magnetization with Im J = 0.25 ∼ 1.0 at T = 0.8, 1 and 1.2.The three improvements are included in the training.The left (right) panel is the result on the original (modified) path. FIG. 5 . FIG. 5.The top (bottom) panel shows the histogram for the phase of the Boltzmann weight (un-reweighted magnetization) with Im J = 0.5 at T = 1.0 for the 0th, 10th and 20th learning steps from the left to the right panel without the improvements.
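As a concrete illustration of the reweighting step described in Sec. II C, the following is a minimal numerical sketch of the phase-reweighted estimator and the average phase factor (APF). It is not the authors' PyTorch code; the residual phase, the observable, and the sample values are placeholders, and only the reweighting logic is shown.

# Minimal sketch of phase reweighting and the average phase factor (APF).
# theta[k] is the residual phase of the Boltzmann weight (including the Jacobian
# phase) on configuration k sampled with the phase-quenched weight.

import numpy as np

def reweighted_expectation(obs: np.ndarray, theta: np.ndarray):
    """Return (<O>, APF) with <O> = <O e^{i theta}>_pq / <e^{i theta}>_pq."""
    w = np.exp(1j * theta)
    apf = w.mean()          # |APF| close to 1 means a mild sign problem
    return (obs * w).mean() / apf, apf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta = 0.3 * rng.standard_normal(1000)          # hypothetical residual phases
    mag = 0.05 + 0.02 * rng.standard_normal(1000)    # hypothetical per-configuration magnetization
    m, apf = reweighted_expectation(mag, theta)
    print(f"|APF| = {abs(apf):.3f},  Re<m> = {m.real:.4f}")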
4,599
2023-09-12T00:00:00.000
[ "Physics" ]
Petrov Classification and holographic reconstruction of spacetime Using the asymptotic form of the bulk Weyl tensor, we present an explicit approach that allows us to reconstruct exact four-dimensional Einstein spacetimes which are algebraically special with respect to Petrov's classification. If the boundary metric supports a traceless, symmetric and conserved complex rank-two tensor, which is related to the boundary Cotton and energy-momentum tensors, and if the hydrodynamic congruence is shearless, then the bulk metric is exactly resummed and captures modes that stand beyond the hydrodynamic derivative expansion. We illustrate the method when the congruence has zero vorticity, leading to the Robinson-Trautman spacetimes of arbitrary Petrov class, and quote the case of non-vanishing vorticity, which captures the Plebanski-Demianski Petrov D family. Introduction Navier-Stokes and Einstein's equations are sets of non-linear equations, which appear to be closely related. This was first noticed in the framework of black holes, where Navier-Stokes describe the dynamics of horizon perturbations [1]. More recently, holography has shed new light in their relationship via the so-called fluid/gravity correspondence in asymptotically anti-de Sitter spacetimes [2]. In this case, fluid dynamics resides on the conformal boundary, and it corresponds to a relativistic conformal fluid described in terms of its traceless and conserved energy-momentum tensor. The connection between the incompressible horizon fluid and the fluid at the conformal boundary is realized using the holographic renormalization group [3,4]. Furthermore, an incompressible fluid can also be generically defined in the region between these two extreme points. Its dynamics, i.e. the conservation of its energy-momentum tensor, is inherited from the bulk Einstein's momentum constraints, while the Hamiltonian constraint, at leading order of a large-mean-curvature expansion, is interpreted as the equation of state [5][6][7]. Such a fluid interpretation is rather formal and may not always be physically accurate since Navier-Stokes equations appear only at first order of a derivative expansion. Moreover, as pointed out in these references, the fluid and gravity degrees of freedom match only under the assumption that the Einstein geometry is algebraically special in Petrov's classification. The evolution of geometry from the boundary towards the bulk can be formulated as an ADM-type Hamiltonian system which, as usual, requires two pieces of fundamental holographic data. For pure gravity dynamics, one piece is the boundary metric and the other one is the energy-momentum tensor. If the boundary system is in the hydrodynamic regime, the energy-momentum tensor describes a conformal, non-perfect fluid, but this needs not be true in general for the Hamiltonian evolution scheme to hold. Irrespective of its physical interpretation, the boundary metric together with the energy-momentum tensor allows us to reconstruct the Einstein bulk spacetime. The boundary metric and the boundary energy-momentum tensor are read off in the Fefferman-Graham expansion of the bulk metric, as leading and subleading terms, respectively [8,9]. In principle, given the two independent pieces of boundary data the bulk can be reconstructed order by order using the Fefferman-Graham series. Alternatively, this reconstruction can be achieved with the help of a derivative expansion. The latter was originally proposed in [10][11][12], and is based on the black-brane paradigm. 
From the bulk perspective, it assumes the existence of a null geodesic congruence defining tubes that extend from the boundary inwards. On the boundary, this congruence translates into a timelike congruence, and the aforementioned derivative series expansion is built on increasing derivative order of this field. At the perturbative level, the fluid interpretation is applicable and the boundary timelike congruence is always identified with the boundary fluid velocity field. Beyond the perturbative framework, however, this interpretation is not faithful due to the presence of non-hydrodynamic modes in the boundary energy-momentum tensor. In general, from a boundary-to-bulk perspective, it is unlikely that one could explicitly resum either expansion -the Fefferman-Graham or the derivative -and the generic bulk solution can be achieved only in a perturbative manner. 1 It makes however sense to pose the following question: given a class of boundary metrics, what are the conditions it should satisfy, and which energy-momentum tensor should it be accompanied with in order for an exact dual bulk Einstein space to exist? The aim of the present note is to provide a constructive answer to the above question in the case of four-dimensional Einstein spaces. Of course, at this stage, one may wonder why an answer should even exist. Actually, the resummability of the derivative expansion, irrespective of the dimension, was observed in the original papers [11] for the Kerr black holes. This property was latter shown to hold more systematically in four dimensions, even in the presence of a nut charge, which accounts for asymptotically locally anti-de Sitter spacetimes [14]. This is achieved by including an infinite, though resummable series of terms built on the boundary Cotton tensor [15,16]. There, the requirement was that the Cotton tensor of the boundary metric be proportional to the energy-momentum tensor, itself being of a perfectfluid form. This kind of ansatz unifies all known black-hole solutions with nut charge and rotation, and even allows us to find some new ones (in spirit, this is what happens e.g. when imposing curvature self-duality in Euclidean gravity or in Yang-Mills theories). It is not expected, however, to exhaust all possibilities, and many Einstein spaces with a rich holographic content are left aside. It is therefore reasonable to attempt finding a generic pattern that guarantees the existence of a bulk dual for appropriately chosen sets of boundary data. Here, we will present a general boundary ansatz, which gives us more exact solutions of Einstein's equations in the bulk. Remarkably, we are able to show that our ansatz unifies the above quoted black-hole solutions, described by perfect fluids, with e.g. Robinson-Trautman solutions, whose holographic dual is highly far from equilibrium. Resummation generates therefore non-perturbative effects i.e. non-hydrodynamic modes. The common feature of all these solutions is that they are algebraically special with respect to Petrov's classification: within the proposed method, the Weyl tensor of the four-dimensional bulk is controlled from the boundary data, and turns out to be always at least of type II. The implications of our work are threefold. 
Firstly, Einstein's equations are generically non-integrable and the above procedure aims at unravelling integrable sectors in the phase space of solutions, based on appropriate mappings onto integrable dynamical sectors in the dual field theory, such as integrable configurations of Euler's equations for relativistic fluids. Secondly, such a mapping may provide a powerful solution-generating technique, as opposed to standard Geroch-like methods valid in the presence of isometries, which are of limited use in asymptotically anti-de Sitter spaces (see [17] for a recent attempt). Thirdly, we can derive an large amount of non-trivial information about holographic strongly coupled field theories: for example in Ref. [16] it was shown that the existence of exact solutions with perfect-fluid like equilibrium in the perfect-Cotton boundary geometries implied that infinitely many transport coefficients of a special kind should vanish in the dual field theories. Enlarging the class of exact solutions with a specific relationship between the boundary data, automatically enables us to obtain highly non-trivial information of multi-point thermal correlation functions of the energy-momentum tensor, even far from the hydrodynamic regime. The organisation of the paper is as follows. In the first section we present our ansatz for shaping the boundary data in a manner that guarantees the resummability of the derivative expansion. The relationship between the bulk Weyl tensor and the boundary Cotton and energy-momentum tensors is also clarified. Our approach is constructive and duly motivated, but the formal proof answering the question raised above is left aside and will appear in a separate publication. Instead, we illustrate it in Sec. 2 by constructing an exact four-dimensional solution of Einstein's equations step by step from an appropriate set of boundary data. 1 Bulk reconstruction from boundary data The boundary quantities Consider a three-dimensional spacetime playing the role of the boundary, equipped with a metric ds 2 = g µν dx µ dx ν (µ, ν, . . . = 0, 1, 2) and with a symmetric, traceless and covariantly conserved tensor T = T µν dx µ dx ν . We assume for this tensor the least requirements for being a conformal energy-momentum tensor [18], and consider systems for which it can be put in the form with the a perfect-fluid part The timelike congruence u = u µ (x)dx µ is normalized (u µ u µ = −1) and defines the fluid lines. The tensor Π captures all corrections to the perfect-fluid component, i.e. hydrodynamic and non-hydrodynamic modes. The hydrodynamic part is the viscous fluid contribution, which can be expressed as a series expansion with respect to derivatives of u. The first derivatives of the velocity field are canonically decomposed in terms of the acceleration a, the expansion Θ, the shear σ and the vorticity 2 In the Landau frame, the hydrodynamic component of Π is transverse to u. The full Π is not transverse but The latter is the local energy density, related to the pressure via the conformal equation of state ε = 2p. However, it should be stressed that the presence of a non-hydrodynamic component tempers the fluid interpretation. In particular, it is not an easy task to extract the congruence u, because its meaning as a vector tangent to fluid lines becomes questionable. Another important structure in three spacetime dimensions, where the Weyl tensor vanishes, is the Cotton tensor 3 with ∆ µν = u µ u ν + g µν the projector onto the space orthogonal to u. 
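The displayed definition of the Cotton tensor does not appear in the extracted text. One common three-dimensional convention is the following; the overall sign and normalization depend on the convention, which the paper fixes in footnote 3, so this is for orientation only.

% One common convention for the three-dimensional Cotton tensor (illustrative).
\begin{equation*}
  C^{\mu\nu} \;=\; \eta^{\mu\rho\sigma}\,\nabla_{\rho}
  \Big( R^{\nu}{}_{\sigma} - \tfrac{1}{4}\, R\, \delta^{\nu}{}_{\sigma} \Big),
  \qquad \eta^{\mu\rho\sigma} = \frac{\epsilon^{\mu\rho\sigma}}{\sqrt{-g}} ,
\end{equation*}

which is symmetric, traceless and covariantly conserved, as stated below.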
3 The Cotton and Levi-Civita are pseudo-tensors, i.e. they change sign under a parity transformation. It is therefore important to state the convention in use. with η µνσ = ǫ µνσ / √ −g . This tensor vanishes if and only if the spacetime is conformally flat. It shares the key properties of the energy-momentum tensor, i.e. it is symmetric, traceless and covariantly conserved. For later reference we introduce a contraction analogous to the energy density (1.4), (1.6) Bulk Petrov classification and the resummability conditions The four-dimensional Weyl tensor can be classified into distinct types, i.e. according to the algebraic Petrov types. For an Einstein space (with a given sign of the Ricci curvature) this provides a complete classification of the curvature tensor. In order to establish a connection with the three-dimensional boundary data it is useful to recall how the algebraic Petrov classification is obtained from the eigenvalue equation for the Weyl tensor. In particular, the Weyl tensor and its dual can be used to form a pair of complex-conjugate tensors. Each of these tensors has two pairs of bivector indices, which can be used to form a complex two-index tensor. Its components are naturally packaged inside a complex symmetric 3 × 3 matrix Q with zero trace (see e.g. [19] for this construction). This matrix encompasses the ten independent real components of the Weyl tensor and the associated eigenvalue equation determines the Petrov type. Performing the Fefferman-Graham expansion of the complex Weyl tensor Q ± for a general Einstein space, one can show that the leading-order ( 1 /r 3 ) coefficient, say S ± , exhibits a specific combination of the components of the boundary Cotton and energy-momentum tensors. 4 The algebraic Segre type of this combination determines precisely the Petrov type of the four-dimensional bulk metric and establishes a one-to-one map between the bulk Petrov type and the boundary data. Assume now that we wish to reconstruct the Einstein bulk spacetime from a set of boundary data. Given a three-dimensional boundary metric, one can impose a desired canonical form for the asymptotic Weyl tensor S ± , as e.g. a perfect-fluid form (type D) or matter-radiation form (type III or N) or a combination of both (type II) (see e.g. [22] for these structures). Doing so, not only do we design from the boundary the special algebraic structure of the bulk spacetime, but we also provide a set of conditions that turn out to guarantee the resummability of the perturbative expansion into an exact Einstein space. This is our central result as it answers the question asked earlier in the introduction. The rest of the paper will be devoted to making this statement as clear as possible and illustrating it with robust examples. It turns out that it is somehow easier to work with a different pair of complex-conjugate where k is a constant and T ± is related to S ± by a similarity transformation: T ± = P S ± P −1 with P = diag(∓i, −1, 1). Choosing a specific canonical form for these tensors, and assuming a boundary metric ds 2 , we are led to two conditions. The first, provides a set of equations that the boundary metric must satisfy: The second delivers the boundary energy-momentum tensor it should be accompanied with for an exact bulk ascendent spacetime to exist: The tensors given in Eq. 
(1.7) are by construction symmetric, traceless and conserved: We will refer to them as the reference energy-momentum tensors as they play the role of a pair of fictitious conserved boundary sources, always accompanying the boundary geometry. It turns out that the particular combination (1.7) of the energy-momentum and Cotton tensors is exactly the combination one finds if the the Weyl tensor is decomposed into self-dual and anti-self-dual components, which given the Lorentzian signature are complexconjugate. These are nicely captured in the Cahen-Debever-Defrise 5 decomposition. Finally, we note that some care must be taken when working with T ± instead of S ± . Indeed, the eigenvalues are equal, but not necessarily their eigenvectors. In particular, this means that one cannot determine the Petrov type unambiguously if considering the eigenvalue equation for T ± . 6 The derivative expansion and its ressumation ansatz We have listed in the previous section all boundary ingredients needed for reaching holographically exact bulk Einstein spacetimes. We would like here to discuss their actual reconstruction. We will use for that the derivative expansion, organized around the derivatives of the boundary fluid velocity field u. This expansion assumes small derivatives, small curvature, and small higher-derivative curvature tensors for the boundary metric. This limitation is irrelevant for us since we are ultimately interested in resumming the series. A related and potentially problematic issue, is the definition of u, which is not automatic when the boundary energy-momentum tensor T is not of the fluid type. In that case u should be considered as an extra ingredient of the ansatz, a posteriori justified by the success of the resummation. The guideline for the reconstruction of spacetime based on the derivative expansion is Weyl covariance [10,11]: the bulk geometry should be insensitive to a conformal rescaling of the boundary metric ds 2 → ds 2 /B 2 . The latter is accompanied with C → B C, and at the same time T → B T, u → u /B (velocity one-form) and ω → ω /B (vorticity two-form). Covariantization with respect to rescalings requires to introduce a Weyl connection oneform: which transforms as A → A − d ln B. Ordinary covariant derivatives ∇ are thus traded for Weyl-covariant ones D = ∇ + w A, w being the conformal weight of the tensor under consideration. In three spacetime dimensions, Weyl-covariant quantities are e.g. is Weyl-invariant. Notice also that for any symmetric and traceless tensor S µν dx µ dx ν of conformal weight 1 (like the energy-momentum tensor and the Cotton tensor) has In the present analysis, we will be interested in situations where the boundary congruence u is shear-free. Despite this limitation, wide classes of dual holographic bulk geometries remain accessible. Vanishing shear simplifies considerably the reconstruction of the asymptotically AdS bulk geometry because it reduces the available Weyl-invariant terms. As a consequence, at each order of Du, the terms compatible with Weyl covariance of the bulk metric ds 2 bulk are nicely organized. 
Even though we cannot write them all at arbitrary order, the structure of the first orders suggests that resummation, whenever possible, should lead to the following [10-12, 15, 16, 23]: Here r the radial coordinate whose dependence is explicit, x µ are the three boundary coordinates extended to the bulk, on which depend implicitly the various functions, η µνσ = ǫ µνσ / √ −g , κ = 3k /8πG, k a constant, and Σ is displayed in (1.14). Finally, 7 performs the resummation as the derivative expansion is manifestly organized in powers of q 2 = 2ω αβ ω αβ . This structure is inferred by the first orders, which are the ones that have been explicitly determined in Refs. [11,15]. Several remarks are in order here. Being algebraically special, the spacetimes at hand must admit a null, geodesic and shear-free congruence, as stated in the Goldberg-Sachs theorem. The congruence u in the bulk is null and geodesic, and becomes timelike and shearfree (but not longer necessarily geodesic) on the boundary, where it identifies with the fluid velocity field. It turns out to be indeed shear-free everywhere in the bulk, provided the conditions (1.8) and (1.9) are fulfilled. The absence of shear for the boundary fluid congruence seem therefore to be intimately related to the resummability of the derivative expansion into an algebraically special Einstein space. This is in agreement with the fact that the large number of Weyl-covariant tensors available when the shear is non-vanishing, makes it unlike that the resummation occurs. As already stressed previously, the energy-momentum tensor T, obtained in the proce- for concreteness a boundary metric of the form (1.18) where P and Ω are arbitrary real functions of (t, ζ,ζ), and b = B(t, ζ,ζ) dζ +B(t, ζ,ζ) dζ. (1.19) This is actually the most general three-dimensional metric, 8 as we make no assumption regarding isometries, with a specific choice of local frames. Part of our resummation ansatz is to assume that the boundary frame has been adapted to the fluid shear-free congruence, so In the latter expression we have introduced ε(x) and c(x) defined in (1.4) and (1.6) (x refers to the coordinates t, ζ,ζ common for bulk and boundary). The boundary metric and the reference energy-momentum tensors The resummation method presented here generalizes previous successful attempts to reconstruct exact Einstein spaces from boundary data [15,16]. In these works, the boundary metric was of the type (1.18) with two commuting Killing vectors, and the energy-momentum tensor was perfect-fluid and proportional to the Cotton tensor. This is a particular case of our present ansatz, with hydrodynamic boundary state (see Sec. 2.3). We will now move to a 8 We could even set Ω = 1, without spoiling the generality. 9 The Hodge duality is here meant with respect to the three-dimensional boundary: * (u ∧ dq) = η νσ µ u ν ∂ σ q dx µ . different situation and consider a specific family of boundary geometries, namely those with This is not the most general three-dimensional metric because it follows from (1.18) with Ω = 1 and b = 0. As we will see soon, it turns out to enable the holographic reconstruction of Robinson-Trautman Einstein metrics of all Petrov types. Here the Cotton tensor, computed using (1.5), reads: is the Gaussian curvature of the surfaces at constant t divided by k 2 . This tensor is real. 
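The explicit form of the boundary metric (2.1) and of the curvature function K is not reproduced in the extracted text. A form consistent with the quantities quoted in the surrounding discussion (the expansion Θ = −2∂_t ln P, the Laplacian Δ = 2P²∂_ζ∂_ζ̄, K = Δ ln P, and the statement that K is the Gaussian curvature divided by k²) is the Robinson-Trautman-type ansatz below; the placement of the constant k is an inference and may differ from the paper's Eq. (2.1).

% Robinson--Trautman-type boundary data assumed in Sec. 2 (placement of k inferred).
\begin{equation*}
  \mathrm{d}s^2 = -\mathrm{d}t^2
  + \frac{2}{k^2 P^2(t,\zeta,\bar\zeta)}\,\mathrm{d}\zeta\,\mathrm{d}\bar\zeta ,
  \qquad
  K = \Delta \ln P, \quad
  \Delta = 2P^2\,\partial_\zeta\partial_{\bar\zeta}, \quad
  \Theta = -2\,\partial_t \ln P .
\end{equation*}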
We must now introduce a canonical reference energy-momentum tensor T ± and apply the strategy displayed above: (i) impose conservation (1.10); (ii) constrain the boundary metric using (1.8) and determine the actual energy-momentum tensor with (1.9); (iii) reconstruct the bulk Einstein space using (1.21). Equations (1.10) and (1.8) are expected to guarantee that Einstein's equations are fulfilled and provide information on the reached Petrov class. The latter must be related to the choice of reference tensor T ± . There are two basic and distinct cases will be considered here. Perfect-fluid form. For perfect-fluid reference tensors, it is necessary to introduce two complex-conjugate reference velocity congruences u ± . It is not useful to analyze the most general velocity congruences, but the most typical ones, keeping in mind that a redundancy is expected to exist, making different-looking boundary data correspond to identical Einstein spaces. Consider the normalized reference congruence 10 the physical congruence, α + = α + (t, ζ,ζ), and its complex-conjugate u − = u + α − P 2 dζ with α − = α + * . The physical congruence u is shear-free, has no vorticity, no acceleration but is expanding at a rate Θ = −2∂ t log P. (2.6) A perfect-fluid energy-momentum tensor based on these reference congruences reads: with M − = M * + . The choice of the functions α ± (t, ζ,ζ) and M ± (t, ζ,ζ) should be restricted so that T ± is conserved i.e. (1.10) is fulfilled. At this stage we may pose and ask a generic question: given a velocity congruence with expansion and acceleration, can one find a pressure field such that the corresponding traceless perfect-fluid energy-momentum tensor is conserved? The analysis of that question is performed in App. A and the answer is the following: a pressure locally exists if and only if the Weyl connection constructed out of the velocity, the expansion and the acceleration is flat (zero exterior derivative). If furthermore the Weyl connection is vanishing, the pressure is a constant. Here and similarly for the "+" by complex conjugation. We must impose dA ± = 0 and determine the reference pressures p ± (t, ζ,ζ) such that A ± = d ln p −1 /3 ± . The closure of A ± can be worked out systematically, with a simple generic solution. Using (2.8) we find that the functions α ± must be factorized: Using (2.9), the latter can be written explicitly as In conclusion, a congruence solving Euler equations is characterized by two functions and their complex conjugates:α ± (ζ,ζ) and The product (2.10) must satisfy Eq. (2.12). Radiation-matter form. Consider finally (2.14) In this expression β and γ are a priori functions of t, ζ andζ. The tensor is the symmetrized direct product of a light-like by a time-like vector. Its conservation enforces the dependence β = β(t, ζ) and the condition Notice that for vanishing β, we obtain a pure-radiation tensor i.e. the square of of a null vector. Resummation: the Robinson-Trautman Einstein spaces The preceding analysis has not put any restriction on the boundary metric (2.1). It simply selected the appropriate ingredients for a reference tensor to be conserved. We will consider a general conserved reference tensor of the form the three components being given in Eqs. (2.7) and (2.14). For this combination, and we can now require (1.8). The first observation is that this identification of the Cotton tensor requires which we will name M(t), a real function. Furthermore, it appears a pair of independent conditions plus their complex-conjugates. 
The first reads: Mk 4 (α + ) 2 P 4 + γ = 0 and c.c. , (2.19) while the second is where ∆ = 2P 2 ∂ζ ∂ ζ . This is a differential equation for the boundary metric ds 2 given in (2.1), and for M(t). It should be interpreted as an integrability condition for the resummed series expansion (1.16) to be exactly Einstein. We can now proceed and determine the bulk metric ds 2 res. , using (1.16). For that we need the energy-momentum tensor, given in terms of the reference tensor (2.16) by (1.9). Inserting (2.18) as well as the algebraic conditions (2.19) and (2.20) in the latter, we obtain the boundary energy-momentum tensor exclusively in terms of the metric data P, K and the function M(t): This tensor can be put in the form (1.1), (1.2) with u in (2.5) and ds 2 in (2.1). The energy density is determined using (1.4) and (2.5): , (2.23) and is time-dependent. The non-perfect component reads: and contains both hydrodynamic and non-hydrodynamic components. Putting everything together (here ω and q vanish 11 ) we obtain (1.21) with In order to close this analysis, we would like to come back to the two algebraic equations. (2.19) and (2.20) that the functions entering the boundary energy-momentum tensor should satisfy. In order to clarify their role, it is appropriate to remind that the bulk Robinson-Trautman metric is algebraically special, i.e. generically Petrov type II. Choosing the bulk 11 Note also that R = 2k 2 K and Σ = −k 2 Kdt 2 . null tetrad as in (2.25), the non-vanishing components of the Weyl tensor are The direction k, which on the boundary becomes the time-like congruence u, is generically a doubly degenerate principal null direction because the conditions (2.19) and (2.20) leave enough freedom on the a priori arbitrary functions M(t),α ± (ζ,ζ), β(t, ζ) and γ(t, ζ,ζ) to avoid any constraint on the functions P(t, ζ,ζ) or K = ∆ ln P. We may however tune the various functions defining the reference energy-momentum tensor (2.16), in order to increase the degeneracy of the bulk principal null direction, and explore in a boundary-controlled manner other Petrov bulk geometries. • Set M(t) = 0. This amounts to keeping 13 a purely radiation-matter reference energymomentum tensor. Now (2.19) reads: defines β(t, ζ), but also contrains K since now From the bulk perspective, the vanishing Bondi mass reads Ψ 2 = 0. Together with Eq. (2.32), these are precisely the conditions for the Robinson-Trautman be Petrov type III (see [19]). The principal null direction k is now triply degenerate. Two remarks are in order here. The first concerns the actual solutions of Robinson-Trautman Petrov type D. These are the Schwarzschild AdS and the C-metric AdS, which also belongs to the class of Plebański-Demiański, with black-hole acceleration parameter. The Plebański-Demiański, without black-hole acceleration parameter has been obtained along the lines of though of the present work in [16]. The C-metric holography, has also been analyzed in [27,28], and reveals many interesting peculiarities. The second remark is that a pure perfect-fluid reference energy-momentum tensor with arbitrary congruences u ± would have led to the condition (2.37) only. Without (2.38) the bulk would have been type II -not the most general though. This apparent violation of the one-to-one correspondence between canonical classes of boundary tensors and bulk Petrov types is due to the fact that we are using the reference tensors T ± instead of S ± , as explained in Sec. 1.2. • Finally, we can simply set T ± = 0. 
This case is somehow degenerate. Indeed, according to (1.8), the boundary has vanishing Cotton tensor and is thus conformally flat. So is the bulk since Ψ i = 0 for all i = 0, . . . , 4. The bulk Robinson-Trautman is now Petrov type O, which reduces to pure four-dimensional anti-de Sitter spacetime. Adding vorticity: towards Plebański-Demiański The hydrodynamic congruence carries vorticity when allowing for non-trivial b in (1.20). In this instance a genuine resummation operates in (1.16) because ρ = r (see (1.17)). Boundary data of this kind were discussed in [16] together with their resummed exact ascendents, demonstrating the power of the resummation. In the cases at hand, the boundary metric is (1.18) with Ω = 1 and the vector ∂ t tangent to the hydrodynamic congruence is assumed to be a Killing vector. This makes u = −dt + b geodesic, shear-and expansion-free with vorticity ω = 1 2 db. The reference tensors T ± are chosen to be of the perfect-fluid form T ± pf given in (2.7) with equal velocity fields u + = u − = u. Being geodesic and expansion-free, they allow the conservation of this tensor with constant M ± (see App. A): perfect-fluid physical boundary energy-momentum tensor. The latter statement shows that the boundary state is purely hydrodynamic with many vanishing transport coefficients, whereas the former leads to a family of boundary metrics depending on two real parameters, with hyperbolic, flat or spherical spatial parts. The resulting bulk geometry (1.16) turns out to be the general AdS-Kerr-Taub-NUT black-hole spacetime with hyperbolic, flat or spherical horizon, depending on three real pa-rameters: the mass m, the angular velocity a and the nut charge n. 15 The nut charge and the angular velocity a are encapsulated inside the constant c in (2.40). These geometries belong to the most general Petrov D class of Einstein solutions having two Killing vectors, namely the Plebański-Demiański family [31]. As already mentioned when quoting the C-metric at the end of Sec. 2.2, the Plebański-Demiański class has an extra physical parameter (the acceleration parameter), which can be introduced from the boundary perspective by relaxing the requirement u + = u − . The details of this case will appear elsewhere. Conclusions In order to put our results in perspective, let us come back to the original question asked in the introduction: given a class of boundary metrics, what are the conditions it should satisfy, and which energy-momentum tensor should it be accompanied with in order for an exact dual bulk Einstein space to exist? Our answer to this question is based on three steps and four equations: • The first step consists in choosing a set of two complex-conjugate reference tensors T ± , symmetric, traceless and satisfying the conservation equation (1.10). • Next, this tensor enables us (i) to set conditions on the boundary metric by imposing its Cotton be the imaginary part of T ± (up to constants), Eq. (1.8); (ii) to determine the boundary energy-momentum tensor as its real part, Eq. (1.9). • Finally, using these data and Eq. (1.21), we reconstruct the bulk Einstein space. Several comments are in order here for making the picture complete. Equation (1.21) is obtained using the derivative expansion, which is an alternative to the Fefferman-Graham expansion and better suited for our purposes. As such, it assumes that the boundary state is in the hydrodynamic regime, described by an energy-momentum tensor of the fluid type. 
The latter has a natural built-in velocity field, interpreted as the fluid velocity congruence. Our method (first and second steps), however, does not necessarily lead to a fluid-like energy-momentum tensor. This is not a problem of principle, because non-perturbative contributions with respect to the derivative expansion (non-hydrodynamic modes) are indeed expected to emerge along with a resummation [32]. In practice, though, it requires an extra piece of information regarding the velocity field around which the hydrodynamic modes are organized. To face this issue, we made the most economical choice, with a fluid at rest (Eq. (1.20)) in the natural frame associated with the coordinates in use in the boundary metric (1.18). This choice is in agreement with the assumption of absence of shear, crucial for eliminating many terms in the derivative expansion of the bulk metric and making it resummable. (Footnote 15: Bulk angular velocity and nut charge both act as sources for boundary vorticity [29,30].) We have not formally proven that the three-step procedure proposed here indeed leads to Einstein spaces. However, our approach makes it clear that canonical boundary reference tensors guarantee that the bulk is algebraically special. As a bonus, it is possible to set up a precise relationship between the Segre type of the reference tensor and the Petrov type of the bulk Weyl tensor. Many examples illustrate how the method works in practice, and we have presented here the reconstruction of generic boundary data with a vorticity-free congruence. These lead to the whole family of Robinson-Trautman bulk Einstein spaces. The formal proof of the constructive method presented in this paper will be released in the future. Besides that technical development, which we have chosen to avoid here, several other issues deserve further investigation. This effort aims at better understanding how the bulk is controlled from the boundary beyond any perturbative expansion. We know, e.g., that the Petrov class of the bulk is determined by the choice of the boundary reference tensors. Our working assumption was the absence of shear for the boundary hydrodynamic congruence. Would shear be an obstruction to resummability? Can one reconstruct spaces which are not algebraically special, with zero shear on the boundary? Can one better understand the interplay between the two perturbative expansions mentioned here, namely the Fefferman-Graham and the derivative ones? Finally, based on the fact that Eqs. (1.10) and (1.8) emerge as the boundary manifestation of Einstein's equations, we may wonder whether they possess some hidden symmetry à la Geroch, which would relate them to integrable boundary data (see [17] and the original references cited there). Acknowledgements The research of K. Siampos is supported by the Swiss National Science Foundation. Jakob Gath, P.M. Petropoulos and K. Siampos acknowledge the Germaine de Staël franco-swiss bilateral program 2015 (project no 32753SG) for financial support. A On perfect-fluid dynamics In this appendix, we would like to set up a useful criterion regarding the motion of conformal perfect fluids. For such fluids with velocity congruence u and pressure p(x), the three-dimensional Euler equations read:

$$\begin{cases} 2\,u(p) + 3p\,\Theta = 0,\\ u(p)\,u + \mathrm{d}p + 3p\,a = 0, \end{cases} \tag{A.1}$$

where u(p) = u^μ ∂_μ p. Combining these equations, we obtain:

$$3A + \mathrm{d}\ln p = 0, \tag{A.2}$$

where A = a − (Θ/2) u. For Eq. (A.2) to hold, we extract a simple integrability condition: the Weyl connection A must be closed (hence locally exact) for a pressure field p(x) to exist and account for the expansion and acceleration of the fluid. If A vanishes, the pressure is constant; if A is not exact, the fluid moving along the congruence u is not perfect, or the hydrodynamic regime is not even applicable.
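For completeness, here is a short check, a sketch using only the definitions above, of how the two Euler equations combine into Eq. (A.2):

```latex
% The first equation gives 3p\,\Theta = -2\,u(p).
% Inserting a = A + (\Theta/2)\,u into the second equation:
\begin{aligned}
0 &= u(p)\,u + \mathrm{d}p + 3p\,a
   = \mathrm{d}p + 3p\,A + \Big(u(p) + \tfrac{3p\,\Theta}{2}\Big)\,u
   = \mathrm{d}p + 3p\,A,
\end{aligned}
% since u(p) + 3p\Theta/2 = u(p) - u(p) = 0.
% Dividing by p then yields d\ln p + 3A = 0, i.e. Eq. (A.2).
```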
7,798
2015-06-16T00:00:00.000
[ "Mathematics" ]
State support of JSC "Russian railways" The main sources of funds for railway investment projects in Russia are still state and regional budgets, funds from the National Welfare Fund, the net profit of Russian Railways, and pension savings from the Pension Fund of Russia. The reform of the system of the Ministry of Railways and the creation of JSC "Russian Railways", which began in 2016, has not yet relieved the state budget of financing the company's needs, and private investment has not been actively used. Moreover, the monopoly has begun to seek to shift the financing not only of the modernization and construction of tracks, but even of the maintenance of existing infrastructure onto the shoulders of the state and shippers. From year to year, the total amount of state support for Russian Railways grows. Part of the state support funds received in the period under review was spent inefficiently by JSC "Russian Railways". The remaining budget funds not used by JSC "Russian Railways" place an additional burden on the economy, fueling inflation through various tariff surcharges, while the available budget funds are not used in full and billions in fines are paid as a result. There are low rates of implementation of individual investment projects, overestimation of expenses for the purchase of equipment, increases in construction costs, contract costs for a number of facilities exceeding the cost determined by state expertise, etc. Introduction The reform of the system of the Ministry of Railways and the creation of JSC "Russian Railways" were justified almost primarily by the prospect of attracting investment resources. It was assumed that the Corporation, unlike the Ministry, would actively use private investment and free the state budget from financing its needs. In December 2016, the government of the Russian Federation increased the authorized capital of Russian Railways by 24.98 billion rubles for this purpose, and in April 2017 by another 29.78 billion rubles. However, it turned out somewhat differently: having set a course to become an infrastructure company, the monopoly began to seek to shift the financing not only of the modernization and construction of tracks, but even of the maintenance of existing infrastructure onto the shoulders of the state and shippers. In modern conditions, one of the factors hindering the country's economic development is the insufficient level of development of infrastructure sectors, including transport infrastructure. Transport, as a system-forming branch of the Russian economy, cannot develop without modern infrastructure. Infrastructure, in turn, is the most important condition for the effective use of the resource potential of both a particular region and the country as a whole. Modern infrastructure and effective management of the infrastructure complex contribute not only to increasing the productivity of the transport process and the availability of transport services in accordance with social standards, but also to the growth of gross domestic product and the creation of new jobs, preserving the common economic space and the transport mobility of the population. In a global economy, building and improving supply chains improves the competitiveness of the transport system on the world market and promotes the growth of exports of transport services. The development of transport infrastructure is determined by the influence of a number of factors that are usually considered infrastructure-forming [5].
At the same time, the creation and smooth operation of infrastructure facilities require significant capital investments, given the high capital intensity and inertia of the industry, while most infrastructure projects are often unique and practically unprofitable. According to the spatial development strategy, the implementation of infrastructure projects is designed to stimulate economic growth, primarily by removing structural restrictions, which will create a transport framework for economic development [9]. Problem Statement According to various estimates, the additional need for infrastructure investment in the Russian Federation is at least 3.5 trillion rubles per year, and for accelerated economic development more than 7 trillion rubles per year. However, such funds are unlikely to be attracted in the coming years [3]. Today, the share of budget financing in the field of infrastructure construction in our country is more than 50 percent (Federal and regional budgets). The share of private investment in the infrastructure project market remains low (from 3 to 5 percent), compared with 30 to 50 percent abroad [2]. When determining the priorities of the investment strategy of JSC "Russian Railways", one of the goals is to implement a set of measures aimed at increasing the capacity of the infrastructure. The new version of the Federal target program "Development of the transport system" provides for financing activities from 2018 to 2021 in the amount of 7.75 trillion rubles [4]. Given that the state has to invest significant funds in the Corporation to implement its investment projects for infrastructure development, we analyze the volumes and directions of state support for the period 2017-2019 (Table 1). The government of the Russian Federation, on a long-term basis until 2030, carries out state regulation of tariffs for the services of JSC "Russian Railways" for the use of public railway transport infrastructure in suburban passenger transport. Losses in the income of the infrastructure owner are compensated in the form of subsidies from the Federal budget. From year to year, the amount of this subsidy grows: in 2019, compared to 2017, it increased by 1.7 billion rubles, or 4.8 %. Research Questions In 2019, the implementation of state projects for the development of railway transport infrastructure continued, financed through the budget of JSC "Russian Railways" by means of contributions to its authorized capital. In addition, in 2018 and 2019, Russian Railways received 20 and 19.5 billion rubles, respectively, from the National Welfare Fund as a contribution to the authorized capital through the issue of preferred shares, to finance the program of modernization of the railway infrastructure of the Baikal-Amur and Trans-Siberian railway lines with the development of throughput and transportation capacities. Real estate was transferred from state ownership as a contribution to the company's authorized capital in the amount of 0.8 and 0.3 billion rubles in those years, respectively. Purpose of the Study Direct financing of projects puts a significant burden on the state budget and does not provide an acceptable ratio of cost and quality of project execution [10]. The growth of state support for Russian Railways from regional budgets and extra-budgetary funds creates an additional burden on these budgets.
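As a quick cross-check of the subsidy figures above (the base amount is an inference from the stated numbers, not a figure quoted in the source):

```latex
% A 1.7 billion ruble increase corresponding to 4.8\% implies a 2017 base of
1.7 / 0.048 \approx 35.4 \ \text{billion rubles},
% so the 2019 subsidy would be roughly 35.4 + 1.7 \approx 37.1 billion rubles.
```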
Research Methods The Accounts Chamber of the Russian Federation, in accordance with the legislation, reviews the results of audits of budget execution and of budget reporting on the execution of the Federal budget by the chief administrators of budget funds. The main results are published on the website of the control body once the Board approves the relevant conclusions. The Board of the Accounts Chamber approves the conclusion on the execution of the Federal budget by the Federal Agency for Railway Transport. The objects of monitoring and control were the Federal Agency for Railway Transport (Roszheldor) and JSC "Russian Railways". Findings In 2017, to continue the implementation of investment projects, JSC "Russian Railways" was provided with budget investments in the form of contributions to its authorized capital totaling 60.6 billion rubles. Projects are implemented within the framework of the investment program approved by the company's Board of Directors [1]. The analysis by the Accounts Chamber showed that the processes of planning budget allocations for contributions to the authorized capital of Russian Railways and of forming the investment program are not interrelated. The amount of contributions to the authorized capital in 2017, together with the balances carried over at the beginning of the year, exceeded the amount of Federal budget funding provided for in the investment program by 46.6 billion rubles. When planning budget allocations for contributions to the authorized capital, Roszheldor does not properly analyze the possibility of using budget funds or the pace of their actual disbursement. This led to the formation of significant balances at the year end. Thus, when changes were made to the budget for 2017, Roszheldor's proposals for additional financing of the project "Complex reconstruction of the Gorky-Kotelnikovo-Tikhoretskaya-Krymskaya section with a bypass of the Krasnodar railway junction" in the amount of 7 billion rubles were taken into account. As justification for the need for additional project financing in 2017, Roszheldor cited the need to maintain the pace of work in order to complete the project on time. Yet at the end of 2017, the undisbursed balance of Federal budget funds allocated for the project amounted to 9.8 billion rubles. When setting the commissioning deadlines for a number of facilities in contracts with Russian Railways, Roszheldor does not take into account the parameters provided for in the investment project passports. Thus, in accordance with the agreement of April 27, 2017, the deadline for commissioning individual facilities within the project for the development of the Moscow transport hub is set for 2025, whereas according to the passport, the project implementation period, including reaching design capacity, runs to 2020. Roszheldor's poor annual planning of the budget allocations directed to financing the investment program of JSC "Russian Railways" leads to the company's failure to fulfill the obligations accepted under share transfer agreements financed by budget investments and, as a consequence, to the public investment targets set through the end of 2017 not being met. According to the results of 2017, budget allocations in the amount of 3.7 billion rubles provided under the share transfer agreement dated April 27, 2017 were not disbursed.
As a result, under the terms of the share transfer agreement dated April 27, 2017, the estimated amount of the fine is 5 % of the amount of the transferred contributions to the authorized capital of JSC "Russian Railways" (29.8 billion rubles), or 1.5 billion rubles. One of the reasons for the formation of budget balances is the slow pace of implementation of individual investment projects. Thus, the project "Integrated development of the Mezhdurechensk-Taishet section" was to be completed in 2019 to develop a promising cargo flow towards the ports of the Far East. The project has been funded from the Federal budget since 2013. Due to non-fulfillment of project implementation obligations, by the beginning of 2017 the remaining unused budget funds amounted to 20.5 billion rubles. These funds had been transferred back in 2013, 2014 and 2015. In 2017, JSC "Russian Railways" accepted works on this investment project for only 3.3 billion rubles. In accordance with decisions taken by the Government, Russian Railways was granted the right to use, in 2017, unused contribution balances in the amount of 15 billion rubles for the project of constructing the Prokhorovka-Zhuravka-Chertkovo-Bataysk railway line and for other projects. According to the explanation of Russian Railways, the non-use of budget funds is due to non-fulfillment of obligations by the contractor [1]. In total, Federal budget funds in the amount of 80.5 billion rubles were used to finance investment projects of JSC "Russian Railways" in 2017, which is 69 % of the total amount of contributions to the authorized capital plus the balances of previous years at the beginning of the year. The amount of completed works accepted is 68.9 billion rubles. The analysis by the Accounts Chamber showed that the expenses of JSC "Russian Railways" for the purchase of equipment for a number of construction and reconstruction projects within the investment program in 2017, financed from the Federal budget, exceeded the estimated cost of equipment approved by the state expertise (taking into account the corresponding deflator set by the forecast of socio-economic development of the Russian Federation). In particular, in 2017 JSC "Russian Railways" signed contracts for the supply of equipment for the facility "Complex reconstruction of the Kotelnikovo-Tikhoretskaya-Korenovsk-Timashevskaya-Krymskaya section with a bypass of the Krasnodar junction of the North Caucasus railway. Construction of the second track on the Forgotten-Polivyansky section" for a total amount of 233.6 million rubles, which is 27 million rubles higher than the cost of equipment approved by the positive conclusion of the state expert examination of October 10, 2016, taking into account the deflator for 2017 (206.6 million rubles). Based on the results of the audit of the use of NWF funds for the modernization of the railway infrastructure of the BAM and Trans-Siberian railway, the parameters of the passport of this infrastructure project were adjusted in 2017, and the start of the operation phase was postponed from 2018 to 2020. As part of the adjustment of the project passport parameters, the Accounts Chamber's proposal to direct to the project's financing the income of 7.9 billion rubles received as interest on balances of NWF funds was taken into account.
In the new version of the project passport, Federal budget funding was reduced by the specified amount. In July 2017, JSC "Russian Railways" made additional advances to contractors at the expense of the National Welfare Fund in the amount of 3.7 billion rubles, which significantly exceeded their needs for 2017. As a result, the unused balance of NWF funds held by contractors amounted to 3.2 billion rubles (86 % of the amount of additional advance payments). Meanwhile, in accordance with the procedure for settlements under contracts with contractors approved by the company's order, advance payments under contracts for construction and installation work are to be offset in full annually, no later than December 31. When calculating the initial (maximum) prices of construction and installation contracts in 2017, Russian Railways used deflators that did not correspond to the forecast of socio-economic development of the Russian Federation. This led to an increase in the cost of construction by 45.7 million rubles and, accordingly, in expenses at the expense of the National Welfare Fund. In particular, according to the results of the state examination, the estimated cost of work on the facility "Reconstruction of the locomotive maintenance point at the Ussuriysk station" in the prices of the 1st quarter of 2016 amounted to 405 million rubles. As a result of applying a deflator index that did not correspond to the forecast when calculating the initial (maximum) price of the contract, the contract price for work on this facility amounted to 521.1 million rubles, exceeding by 17 million rubles the initial (maximum) price calculated with the deflator set by the forecast. Vadim Mikhailov, first Deputy General Director of Russian Railways, noted that in some cases Russian Railways, without waiting for contributions from the Federal budget, finances investment projects at its own expense so as not to interrupt the construction process, with subsequent reimbursement from the Federal budget. This allows a number of investment projects to be executed ahead of schedule. The Board took the following decisions: based on the results of the monitoring of the investment program of JSC "Russian Railways", to send information letters to Roszheldor and JSC "Russian Railways"; based on the results of the inspection of the use of NWF funds for the modernization of the BAM and Trans-Siberian railways, to send a representation to JSC "Russian Railways". Reports on the results of these audits are sent to the chambers of the Federal Assembly. The long-term development program of JSC "Russian Railways" until 2025 envisages the development of public-private partnership in railway transport (including concessions) for the construction of new lines, involving public resources, investments of the infrastructure owner, interested cargo owners and other "working arrangements", as well as an additional four trillion rubles of private investment [6]. Public-private partnership is a promising and long-term form of cooperation between the state and private business. It allows attracting technologies and management experience accumulated by businesses, stimulating the investment activity of private capital, and improving the efficiency of public investment, including in infrastructure projects in railway transport. The variety of PPP forms and models used allows a mutually beneficial distribution of risks between project participants.
The use of public-private partnership mechanisms helps to increase the competitiveness of infrastructure projects in the investment resources market. Therefore, new legislative initiatives are needed to encourage private investment in infrastructure construction, including in railway transport. Given the importance and role of transport infrastructure, discussions continue today about which mechanisms for attracting private investment are most effective. The creation and modernization of transport infrastructure is, on the one hand, the task and responsibility of the state, since it has all the characteristics of a public good. On the other hand, its development is impossible without the participation of business, especially since there is a gap between the actual and necessary amounts of budget financing for infrastructure development. Many European countries faced this "infrastructure gap" in the 1990s. In the context of budget constraints, it required more active involvement of private capital in the infrastructure sector, which was done and significantly accelerated the creation of new and the modernization of existing infrastructure facilities. Public-private partnership (PPP) can act as a mechanism for state support of infrastructure investments: being a special scheme for implementing an investment project on mutually beneficial terms for both business and the state, it allows attracting "long money" from private investors to solve tasks that also have a social effect. In world practice, the most flexible and effective form of attracting infrastructure investment is the concession. The first concessions in Russia appeared in 2006, and today they account for almost 80 % of all projects that have passed the stage of commercial closure. Under concession agreements, infrastructure bonds are issued, which, in comparison with corporate bonds, have a number of advantages, including state guarantees and risk insurance. Concessions and public-private partnership are the main forms of implementation of infrastructure projects. Their main difference lies in the approach to the transfer of ownership of the facility and the possibility of pledging such rights. This format can provide a multiplier effect in the economy as a whole and accumulate investment resources for infrastructure construction more actively. Conclusion 1. JSC "Russian Railways", in implementing its investment program, spends the allocated Federal budget funds inefficiently. 2. Unused budget balances of JSC "Russian Railways" place an additional burden on the economy, fueling inflation through various tariff surcharges, while the available budget funds are not used in full. 3. Having obtained yet another round of state support, the monopoly turns out to be unable to manage this money in a businesslike way and is obliged to pay billions in fines. Thus, at the end of 2017, 3.7 billion rubles were not spent; as a result, under the terms of the share transfer agreement dated April 27, 2017, the estimated fine is 5 %, or 1.5 billion rubles. 4. The processes of planning budget allocations for contributions to the authorized capital of JSC "Russian Railways" and of forming the investment program are not interrelated. When planning budget allocations for contributions to the authorized capital, Roszheldor does not analyze the possibilities of their use or the pace of their actual disbursement. As a result, unused balances accumulate.
As a result, the amount of contributions to the authorized capital in 2017, together with the balances formed at the beginning of the year, exceeded the amount of funding from the Federal budget provided for in the investment program by 46.6 billion rubles. This approach, however, applies not only to contributions to the authorized capital. When agreeing on the implementation deadlines of Russian Railways projects, the regulator does not check them against the passports of these projects, extending them, as happened with a number of facilities of the investment program of the Moscow transport hub. As a result, their implementation period increased by five years. 5. One of the reasons for the underutilization of budget funds is the low pace of implementation of individual investment projects. For example, the project "Integrated development of the Mezhdurechensk-Taishet section", which is necessary for developing a promising cargo flow towards the ports of the Far East, was to be completed in 2019. The state budget has been funding it since 2013. At the same time, the plans were not implemented in the proper amount, and by the beginning of 2017 the balance of unused budget funds reached 20.5 billion rubles. 6. Overstatement of expenses for the purchase of equipment financed from the Federal budget was identified, as well as an increase in the cost of construction and, accordingly, in expenses at the expense of the National Welfare Fund. 7. In total, Federal budget funds in the amount of 80.5 billion rubles were used to finance the monopoly's investment projects in 2017, which is 69 % of the total amount of contributions to the authorized capital plus the balances of previous years at the beginning of the year. At the same time, the monopoly accepted completed works for only 68.9 billion rubles. In other words, 31 % of the funds were not used, and the budget spent 11.6 billion rubles more than the amount of work accepted. 8. The monopoly did not give clear confirmation of what the 20-26 billion rubles received from the additional 2 % tariff surcharge were spent on, and how. 9. JSC "Russian Railways", in implementing the project "Development of public railway infrastructure on the Mezhdurechensk-Taishet section in 2016-2017 and the expired period of 2018", allowed inefficient use of budget funds and organized the work at a low level. The project is designed to increase the capacity of the line, which is necessary for the economic development of Khakassia, Kuzbass, the south of the Krasnoyarsk territory and the development of the Tyva coal deposits. It has been implemented since 2010, but the state-owned company has been unable to complete it. According to the Accounts Chamber, the company has received interest on funds allocated from the budget, finances work carried out without permits and at inflated cost, and makes unjustified advances to contractors. The total cost of the project is 45.6 billion rubles, of which 35.7 billion rubles come from the Federal budget and 9.9 billion rubles from Russian Railways. The state, represented by the Federal Agency for Railway Transport, had fully met all of its financial obligations as early as 2015. However, due to the low level of work organization, including the lack of project documentation that has passed state expertise in accordance with the established procedure, the goals of the project "Integrated development of the Mezhdurechensk-Taishet section" may not be achieved within the established time frame [7].
Construction deadlines are not being met for 13 of the 22 facilities; they were extended from 2016 to 2020. In December 2018, the technical readiness of 8 facilities was less than 50 %, and work on three facilities had not even begun. The audit also revealed risks of inefficient use of budget investments. Thus, the cost of contracts for a number of facilities exceeds the cost determined by the state expertise. In addition, when forming the initial contract prices, the company used deflator indices that do not correspond to the indicators of the socio-economic development forecast. As a result, the cost of contracts for the construction of a number of facilities is overstated by 89.8 million rubles. This, according to the Accounts Chamber, entails risks of inefficient use of Federal budget funds in the specified amount. Meanwhile, Russian Railways received income from placing budget investments in accounts with credit organizations. Thus, on May 30, 2014, the state-owned company placed a deposit of 12.5 billion rubles in one of the banks; the interest income of 2.4 billion rubles was spent by JSC "Russian Railways" on its own business activities. The passport of the infrastructure project was repeatedly re-approved: the values of quality indicators were reduced and one investment facility worth 110 million rubles was excluded, but the total cost of the project was not reduced. Moreover, the company financed construction on a number of sites that was carried out without permits. The audit showed that financing construction and reconstruction works in the period under review in the absence of design and estimate documentation that has passed state expertise in accordance with the established procedure entails risks of additional Federal budget expenditures. The cost of contracts for the construction of a number of facilities exceeded the estimate made by the state expertise by a total of 478.2 million rubles. A similar excess of hundreds of millions of rubles was allowed for the supply of equipment. In addition, JSC "Russian Railways" allowed expenditures on designer supervision of a number of facilities, although these costs had been excluded from the estimates during the state examination, which again led to inefficient use of Federal budget funds. Unjustified advance payments to contractors under contract agreements, which were practically not carried out in 2018, resulted in an increase in accounts receivable by 1.1 billion rubles, or 2.3 times. As of the date of completion of the audit, accounts receivable amounted to 1.9 billion rubles, 84 % of which are budget investments (1.6 billion rubles). 11. The volume of unfinished investments of JSC "Russian Railways" increased in 2017 by 13 billion rubles and amounted to 43.7 billion rubles. The income of JSC "RZD" in the form of interest from placing NWF funds in an account with VTB Bank for 2015-2017 amounted to 7.9 billion rubles. As of January 1, 2018, total expenditures (since 2013) on the implementation of the infrastructure project "Modernization of the BAM and Trans-Siberian railway infrastructure" amounted to 140.6 billion rubles, or 25 % of the total cost. In total, in 2017, construction of 16 planned facilities worth 4.9 billion rubles was completed. At the same time, permits for commissioning these facilities have not been issued, and the facilities have not been put into operation. In addition, work has not been completed on 6 facilities whose construction should have been completed in 2017.
The construction readiness of these facilities ranges from 60 to 90 % [8]. Thus, all of the above indicates that JSC "Russian Railways" needs increased control over its use of state support funds: it is necessary to carefully monitor how the cost of work and equipment is determined and how effectively the work is organized, and to stop the practice of making capital investments without positive conclusions of the state expertise. 12. Infrastructure investment is considered one of the most effective tools for stimulating economic growth. Of course, the real dynamics of infrastructure investment will be determined primarily by the activity of the state. There is no doubt that the participation of the state is extremely important here, because it acts primarily as a provider of financial and legal guarantees, which help maintain the necessary level of confidence in long-term investments such as investments in the creation and modernization of infrastructure.
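As an arithmetic cross-check of the figures in conclusion 7 (all inputs are the numbers quoted above):

```latex
% Total funds available (contributions plus carried-over balances):
80.5 / 0.69 \approx 116.7 \ \text{billion rubles};\qquad
% unused share:
116.7 - 80.5 \approx 36.2 \ \text{billion rubles} \ (\approx 31\%);
% funds used in excess of works accepted:
80.5 - 68.9 = 11.6 \ \text{billion rubles}.
```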
6,274.8
2021-01-01T00:00:00.000
[ "Economics" ]
Spectral decoupling for training transferable neural networks in medical imaging Summary Many neural networks for medical imaging generalize poorly to data unseen during training. Such behavior can be caused by overfitting easy-to-learn features while disregarding other potentially informative features. A recent implicit bias mitigation technique called spectral decoupling provably encourages neural networks to learn more features by regularizing the networks' unnormalized prediction scores with an L2 penalty. We show that spectral decoupling increases the networks' robustness to data distribution shifts and prevents overfitting on easy-to-learn features in medical images. To validate our findings, we train networks with and without spectral decoupling to detect prostate cancer on tissue slides and COVID-19 in chest radiographs. Networks trained with spectral decoupling achieve up to 9.5 percentage points higher performance on external datasets. Spectral decoupling alleviates generalization issues associated with neural networks and can be used to complement or replace computationally expensive explicit bias mitigation methods, such as stain normalization in histological images. Introduction Neural networks have been adapted to many medical imaging tasks with impressive results, often surpassing human counterparts in consistency, speed and accuracy [1]. However, these networks are prone to overfit easy-to-learn, or statistically dominant, features while disregarding other potentially informative features. This leads to poor generalisation to data generated by different medical centres, reliance on the dominant features, and lack of robustness [2,3]. For example, a neural network classifier for skin cancer, approved for use as a medical device in Europe, had overfit the correlation between surgical margins and malignant melanoma [4]. Because of this, the false positive rate of the network increased by 40 percentage points during external validation. Furthermore, three out of five neural networks for pneumonia detection showed significantly worse performance during external validation [5], and recent neural networks for COVID-19 detection rely on confounding factors rather than actual medical pathology [6]. Even small differences in the sharpness of images from two different scanners can degrade the performance of neural networks significantly (see Section 3.2). Although generalisation issues need to be solved before any neural networks can be applied in clinical practice, the phenomenon is still poorly understood [7]. This may be because the detection of generalisation issues is hard and often requires state-of-the-art methods of explainable AI [6]. An external dataset is one of the only means of testing generalization performance, although it will uncover generalisation issues only when the neural network fails to generalize to that dataset. Even if a neural network achieves high overall accuracy on the external dataset, it may still consistently fail for some subset of samples. Any particular external dataset may also contain the same sources of bias as the training data.
Explicit methods have been proposed to address specific sources of bias, like using augmentation to address staining differences in tissue section slides [8] or normalising each image against a common standard [9,10]. The obvious problem with explicit methods is that they only control for selected biases, and more subtle sources of bias, like small differences between patient populations, may go unaddressed. Implicit methods of bias control are required before neural networks can be safely applied to clinical practice. Learning dominant features at the cost of other potentially informative features, also known as shortcut learning, is a common problem in all neural networks and one of the main reasons behind the generalisation issues [3]. Shortcut learning occurs mainly because of gradient starvation, where gradient descent updates the parameters of a neural network in directions capturing only dominant features, thus starving the gradient of other features [11]. The gradient descent algorithm finds a local optimum by taking small steps towards the opposite sign of the derivative, the direction of the steepest descent [12]. The recently proposed method of spectral decoupling [2] provably decouples the learning dynamics leading to gradient starvation when using cross-entropy loss, thus encouraging the network to learn more features. The effect is achieved by simply adding an L2 penalty on the unnormalised prediction scores (logits) of the network. Spectral decoupling In spectral decoupling, the network is regularised by imposing an L2 penalty on the unnormalised outputs of the last layer of the network, or logits ŷ, which is then added to the cross-entropy loss L_CE. This penalty provably [2] avoids the conditions leading to gradient starvation in networks trained with cross-entropy loss. Two variants of the penalty are defined as

$$\mathcal{L} = \mathcal{L}_{CE} + \frac{\lambda}{2}\,\|\hat{y}\|^2 \tag{1}$$

$$\mathcal{L} = \mathcal{L}_{CE} + \frac{\lambda_k}{2}\,\|\hat{y} - \gamma_k\|^2 \tag{2}$$

For Equation 1, there is a single tunable hyper-parameter λ. For Equation 2, the hyper-parameters λ_k and γ_k are tuned separately for each class k, for a total of four hyper-parameters in the binary classification task of our study. Pseudo-code for implementing Equation 1 is presented in Algorithm 1:

```python
# Algorithm 1: training step with spectral decoupling (Equation 1).
# F is torch.nn.functional; lam is the penalty weight λ; the penalty is averaged over the batch.
for images, targets in loader:
    # Pass images through the network.
    logits = net(images)
    loss = F.cross_entropy(logits, targets) + 0.5 * lam * (logits ** 2).mean()
```

All digital slide images are cut and processed with HistoPrep [15]. A summary of the prostate datasets is presented in Table 1.
COVID-19 For COVID-19 detection, we use large open-access repositories of chest radiographs. The COVIDx8 dataset is compiled from five different open-source repositories and contains radiographs from over 15,000 patient cases from at least 51 countries, with over 1,500 COVID-19 positive patient cases [16,17,18,19,20]. The BIMCV± dataset (iteration 2) contains 3,033 positive and 2,743 negative COVID-19 patient cases, and 9,171 radiographs after exclusions, collected from the same medical centres during the same time period [21]. Only PA and upright AP radiographs [16] with windowing information were selected from the BIMCV± dataset. The PadChest dataset contains over 67,000 COVID-19 negative patient cases and 114,227 radiographs from a single medical centre in Valencia, Spain [22]. 19 corrupted images were excluded from the PadChest dataset. The COVIDx8 dataset is reserved as an external dataset, and two training datasets are compiled: one using only the BIMCV± dataset, and one adding the PadChest and BIMCV± datasets together. 5 % of both training datasets is set aside for validation. Simulation datasets Two simulation experiments are used to investigate more closely the utility of spectral decoupling as an implicit bias mitigation method. For both experiments, the dataset from Helsinki University Hospital described in Section 2.2 is modified in specific ways. Cutout dataset A dominant feature present in a real-world dataset could be, for example, a biological marker, a certain cancer type or a scanner artefact. To represent these kinds of features, 16 cutouts of 8 × 8 pixels are added to the images (Figure 1). For the experiment, 200,000 images are selected for the training set, with an equal number of samples with cancerous and benign annotations. For the training set, cutouts are added to 25 % and 2.5 % of the benign and cancerous samples, respectively. This makes the presence of cutouts in the image spuriously correlated with a benign annotation. If the network overfits this correlation, cancerous samples with cutouts may be classified as benign. Thus, for the test set, cutouts are added to all cancerous samples and to none of the benign samples. For a control training set, cutouts are added to all images. Networks trained on this dataset provide a reference point for the performance with cutouts but without the spurious correlation. Robustness dataset Shifts from the training data distribution are common when evaluating a neural network on datasets from different medical centres. Small changes in the images due to differences in, for example, sample preparation or imaging equipment can cause shifts from the training data distribution. We assess the networks' robustness to these data distribution shifts by applying transformations of increasing magnitude to the images in the test set. Image sharpness and stain intensity were selected to represent possible dataset shifts caused by differences in the scanner used and in sample preparation, respectively.
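For illustration, a minimal sketch of the cutout construction from Section 2.3.1 is given below. The text specifies only 16 cutouts of 8 × 8 pixels per image; the fill value, the random placement and the helper name are assumptions.

```python
import numpy as np

def add_cutouts(image, n_cutouts=16, size=8, fill=0.0, rng=None):
    """Paste n_cutouts size-by-size squares at random positions into a copy of image."""
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = out.shape[:2]
    for _ in range(n_cutouts):
        y = int(rng.integers(0, h - size + 1))
        x = int(rng.integers(0, w - size + 1))
        out[y:y + size, x:x + size] = fill  # cutouts may overlap; the text does not say otherwise
    return out
```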
The UniformAugment augmentation strategy consists of applying random transformations with a uniformly sampled magnitude to the images before feeding them to the network [23]. Sharpening the image is included in the set of possible transformations [24], meaning that the network sees sharpened images during training. Thus, the data distribution shift caused by sharpening images is explicitly mitigated, which should help the network to predict correct labels for evaluation images with higher sharpness. Blurring the image is not included in the set of possible transformations [24], meaning that the network will not see randomly blurred images during training. Thus, the data distribution shift caused by blurring the images is not explicitly mitigated, and the use of UniformAugment should not directly help the network with blurry evaluation images. By evaluating the network with increasingly sharpened or blurred images, it is possible to assess whether spectral decoupling can improve upon situations where the data distribution shift is, and is not, explicitly addressed. Additionally, there are large differences in the sharpness values of real-world datasets from different medical centres and scanners (Figure 2). Step-wise blurring is achieved by simple averaging with an n × n kernel, where n ∈ {2, ..., 20}. A sharpened version of the image, x_sharp, is created by applying a sharpening kernel to the original image x_original. Sharpness is then gradually increased by creating a new image

$$x_{blend} = \alpha\, x_{sharp} + (1 - \alpha)\, x_{original},$$

where α ∈ {0, 0.1, ..., 1} defines the amount of sharpness increase. To assess the data distribution shifts caused by differences in sample preparation, the intensities of the haematoxylin and eosin stains are computationally modified. Haematoxylin highlights cell nuclei, and eosin cytoplasm, connective tissue and muscle. The stain intensities depend on multiple steps in the staining process, and thus the final colour distribution of the slide images varies a lot [8]. The stain intensity modification is achieved by first separating the haematoxylin and eosin stains with the Macenko method [25]. The concentration of each stain can then be reduced by multiplication with a value between 0 and 1 before the stains are combined back together. An example of the method is shown in Figure 3. Training details An EfficientNet-b0 network [26], with dropout [27] and stochastic depth [28] of 20 % and an input size of 224 × 224, is used as the prostate cancer classifier for all experiments. For augmentation, the input images are randomly cropped and flipped, resized, and then transformed with UniformAugment [23], using a maximum of two transformations. Each network is trained for 90 epochs, with a learning rate of 0.005, a batch size of 512 and cosine scheduling. A weight decay of 0.0001 is used for networks trained without spectral decoupling. When training neural networks with spectral decoupling, weight decay is disabled. For COVID-19 detection, we replicate the training regimen from [6], where a DenseNet-121 network [29] is pre-trained on the ImageNet dataset and then fine-tuned for 30 epochs as a binary COVID-19 classifier. All hyper-parameters, other than those of spectral decoupling, are set to the values reported in the paper. For spectral decoupling, Equation 2 is used for the first simulation experiment on dominant features (Section 3.1) and for COVID-19 detection (Section 3.4). Equation 1 is used for all other experiments (Sections 3.2 and 3.3).
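To make the two penalty variants concrete, here is a minimal sketch of a loss function covering Equations 1 and 2. The exact per-class form of Equation 2 (how λ_k and γ_k enter) is an assumption based on the description above, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def sd_loss(logits, targets, lam=0.01, lams=None, gammas=None):
    """Cross-entropy with a spectral decoupling penalty on the logits.

    With only `lam`, this follows Equation 1. If per-class tensors `lams` and
    `gammas` are given (one entry per class), a per-class variant in the spirit
    of Equation 2 is used; its exact form here is an assumption.
    """
    ce = F.cross_entropy(logits, targets)
    if lams is None:
        penalty = 0.5 * lam * (logits ** 2).mean()          # Equation 1
    else:
        l = lams[targets].unsqueeze(1)                      # (batch, 1), class-dependent λ_k
        g = gammas[targets].unsqueeze(1)                    # (batch, 1), class-dependent γ_k
        penalty = (0.5 * l * (logits - g) ** 2).mean()      # Equation 2 (assumed form)
    return ce + penalty

# Usage sketch: loss = sd_loss(net(images), targets, lam=0.01)
```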
Each experiment is repeated five times, and summary metrics over these runs are reported. All reported performance metrics are balanced between the classes when necessary, and a cut-off value of 0.5 is used to obtain a binary label from the normalised predictions of the network. To compare paired receiver operating characteristic (ROC) curves, we use the one-tailed DeLong's test and report the Z-values and p-values [30]. Experiments In this section, the utility of using spectral decoupling as an implicit bias mitigation method is explored with both simulation and real-world experiments. Dominant features To assess the utility of spectral decoupling in situations where the training dataset contains a strong dominant feature, the cutout dataset defined in Section 2.3.1 is used. Five networks are trained with either spectral decoupling or weight decay on the training set. Additionally, five networks are trained on the control dataset with weight decay to provide a reference point for the performance under no spurious correlation caused by the dominant feature (Table 2). Accuracy is defined as the fraction of all instances that were correctly identified, and recall as the fraction of positive instances that were correctly identified. The use of spectral decoupling increases the accuracy by 8.5 percentage points over weight decay and almost reaches the performance of the network trained on the control dataset. The networks trained without spectral decoupling appear to make false predictions based on the dominant feature, although the class activation maps [34] of the trained neural networks do not significantly differ between weight decay and spectral decoupling. As hyper-parameters were tuned on the test set, the results should be interpreted only as a demonstration that spectral decoupling can offer an important level of control over the features that are learned. The simpler variant of spectral decoupling in Equation 1 did not increase the networks' performance in any way, and only after extensive hyper-parameter tuning did Equation 2 produce the reported results. The hyper-parameter tuning was sensitive to the selected parameters, and even small changes to the final values significantly reduced the accuracy of the neural network. Similar results were also reported with the real-world example in the original paper [2]. As extensive hyper-parameter tuning can deter researchers from applying the method, we limit hyper-parameter tuning to a simple grid search over limited search spaces for all other experiments, as described in Section 2.1. Robustness To assess whether spectral decoupling increases neural networks' robustness to data distribution shifts, five networks are trained with either spectral decoupling or weight decay and evaluated on the robustness dataset described in Section 2.3.2. Additionally, five networks are trained with weight decay but without UniformAugment to assess how much the augmentation strategy improves robustness. The robustness to data distribution shifts caused by sharpening, blurring and reducing the intensity of either the haematoxylin or the eosin stain is presented in Figure 4.
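Concretely, the image-degradation sweep described in Section 2.3.2 can be sketched as follows. The 3 × 3 sharpening kernel is an assumption, since the text does not specify the kernel used, and the sketch operates on single-channel float images.

```python
import numpy as np
from scipy.ndimage import uniform_filter, convolve

def blur_series(image, ns=range(2, 21)):
    """Step-wise blurring: average the image with an n x n kernel for n = 2..20."""
    return [uniform_filter(image, size=n) for n in ns]

def sharpness_series(image, alphas=np.arange(0.0, 1.1, 0.1)):
    """Blend the original with a sharpened copy: x_blend = a*x_sharp + (1-a)*x_orig."""
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=float)  # assumed sharpening kernel
    sharp = convolve(image, kernel, mode="nearest")
    return [a * sharp + (1 - a) * image for a in alphas]
```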
The performance of all networks trained with weight decay and without the augmentation strategy degrades to roughly 50 % accuracy. Training the networks again with UniformAugment significantly increases robustness to all data distribution shifts except the haematoxylin stain intensity reduction (Figure 4C). When the data distribution shift is included as a possible augmentation (Figure 4A), the increase in accuracy is almost 40 percentage points with the most severe distribution shift. When the data distribution shift is not included as a possible transformation (Figure 4B-D), the networks trained with UniformAugment still generally perform better than the networks trained without augmentation. This result demonstrates the importance of using augmentation as an explicit bias mitigation method. Although the use of augmentation already increased the accuracy by almost 40 percentage points, the use of spectral decoupling improves the accuracy by a further 4.6 percentage points with the most severe data distribution shift (Figure 4A). The increase in accuracy is more pronounced with blurring, 12.4 percentage points with n = 19 (Figure 4B), and with eosin stain intensity reduction, where networks trained with spectral decoupling achieve 1.2 to 8.5 percentage points higher accuracy with a 0.9 to 0.0 multiplier (Figure 4D). These data distribution shifts are not included as possible transformations in UniformAugment, and thus are not explicitly controlled. With haematoxylin stain intensity reduction, all networks degrade similarly in performance (Figure 4C). These results show that spectral decoupling is able to significantly complement and improve upon augmentation, as well as improve robustness to data distribution shifts that are not explicitly controlled by augmentation. Prostate cancer detection To assess whether the results of the simulation experiments translate into improvements on real-world datasets, we train networks with and without spectral decoupling to detect prostate cancer on haematoxylin and eosin stained whole slide images of the prostate. These networks are then evaluated on the four different datasets described in Section 2.2. The results are presented in Figure 5. Networks trained with spectral decoupling show higher performance on all evaluation datasets. The difference between weight decay and spectral decoupling becomes more pronounced as we move further away from the training dataset distribution. Finally, there is a 9.5 percentage point increase in accuracy over weight decay on the dataset from a different medical centre. The reported performances are not comparable between evaluation datasets, as each dataset has been annotated with a different strategy and thus contains a different amount of label noise.
To further explore why networks trained without spectral decoupling fail to generalise to the dataset from Radboud University Medical Center (Figure 5D), the robustness to haematoxylin and eosin stain intensities is explored in Figures 6A-B. Spectral decoupling is less sensitive to both haematoxylin and eosin stain intensity reduction, and interestingly, networks trained with weight decay actually increase in accuracy when the eosin stain intensity is reduced. This indicates that the difference between spectral decoupling and weight decay performance in Figure 5D may be partly due to differences in the stain intensities between the two medical centres. To explore this possibility, the stain intensities of the external dataset are normalized with the Macenko method [25] to match the training data stain intensities, and the resulting performance increases are reported in Figure 6C. Networks trained with either spectral decoupling or weight decay benefit from stain normalization. Stain normalization is especially beneficial for networks trained with weight decay, where the mean network accuracy increases by 7.5 percentage points. Networks trained with spectral decoupling still perform better than networks trained with weight decay coupled with stain normalization. These results demonstrate that spectral decoupling can complement or even replace normalization methods, with negligible computational requirements (Figure 6D). COVID-19 detection To assess whether spectral decoupling can help in real-world situations with strong dominant features and spurious correlations, we train 5 networks with and without spectral decoupling to detect COVID-19 positive patients in chest radiographs. Two different training datasets are used to train the networks, and all networks are evaluated on the same external validation set, described in Section 2.2.2. We first train neural networks with the BIMCV± dataset, which represents an ideal situation where both the positive and negative samples originate from similar sources. Second, we train networks with the combined PadChest and BIMCV± dataset. This dataset represents a situation where the network can easily achieve high performance by only learning to detect where a sample originates, as most of the negative samples come from a single medical centre.
After training all networks, the predictions from each network are averaged to obtain ensemble predictions for both weight decay and spectral decoupling. ROC curves for the ensemble predictions are presented in Figure 7, with bootstrapped (n = 1000) 95 % confidence intervals (CI) for each area under the ROC curve (AUROC) value. When training networks with the combined PadChest and BIMCV± dataset, the AUROC values of networks trained with either method decrease, although the number of training samples is increased more than tenfold. The decrease in AUROC is similar for both weight decay and spectral decoupling, 0.065 and 0.067, respectively. This indicates that spectral decoupling is unable to mitigate bias in the combined dataset. As most of the negative samples originate from a single medical centre, shortcut learning seems to happen even though spectral decoupling encourages the network to learn more features. Detecting where a sample originates is especially easy with radiographs, due to systematic differences between data repositories and medical centres, which can be exploited by a neural network [6]. Thus, the higher AUROC value of spectral decoupling is more likely due to increased robustness to data distribution shifts than to avoidance of shortcut learning. Discussion Generalisation performance is considered the main challenge standing in the way of true clinical adoption of neural networks [7]. Van der Laak et al. [7] argue that there is a need for public datasets which are truly representative of clinical practice. Although this is indeed important, we argue that training datasets, no matter how large, will never account for all possible variations caused by differences in imaging equipment, sample preparation and patient populations. Thus, it is crucial to couple extensive multi-source datasets with explicit and implicit bias mitigation methods to train neural networks which are robust to unseen variations. Two explicit methods of bias mitigation have been proposed for medical imaging. Augmentation of the training samples is crucial, as it substantially increases robustness to distribution shifts from the training data caused by differences in imaging equipment or sample preparation (Figure 4, [8]). Thus, it is strongly recommended to use extensive augmentation strategies when training neural networks intended for clinical practice. Normalization of all images to a common standard would substantially reduce distribution shifts [9,10,35], but comes with a considerable computational cost (Figure 6D). Both methods address important problems and should be complementary to any implicit methods of bias control. Spectral decoupling is, to our knowledge, the first implicit bias mitigation method for addressing the generalisation issues in neural networks. The method is complementary to augmentation, increasing the robustness to distribution shifts already addressed with augmentation (Figure 4A). Above all, spectral decoupling significantly increases the robustness to distribution shifts not addressed by augmentation (Figure 4B) and could be used to replace computationally expensive stain normalisation methods (Figure 6C).
By encouraging the neural network to learn more features, spectral decoupling can also help in situations where the training dataset contains strong dominant features or spurious correlations (Table 2). This is crucial, as the dominant features can also be inherent to the data, such as different cancer types. For example, with prostate cancer, different Gleason grades [36] are often unbalanced in the training set. Due to gradient starvation [11], the features of the underrepresented Gleason grades may not be learned by the neural network. Balancing the dataset so that all Gleason grades are represented equally is not easy, or even desirable, as the grading is based on a continuous range of histological patterns. In COVID-19 detection, the networks' performance decreased similarly for both weight decay and spectral decoupling (Figure 7) when training the networks on the combined BIMCV± and PadChest dataset. Radiographs contain systematic differences between data repositories and medical centres, such as laterality tokens and differences in the radiopacity of the image borders, which could arise from variations in patient position, radiographic projection or image processing [6]. These differences can be easily leveraged by neural networks to detect where a single radiograph originates. We speculate that spectral decoupling was unable to prevent shortcut learning due to the ease of shortcut learning in the combined PadChest and BIMCV± dataset. In addition, our results showing the ability to prevent shortcut learning (Table 2) were obtained after considerable hyper-parameter optimization, and no significant differences could be seen in the class activation maps between networks trained with either weight decay or spectral decoupling. Thus, removal of any obvious superficial correlations from the training dataset is crucial, as there seems to be a limit to how much spectral decoupling can help with dominant features and spurious correlations. The advantages of spectral decoupling can be clearly seen when the network is evaluated with out-of-distribution samples (Figures 4, 5 and 7). Neural networks trained with spectral decoupling retain their performance on samples further from the training data distribution, which is exactly what is required of neural networks intended for clinical practice [7]. Although using an external dataset may not reveal all generalization problems, it is clear that without spectral decoupling the neural networks fail to generalize to this particular external dataset from Radboud University Medical Center (Figures 5D and 6). Even in COVID-19 detection, where spectral decoupling seems to fail in preventing shortcut learning, the performance of the network is significantly increased over the state of the art. Conclusions Spectral decoupling is the first implicit bias mitigation method for training neural networks to be used across multiple medical centres. The method adds no computational cost, is easy to implement, and complements and improves upon explicit bias mitigation methods. Our results recommend the use of spectral decoupling in all neural networks intended for clinical use. Figure 2: Kernel density estimation of the variance of the images after a Laplace transformation. A higher variance indicates a sharper image. The figure is generated from the pre-processing metrics calculated by HistoPrep [15].
Figure 3: Separation of the haematoxylin and eosin stains with the Macenko method. Figure 4: Robustness to data distribution shifts from the training data. The lines show the mean accuracy and the shaded regions represent one standard deviation around the mean. Figure 5: Neural network performance on evaluation datasets. Each consecutive evaluation dataset moves further from the training data distribution. Networks trained with spectral decoupling improve accuracy by 0.35 (A), 1.0 (B), 3.6 (C) and 9.5 (D) percentage points over weight decay. All models are trained with UniformAugment. Figure 6: Spectral decoupling can complement or even replace computationally heavy stain normalization methods. Robustness to data distribution shifts, on the external dataset, caused by haematoxylin (A) or eosin (B) stain intensity reduction. (C) Network accuracy increases when normalizing haematoxylin and eosin stain intensities with the Macenko method. (D) Comparison of the computational requirements between spectral decoupling and the Macenko method. The images-per-second estimate for spectral decoupling is calculated with Equation 1, where ŷ is a 512 × 1 matrix, and Macenko stain normalisation is performed on resized images of size 224 × 224. Table 2: Results of the simulation study with the cutout dataset on dominant features. The mean and standard deviation (SD) values are reported for each set of five trained networks.
5,831
2021-03-31T00:00:00.000
[ "Medicine", "Computer Science" ]
Intricacy of Mitochondrial Dynamics and Antiviral Response During RNA Virus Infection Viruses are known to hijack the intracellular organelles, including mitochondria, endoplasmic reticulum, lipid droplets, and cytoskeleton to promote its replication. The host responds to invading viruses by mounting antiviral responses and rearrangement of its organelles. In particular, the mitochondria are one of the target organelles exploited by viruses and their proteins to suppress the host antiviral response. In this review, we have comprehensively summarized the impact of mitochondrial dynamics in modulating antiviral response during emerging and re-emerging RNA virus infections caused by genus Flavivirus (Dengue virus, Zika virus, Hepatitis C virus), and SARS-CoV-2, the causative agent of COVID-19 pandemic. In addition to knowledge gaps in mitochondria-virus interaction studies, we discuss recent advancements in therapeutics regulating the mitochondrial dynamics to combat viral infections. INTRODUCTION In addition to being a powerhouse, mitochondria play a crucial role in various cellular functions, including cell-cycle control, cell development, and apoptosis (1)(2)(3). Mitochondria take a central stage in cellular metabolism since the tricarboxylic acid cycle (TCA), fatty acid oxidation (FAO), oxidative phosphorylation (OXPHOS), calcium buffering, and heme synthesis occur within the mitochondria (4). Due to the overarching role in maintaining cellular homeostasis and innate immune response, the tight regulation of mitochondrial function is crucial during cellular stress stimuli and pathogen invasion. During viral infection, pattern recognition receptors (PRRs) trigger the production of interferons (IFNs). However, mitochondrial antiviral-signaling protein (MAVS) acts as a central hub for signal transduction initiated by RIG-I-like receptors, involved in the recognition of viral RNA (Figure 1). Amongst viral pathogens, RNA viruses are the leading cause of human infections and are responsible for major epidemics and pandemics, including the ongoing COVID-19. In particular, the RNA viruses can rapidly mutate resulting in the evolution of new variants which can escape from the host immune surveillance (5). Several studies show that viral infection alters mitochondrial dynamics, a synchronized process to combat extracellular threats and maintain cellular homeostasis (6)(7)(8)(9). Mitochondrial dynamics encompass the process of mitochondrial elongation (through fusion) and mitochondrial division (through fission). In addition, the damaged mitochondria are removed by mitochondria-selective autophagy, a process called mitophagy. Together, the synchronization of these processes maintains the health of the cell and mitochondriaregulated host metabolism. In the following section, we briefly discuss these three processes. The fusion of the mitochondria stimulates RLR-mediated-MAVS signaling along with the interaction of MAVS and STING at mitochondria-associated membranes (MAMs) with the endoplasmic reticulum (16). Koshiba et al. revealed that mitochondrial fusion and mitochondrial membrane potential regulated by MFN1 and MFN2, respectively, are essential for MAVS-mediated signaling (17,18). Moreover, the deletion of MFN1 or MFN2, reduced viral-induced IFNs and proinflammatory cytokines, thereby increasing the viral replication. 
Suppression of mitochondrial fusion is usually favored by the virus for its proliferation and evasion of the antiviral innate immune signaling as evidenced in SARS-CoV, SARS-CoV-2, Influenza, and HIV infection studies (19)(20)(21). Mitochondrial elongation, therefore, exacerbates viral-infection induced-RIG-I-dependent antiviral innate immunity and aids in reducing viral replication ( Figure 2). b) Mitochondrial Fission The splitting up of mitochondria into smaller organelles is known as mitochondrial fission. It initiates with the recruitment of dynamin-related protein (Drp1) to mitochondria upon its posttranslational modification (phosphorylation, nitrosylation, and sumoylation) (22,23). Drp1 is primarily cytosolic but migrates to the outer mitochondrial membrane (OMM) to initiate mitochondrial fission by binding to Fis1. Mitochondrial fission cascade can also be triggered independent of Drp1, by endoplasmic reticulum (ER) tubules and actin filaments (24), wherein close contacts between ER and mitochondria-associated membranes (MAMs) interact with the fission apparatus (25,26). The mitochondrial fission is complemented by mitophagy for the exclusion of the damaged portion of the organelle (24,27) (Figure 3). Smaller mitochondrial size diminishes the RLR (RIG-I-like receptor) signaling (9). Thus, depletion of Drp1 prevents mitochondrial fission and boosts antiviral response (28,29). Furthermore, the association of ER and mitochondria leads to the stimulation of cytosolic RNA sensors-RIG-I and MDA5 in a MAVS-dependent manner during mitochondrial fission (7). Viral dsRNA intermediates promote mitochondrial fission leading to a decrease in RLR signaling activated by the viral RNA exposure to the host immune system, thereby enabling the virus replication (28). Likewise, during bacterial infection, mitochondrial fragmentation is promoted, reducing the host immune response to enable their intracellular survival (21). c) Mitophagy Mitophagy represents the selective autophagy of faulty or damaged mitochondria to preserve homeostasis of mitochondrial dynamics at large. Mitochondrial fission upon stress, infection, or pathological diseases leads to the trigger of mitophagy as a final cellular rescue response (16,30,31). Mitophagy is facilitated by two independent pathways with differences in their requirement on ubiquitin (Ub), namely the PTEN-induced kinase 1 (PINK1)/Parkin pathway and receptormediated pathway (32,33). PINK1/Parkin pathway is an Ubdependent pathway mediated by two key proteins: a) PINK1, a mitochondrial serine/threonine kinase, and b) an E3 ligase termed Parkin, a signal amplifier in response to PINK1 activation (34). In regular mitochondrial functioning, cytosolic PINK1 tagged with a mitochondrial target sequence (MTS) translocates to the IMM by specific outer and inner membrane-associated translocases, TOM, and TIM, respectively. PINK1 is degraded through proteolysis in a process comprising the elimination of MTS by mitochondrial processing protease (MPP) and cleavage by presenilin-associated rhomboid-like protease (PARL) (27). However, the loss of membrane potential (DYm) in damaged mitochondria decreases the activity of TOM and TIM leading to the stabilization of PINK1 on the OMM (30,(35)(36)(37)(38). PINK1 and Parkin work synchronously to facilitate Ub-tagging of damaged mitochondrial membranes. Consequently, the dysfunctional mitochondria are engulfed by a phagophore leading to the formation of a mitophagosome that ultimately transports it to a lysosome. 
Mitophagy can also occur in a receptor-mediated pathway that includes the receptors on the OMM and IMM, including BNIP3, NIX, FUNDC1, PHB-2, and others (Figure 4). Some viruses (HBV, HCV, NDV, measles) shift the mitochondrial dynamics towards fission and mitophagy to favor viral replication and reduce overall mitochondrial mass to lessen the host antiviral response (39)(40)(41)(42). However, the study of the functional involvement of mitophagy in the antiviral innate immune response is still in its infancy but holds significant promise for identifying a potential antiviral therapeutic target.

Impact of Emerging and Re-Emerging RNA Viruses on Mitochondrial Dynamics

In the following section, we will summarize and discuss the impact of mitochondrial dynamics on antiviral response and viral replication during infection caused by Dengue virus, Zika virus, Hepatitis C virus, and SARS-CoV-2 (Figure 5 and Table 1).

Dengue Virus

Dengue virus (DENV) is an arthropod-borne RNA virus with a positive-sense single-stranded RNA genome. DENV belongs to the genus Flavivirus and is responsible for epidemics in tropical and sub-tropical regions around the globe, with an estimated 100 million symptomatic cases per year (43). The viral genome codes for three structural proteins - Capsid (C), Envelope (E), and Pre-membrane (PrM) - and seven non-structural proteins, NS1, 2A, 2B, 3, 4A, 4B, and 5, aiding in viral replication. DENV's effect on mitochondrial dynamics has been associated with an increase in antiviral immune evasion (19,20,44,45), with contradicting studies demonstrating its effect on mitochondrial morphology. Yu et al. showed that DENV NS2B3 protein partially cleaves MFN1 and MFN2, attenuating the interferon responses and resulting in increased viral replication and cell death (18)(19)(20). However, recent studies demonstrate that DENV NS4B and NS3 proteins enhance mitochondrial fusion along with a reduction in mitochondrial fission via degradation of total- and p616-Drp1. The induction of mitochondrial fusion degrades the integrity of MAMs, the sites of ER-mitochondria interaction, dampening RIG-I-dependent activation of the IFN response and thereby promoting DENV replication. Conversely, these findings were corroborated by knocking down Mfn2, which led to mitochondrial fragmentation, increased production of IFN-λ1, and impaired DENV replication (19). Moreover, it was reported that the induction of mitochondrial fission and subsequent fragmentation with the use of a potent mitochondrial uncoupling reagent, carbonyl cyanide chlorophenylhydrazone (CCCP), or via overexpression of activated Drp1, led to a reduction in viral replication, suggesting that mitochondrial elongation is beneficial for DENV replication (20). Furthermore, even in mosquito cells, DENV infection increased mitochondrial fusion by increasing MFN levels, with no alteration of Drp1 levels (46).

Zika Virus

(47)(48)(49)(50)(51)(52). Currently, there are no vaccines or specific antiviral drugs available to treat ZIKV diseases. ZIKV NS2B3 and NS3 proteins were shown to downregulate the expression of MAVS, IFN (specifically IFN-β), and ISGs (53). ZIKV NS3 protein prevents the transport of RIG-I and MDA5 to the mitochondria by binding to the 14-3-3 binding motif of MAVS, thereby inhibiting the RLR signaling pathway (54). However, there are contradicting reports on the effect of mitochondrial dynamics on antiviral immunity against ZIKV. ZIKV infection caused mitochondrial elongation, which was enhanced by the knockdown of Drp1 (19,55).
However, in human retinal pigment epithelial cells, ZIKV infection increased mitochondrial fission (56). On the other hand, a recent study showed that ZIKV infection in astrocytes leads to ROS imbalance, mitochondrial functional defects, and DNA breakage, leading to neurological disorders without any effect on mitochondrial morphology (57). ZIKV NS1 protein triggered abnormal mitochondrial fragmentation, and a decrease in MFN2 levels contributed to ZIKV-induced cell death in neuronal cells (58).

Hepatitis C Virus

Hepatitis C virus (HCV) is an important human pathogen belonging to the family Flaviviridae with a single-stranded RNA genome. Approximately 71 million people are clinically infected with HCV, resulting in nearly 400,000 deaths annually due to liver cirrhosis and hepatocellular carcinoma (HCC) (59). The HCV genome encodes ten proteins: four structural proteins (C, E1, E2, and p7) along with six non-structural proteins, NS2, 3, 4A, 4B, 5A, and 5B (60). The transmission of HCV is primarily via intravenous drug use, blood transfusions, and unsterilized medical equipment. There are multiple candidates for HCV prophylactic vaccines; however, none are available for use (61). A highly effective antiviral drug, Daclatasvir, with a cure rate of 95%, is available to date (62). HCV NS3/4A protease cleaves the MAVS protein and inhibits the formation of the MAVS signalosome, leading to diminished immune response and IFN production (63). HCV infection promotes mitochondrial fission and mitophagy to prevent the spread of virus-induced mitochondrial damage. This allows the maintenance of an adequate cellular environment for viral dissemination and the prevention of apoptosis. HCV NS5A triggers mitochondrial fragmentation, loss of mitochondrial membrane potential, and Parkin translocation to the mitochondria, leading to mitophagy (64). Furthermore, NS5A protein inhibits the activity of electron transport chain (ETC) enzyme complex I, leading to increased mitochondrial calcium uptake, mitochondrial permeability, and ROS production (41,64). HCV stimulates the synthesis of the Ub-dependent proteins PINK1 and Parkin and triggers their translocation to the IMM and subsequent mitophagy. The induced mitophagy can enhance HCV-regulated inhibition of oxidative phosphorylation (40). The HCV core (C) protein inhibits mitophagy by sequestering Parkin (65). The underlying mechanism of how HCV and its core protein mediate these effects remains to be characterized. HBV/HCV alters mitochondrial dynamics to enhance mitochondrial fission and mitophagy and to keep mitochondrial injury in check, thereby contributing to persistent HCV infection (41). HBV/HCV-induced mitophagy leads to attenuation of IFN signaling, whereby the increased Parkin-MAVS interaction cripples innate immunity (39)(40)(41). Interestingly, Kim et al. have shown a promising protective role of Ginsenoside Rg3 (G-Rg3) treatment against HCV-induced mitophagy, which follows mitochondrial fission (66). The role of mitophagy in regulating flavivirus infection has not been studied in depth, and the functional involvement of various flavivirus proteins in inducing mitophagy would aid in a better understanding of the mechanisms at large.

SARS-CoV-2

SARS-CoV-2 infection is associated with altered mitochondrial dynamics resulting in oxidative stress, proinflammatory cytokine production, and cell death. Mitochondria in SARS-CoV-2-infected cells are significantly displaced and arranged around the dsRNA regions in the cytoplasm.
The intra-cristal space, as well as the matrix, is expanded leading to thinner mitochondria (73). The virulence factors ORF9b and dsRNA of SARS-CoV as well as SARS-CoV-2 localize in the mitochondria and targets the MAVS signalosome, degrading the TRAF3 and TRAF6 signaling molecules, thereby hampering the antiviral response (74,75). SARS-CoV ORF9b trigger degradation of Drp1 leading to mitochondrial fusion limiting the host cell IFN response against the virus (21). Similarly, SARS-CoV-2 triggers inhibition of mitochondrial fission to facilitate its replication. Protein-protein interaction studies have indicated that SARS-CoV-2 ORF9b interacts with TOMM70, a mitochondrial import receptor that plays a critical role in modulating interferon response (71,76). ORF9b localizes to mitochondria and causes mitochondrial elongation by triggering ubiquitination and proteasomal degradation of Drp1, thereby inhibiting fission. ORF9b also targets the MAVS signalosome by usurping PCBP2 and AIP4 to trigger the degradation of MAVS, TRAF3, and TRAF6, thereby limiting the host antiviral interferon response (74,77). In addition, SARS-CoV-2 NSP13 and 9C protein may also be involved in altering the innate immune response by regulation of MAVS signal transduction (71). The SARS-CoV NSP2 protein interacts with PHB, PHB2, and STOML2 while SARS-CoV-2 ORF3b interacts with STOML2 to regulate mitochondrial homeostasis, mitophagy, and mitochondrial fusion and finally alter the innate immune response of the host (77)(78)(79). For SARS-CoV-2, the ORF9b protein induces autophagy by interaction with Prohibitins (PHBs). One of the SARS-CoV-2 viral proteins ORF3a includes a 20nt base sequence, which could target the host USP30 transcript, a mitochondrial deubiquitinase involved in mitochondrial homeostasis, and mitophagy (71,80). The complexity of SARS-CoV-2 infection has increased due to evasion from vaccine acquired immunity and the evolution of numerous viral variants sweeping the world. The study of various mutants on host immunity and the organelles should be given importance and kept on track for therapeutic intervention. CONCLUSIONS Mitochondria are a network of dynamic organelles with recurrent cycles of fission and fusion. These processes help in intermixing, content distribution, maintenance of energy homeostasis, and mitochondrial functional capacity. Mitochondrial fission and fusion constitute a major part of mitochondrial dynamics while mitochondrial quality control is regulated by mitophagy (81). Amongst the flaviviruses, the alteration of mitochondrial dynamics has been studied in HCV, DENV, and ZIKV while it remains unknown for other emerging flaviviruses such as WNV, JEV, and YFV ( Figure 5). A few of the viruses (HCV, Influenza A) and their viral proteins induce the cleavage of MAVS from mitochondria, thereby reducing their ability to induce interferon response (82)(83)(84). The viruses (HCV, ASFV, HIV-1) also alter the intracellular distribution of mitochondria either by concentrating the mitochondria near the viral factories to meet the energy demand during viral replication or by cordoning off the mitochondria within the cytoplasm to prevent the release of the mediators of apoptosis (8). These cellular functions are performed to provide energy for viral replication and release of progeny virion. However, mechanisms regulating mitochondrial dynamics during flavivirus, and SARS-CoV-2 infection have not been studied to date. 
Interestingly, intracellular calcium concentration also regulates mitochondrial dynamics since the calcium-dependent phosphatase calcineurin dephosphorylates Drp1, facilitating the recruitment of Drp1 to the mitochondria and the consequent mitochondrial fission (85). The involvement of calcium channels and the variation in the concentration of calcium ions has not been studied in the process of mitochondrial fission with flaviviruses and SARS-CoV-2, an interesting area to be explored in detail. Since mitochondria are the source of energy and play an important role in antiviral immunity, the damage to mitochondrial DNA may help in evading the mitochondrial antiviral immune response (86). Indeed, several viruses (HSV-1, HCV, EBV, HIV) degrade host mitochondrial DNA (mtDNA) to augment their genome replication (86)(87)(88)(89)(90)(91). Also, studies have reported the enrichment of SARS-CoV, and SARS-CoV-2 viral RNA in mitochondria and nucleolus, implicating their role in regulating the viral life cycle, ranging from virion assembly to disruption of host-mitochondrial function (92,93). Interestingly, the viral ORFs can release mtDNA in the cytoplasm and activate the inflammasome pathway, thereby suppressing innate and adaptive immunity (94). However, there is a lack of studies on the effect of flaviviruses on mtDNA, which would shed light on its role in mitochondrial dynamics and antiviral immunity. Given the importance of mitochondrial dynamics in various cellular processes, pharmacological modulators of mitochondrial dynamics have been employed in combination with direct-acting antivirals (DAAs) to combat viral infection. Several therapies (Tenofovir, Zalcitabine, and Didanosine) against human immunodeficiency virus (HIV) infection exert antiviral activity by modulating mitochondrial function (95,96). Among RNA viruses, mitochondrial hyper-fusion drug Mito-C and 8-O-(E-p-methoxycinnamoyl) harpagide (MCH), have been reported to possess antiviral activity against influenza virus by influencing mitochondrial dynamics (97,98). However, further studies are needed to investigate their effects against other RNA viruses including flaviviruses and SARS-CoV-2. A notable DAA, Sofosbuvir can competitively block the HCV NS5B polymerase and effectively inhibit HCV-RNA synthesis (99). While Sofosbuvir has proven efficacious, it has also been shown to destabilize mitochondrial membrane potential and further induce mitochondrial fission. However, a novel ginsenoside (G-Rg3) has been shown to inhibit HCVinduced abnormal mitochondrial fission and stabilize mitochondrial membrane potential, further potentiating the therapeutic effect of Sofosbuvir (66). As previously stated, it has been shown that SARS-CoV-2 alters mitochondrial dynamics to diminish the host immune response. Perhaps the addition of a pharmacological agent such as G-Rg3 to aid in the stabilization of mitochondrial dynamics may prove beneficial in the acute treatment of SARS-CoV-2 and other RNA viruses and warrants further investigation. While these recent findings have allowed us to identify potential therapeutic targets, further studies are needed to decipher how these viruses alter mitochondria. Moreover, elucidation of the role of emerging flavivirus structural and nonstructural protein involvement in regulating mitochondrial dynamics could provide therapeutic advances with the potential to reduce the viral disease burden on the human population. 
These studies aid in developing therapeutic approaches in the absence of a vaccine candidate against several RNA viruses and their emerging variants of concern.

FUTURE DIRECTIONS

a. Studies related to mitochondrial dynamics in RNA viruses of public health concern, including JEV, WNV, and YFV.
b. Interplay of intracellular calcium in regulating mitochondrial dynamics during RNA viral infections.
c. The interaction of RNA virus proteins with mtDNA in regulating mitochondrial dynamics and antiviral response.
d. Therapeutic strategies to block viral replication in host cells by regulating mitochondrial dynamics.
e. Effect of chronic or long-term viral infection (e.g., Long COVID) on mitochondria.

AUTHOR CONTRIBUTIONS

AK conceived the idea, and SS and KD wrote the first draft. AK and SS revised the manuscript to final form. All authors read and approved the manuscript.

FUNDING

Research in our laboratory is supported in part by National Institutes of Health (NIH) grants (R21AI135583, R01EY026964, and R01 EY027381 to A.K.), NIH Core Grant P30EY004068 (to Linda D. Hazlett), and an unrestricted grant from Research to Prevent Blindness Inc. (to Kresge Eye Institute, Wayne State University).
4,256
2022-06-16T00:00:00.000
[ "Biology" ]
Integral equations on compact CR manifolds Assume that $M$ is a compact CR manifold without boundary whose CR Yamabe invariant $\mathcal{Y}(M)$ is positive. Here, we study a class of sharp Hardy-Littlewood-Sobolev inequalities of the form
\begin{equation*}
\Bigl| \int_M\int_M [G_\xi^\theta(\eta)]^{\frac{Q-\alpha}{Q-2}} f(\xi) g(\eta)\, dV_\theta(\xi)\, dV_\theta(\eta) \Bigr| \leq \mathcal{Y}_\alpha(M) \|f\|_{L^{\frac{2Q}{Q+\alpha}}(M)} \|g\|_{L^{\frac{2Q}{Q+\alpha}}(M)},
\end{equation*}
where $G_\xi^\theta(\eta)$ is the Green function of the CR conformal Laplacian $\mathcal{L}_\theta = b_n\Delta_b+R$, $\mathcal{Y}_\alpha(M)$ is the sharp constant, $\Delta_b$ is the Sublaplacian and $R$ is the Tanaka-Webster scalar curvature. For the diagonal case $f = g$, we prove that $\mathcal{Y}_\alpha(M)\geq \mathcal{Y}_\alpha(\mathbb{S}^{2n+1})$ (the unit complex sphere of $\mathbb{C}^{n+1}$) and that $\mathcal{Y}_\alpha(M)$ can be attained if $\mathcal{Y}_\alpha(M)> \mathcal{Y}_\alpha(\mathbb{S}^{2n+1})$. Thus, we obtain the existence of solutions of the Euler-Lagrange equation
\begin{equation}
\varphi^{\frac{Q-\alpha}{Q+\alpha}}(\xi) = \int_M [G_\xi^\theta(\eta)]^{\frac{Q-\alpha}{Q-2}}\varphi(\eta)\, dV_\theta, \quad 0
\end{equation}
Moreover, we prove that the solution of (1) is in $\Gamma^\alpha(M)$. In particular, if $\alpha = 2$, the previous extremal problem is closely related to the CR Yamabe problem. Hence, we can study the CR Yamabe problem by integral equations.

1. Introduction. CR geometry, the abstract model of real hypersurfaces in complex manifolds, has attracted much attention in the past decades. Noticing that there is a far-reaching analogy between conformal and CR geometry, such as model space, scalar curvature, Sublaplacian and Yamabe equation, many interesting and profound results on CR geometry have been obtained; see [2, 5, 6, 8-14, 16, 18-24, 27, 28, 34, 35] and the references therein. Inspired by the ideas of [7,16,17,36], we want to study the curvature problem of CR geometry from the point of view of integral curvature equations. The notation used below is introduced in Section 2. Let $(M, J, \theta)$ be a compact pseudohermitian manifold without boundary. Under the transformation $\tilde\theta = \varphi^{\frac{4}{Q-2}}\theta$ with $\varphi \in C^\infty(M)$ and $\varphi > 0$, the Tanaka-Webster scalar curvatures $R$ and $\tilde R$, corresponding to $\theta$ and $\tilde\theta$ respectively, satisfy the conformal transformation law (1.1) below, where $\mathcal{L}_\theta = b_n \Delta_b + R$ is the CR conformal Laplacian related to $\theta$ and $b_n = 2 + \frac{2}{n}$.
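The displayed equation (1.1) referred to above appears to have been lost in extraction. A standard form of the CR conformal transformation law, written in the notation of the surrounding text (with homogeneous dimension $Q = 2n+2$), is reproduced below as a hedged reconstruction rather than a quotation of the authors' display:

```latex
% Reconstruction of the conformal transformation law (1.1); the normalization
% follows the usual CR Yamabe convention and is assumed, not quoted.
\begin{equation}
  \mathcal{L}_\theta \varphi \;=\; b_n \Delta_b \varphi + R\,\varphi
  \;=\; \tilde{R}\,\varphi^{\frac{Q+2}{Q-2}},
  \qquad b_n = 2 + \tfrac{2}{n}, \quad Q = 2n+2.
  \tag{1.1}
\end{equation}
```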
For a given constant curvatureR, the existence of (1.1) is known as CR Yamabe (1.5) As pointed by Zhu in [36], on S n integral curvature equations are equivalent to the classical curvature quation if α is strictly less than dimension; while for the case α strictly greater than dimension, they are not equivalent and integral curvature equation has some advantages. So, it is interesting and valuable to study the integral curvature equation (1.5). which is closely related to a class of Hardy-Littlewood-Sobolev inequalities with Namely, for any f, g ∈ L 2Q/Q+α (M ) with 0 < α < Q, there exists some positive constant C(α, M ) such that . (1.7) In fact, by the parametrix method, we know by [11] holds by a similar argument with Theorem 15.11 of [11]. In this paper, we will mainly devoted to study the extremal problem (1.6) by Hardy-Littlewood-Sobolev inequalities and will prove the following results. Because of the hypoellipticity of operator L (in fact L satisfies the Hörmander condition [18]), we know that the Green function G θ ξ (η) is C ∞ if ξ = η. Moreover, using CR normal coordinates at ξ and the classical method of parametrix, we can construct the Green function as (without loss of generality, we take the coefficient of singular part as one) where w is the regular part. Particular, if M is locally CR conformal flat, then w satisfies ∆ b w = 0 in some neighbourhood of ξ. Therefore, w is C ∞ in this neighbourhood because of the hypoellipticity of ∆ b . If n = 1, Cheng, Malchiodi and Yang [5] proved that w ∈ C 1,γ (M ) for any γ ∈ (0, 1). In the sequel, we always assume that w(ξ, η) ∈ C 1 (M × M ). Then, we can rewrite G ξ (η) as By complex linearity, we can extend L θ to CH(M ) and induce a hermitian form on T 1,0 as It is easy to see that Levi form is CR invariant. Namely, if θ is replaced byθ = f θ, L θ changes conformally by Lθ = f L θ . We say M is nondegenerate if the Levi form is nondegenerate at every point, and say M is strictly pseudoconvex if the form is positive definite everywhere. In this paper, we always assume that M is strictly pseudoconvex. Based on the Levi form L θ , we can take a local unitary frame {T α : α = 1, · · · , n} for T 1,0 (M ). Then, there is a natural second order differential operator, namely the Sublaplacian ∆ b , which is defined on the function u as where R is the Tanaka-Webster scalar curvatures and b n = 2Q Q−2 . Take u = φ, we have the prescribed curvature equation (1.1). Furthermore, for given constant curvatureR, the existence of (1.1) is known as CR Yamabe problem, which was introduced by Jerison and Lee, see [19,20]. Adopting the above notations, we can rewrite the sharp Hardy-Littlewood-Sobolev inequalities on H n and S 2n+1 (see Frank and Lieb's result [12]) as Theorem 2.1 (Sharp HLS inequality on H n ). For 0 < α < Q and p = 2Q Q+α . Then for any f, g ∈ L p (H n ), where And equality holds if and only if for some c 1 , c 2 ∈ C, r > 0 and a ∈ H n (unless f ≡ 0 or g ≡ 0). Here H is defined as 2.3. Folland-Stein normal coordinates (see [11,20]). On some open set V ⊂ M , take a set of pseudohermitian frame {W 1 , · · · , W n }. Then, {W i , W i , T, i = 1, · · · , n} forms a local frame, where T is determined by θ(T ) = 1 and dθ(T, X) = 0 for all X ∈ T M . As the Theorem 4.3 and Remark 4.4 of [20], we can summarize the result of Folland-Stein normal coordinates in the following theorem. Theorem 2.3 (Theorem 4.3 of [20] ). 
There is a neighbourhood Ω ⊂ M × M of the diagonal and a C ∞ mapping Θ : Ω → H n satisfying: in which O k E indicates an operator involving linear combinations of the indicated derivatives with coefficients in O k , and we have used ∂ z to denote any of the derivatives ∂/∂z j , ∂/∂z j . (The uniformity with respect to ξ of bounds on functions in O k is not stated explicitly [11], but follows immediately from the fact that the coefficients are C ∞ .) Theorem 2.4 (Remark 4.4 of [20]). Let T δ (z, t) = (δ −1 z, δ −2 t), K ⊂⊂ V , and let r be fixed. With the notation of Theorem 2.3 and B r = {u ∈ H n : |u| ≤ r}, then T δ • Θ ξ (Ω ξ ) ⊃ B r for sufficiently small δ and all ξ ∈ K. Moreover, for ξ ∈ K and u ∈ B r , . (Here O k may depeng also on δ, but its derivatives are bounded by multiplies of the frame constants, uniformly as Fix the local coordinates of U by u = (z, t) = Θ ξ for some given point ξ ∈ U . Then, for 0 < β < 1, the standard Hölder space Λ β (U ) is [20]). Γ β ⊂ Λ β/2 (loc) for 0 < β < ∞ and there exists some positive constant C such taht f Λ β/2 (U ) ≤ C f Γ β (U ) for any f ∈ C ∞ o (U ). Now for a compact strictly pseudoconvex pseudohermitian manifold M , choose a finite open covering U 1 , · · · , U m for which each U j has the properties of U above. Choose a C ∞ partition of unity φ i subordinate to this covering, and define Following, for convenience, denote p α = 2Q Q−α and q α = 2Q Q+α . 3. Estimation of the sharp constant. Proof. Since (G θ ξ (η)) Q−α Q−2 ∼ ρ(ξ, η) α−Q as ρ(ξ, η) → 0, then for any small enough δ > 0, there exists a neighbourhood V of the diagonal of M × M such that Recall that f (u) = H(u) is an extremal function to the sharp HLS inequality in Theorem 2.1, as well as its conformal equivalent class: Thus where B is a positive constant. Let Σ R = {u = (z, t) ∈ H n : |z| < R, |t| < R 2 } be a cylindrical set, where R is a fixed constant to be determined later, and take a test function g(u) ∈ L qα (H n ) as Then, With (3.3), we have For I 2 , by HLS inequality (2.7), we have Hence, for small enough , we have For any given point ξ ∈ M , there exists a neighbourhood V ξ ⊂ V such that Theorem 2.3 hold. So, choose R small enough such that Σ R ⊂ Θ ξ (V ξ ) and (Θ −1 Sending to 0 and then letting R, δ approach to zero, we obtain the estimate. Subcritical HLS inequalities and their extremal function. Proposition 4.1 (Young's inequality). Let X and Y are measurable spaces, and let the kernel function K : X × Y → R be a measuralble function satisfying where C is some positive constant and r ≥ 1. Then, for any f ∈ L p (Y ) with 1 − 1/r ≤ 1/p ≤ 1, the integral operator Proof. For the case r = 1, the result reduces to the case of Lemma 15.2 of [11]. where q > 1 and 1 q > 1 p − α Q . Moreover, operator A is compact for any q satisfying q > 1 and 1 q > 1 p − α Q , namely, for any bounded sequence {f j } +∞ j=1 ⊂ L p (M ), there exists a subsequence of {Af j } +∞ j=1 which converges in L q (M ). Proof. Obviously, it is sufficient to prove the compactness of the operator A with kernel K(ξ, η) = ρ(ξ, η) α−Q . Define the extremal problem as Obviously, we know that D M,p,q < +∞ because of Proposition 4.2. Moreover, we have where the sharp constant can be attained by some nonnegative function f p ∈ L p (M ) satisfying f p L p (M ) = 1 and Remark 4.5. A direct computation deduces that the extremal function f p satisfies the Euler-Lagrange equation Denoted by g(ξ) = f p−1 (ξ). Then (4.6) reduces to (4.7) where q = p p−1 is the conjugate exponent of p. 
By a classical routine, we have the following regularity result. Proposition 4.6 (Γ α regularity). If g(ξ) ∈ L p (M ) satisfies (4.7), then g ∈ Γ α (M ). The proof can be completed by the following two Lemmas. Proof. Because of the compactness of M , it is sufficient to prove that, for any ξ ∈ M , Lemma 4.7 holds on the neighbourhood V ξ . Hence, without loss of generality, we restrict variable ξ on a neighbourhood V ξ0 for some point ξ 0 ∈ M . Using the Folland-Stein normal coordinates, we can complete the proof by a similar process of the second part of Lemma 4.3 of [16]. For concise, we omit the details. Following, we will investigate the limitation of the sequence of solutions {f p } ⊂ Γ α (M ) of (4.6), and then complete the proof of Theorem 5.1 by compactness. First, it is routine to prove Proof. By Lemma 4.7, it is sufficient to prove that {f p } 2Q Q+α <p<2 is uniformly bounded in L ∞ (M ). Following, we will prove it by contradiction. Suppose not. Then f p (ξ p ) → +∞ as p → 2Q Q+α + , where f p (ξ p ) = max ξ∈M f p (ξ). Let Θ ξp be normal coordinates. We can assume that there is a fixed neighbourhood U = B r (0) of the origin in H n contained in the image of Θ ξp for all p, and for each p we will use Θ ξp to identify U with a neighbourhood of ξ p with coordinates (z, t) = Θ ξp .
3,166.8
2021-01-01T00:00:00.000
[ "Computer Science", "Mathematics" ]
Close to threshold eta' meson production in proton-proton collisions at COSY-11 An intensive quest for the eta and eta' bound states is currently ongoing at both the theoretical and experimental levels, e.g. at COSY, ELSA, GSI, JINR, JPARC, LPI, and MAMI. These studies were already supported by data provided by the COSY-11 collaboration, including the determination of the total width of the eta' meson. In addition, the first rough estimation of the eta'-N interaction from the excitation function of the cross section for the pp-->pp eta' reaction was also performed. A recent precise measurement in this field from the COSY-11 experiment allows us to summarize the results on the eta' meson production cross section in proton-proton collisions at COSY-11.

Motivation

It is impossible to prepare a beam or target out of short-lived particles like the η or η′ mesons. Therefore their interaction with other hadrons cannot be investigated in the standard way via scattering experiments. However, production of these mesons close to the kinematical threshold, with low relative velocities with respect to the nucleons, gives a chance to study their interaction with nucleons. It may manifest itself as structures in meson-nucleon invariant mass distributions and as an enhancement in the excitation function with respect to predictions based on the assumption that the kinematically available phase space is homogeneously populated. Measurements of the η- and η′-nucleon and -nucleus systems may yield valuable new information about dynamical chiral and axial U(1) symmetry breaking in low energy QCD. The binding energies, meson-nucleon scattering lengths and in-medium masses of the η and η′ are sensitive to the flavour-singlet component in the mesons and hence to the non-perturbative glue associated with axial U(1) dynamics [2,3]. QCD-inspired models including confinement, chiral and axial U(1) dynamics yield a range of predictions for the η and η′ nucleon scattering lengths and binding in nuclei. The quark condensate is modified in the nucleus, which changes the properties of hadrons in the nuclear medium, and these medium modifications can be understood at the quark level through coupling of the scalar isoscalar σ (and also ω and ρ) mean fields in the nucleus to the light quarks in the hadron.

The COSY-11 experiment

The collision of a proton from the COSY stochastically cooled beam [32] with a hydrogen cluster target proton of COSY-11 [33] may result in the creation of an η′ meson. The ejected protons of the pp → ppη′ reaction are then separated from the circulating beam by the magnetic field, owing to their lower momenta, and are registered by the detection system consisting of drift chambers and scintillation counters. The reconstruction of the momentum vector of each registered particle is based on the measurement of the track direction by means of the drift chambers and the knowledge of the dipole magnetic field. Together with the independent determination of the particle velocity from the measured time of flight between the scintillator detectors, this provides particle identification. Knowledge of the momenta of both protons before and after the reaction allows one to calculate the mass of the unobserved particles. The number of reconstructed η′ mesons, together with a luminosity determination based on the cross section for elastically scattered pp events and the registered number of pp → pp events, allows for the determination of the pp → ppη′ cross section. Measurements of the total cross section for the pp → ppη′ reaction, together with theoretical excitation functions, are summarized in Figure 1.
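As an aside, the missing-mass technique mentioned above, identifying the unobserved meson from the four-momenta of the two registered protons, can be sketched as follows. This is a minimal illustration only; the beam momentum and proton momenta used here are invented example values, not COSY-11 settings.

```python
import numpy as np

PROTON_MASS = 0.938272  # GeV/c^2

def four_vector(p3, mass):
    """Return the four-momentum (E, px, py, pz) of a particle with 3-momentum p3 (GeV/c)."""
    p3 = np.asarray(p3, dtype=float)
    energy = np.sqrt(mass**2 + np.dot(p3, p3))
    return np.array([energy, *p3])

def missing_mass(beam_p, p1_p3, p2_p3):
    """Missing mass of pp -> p p X from the two reconstructed proton momenta (GeV/c)."""
    beam = four_vector([0.0, 0.0, beam_p], PROTON_MASS)  # beam proton along z
    target = four_vector([0.0, 0.0, 0.0], PROTON_MASS)   # target proton at rest
    p1 = four_vector(p1_p3, PROTON_MASS)
    p2 = four_vector(p2_p3, PROTON_MASS)
    e, px, py, pz = beam + target - p1 - p2               # unobserved four-momentum
    m2 = e**2 - (px**2 + py**2 + pz**2)
    return np.sqrt(max(m2, 0.0))

# Illustrative values only: a peak in the missing-mass spectrum near
# 0.958 GeV/c^2 would indicate eta' production.
print(missing_mass(beam_p=3.35, p1_p3=[0.10, 0.00, 1.20], p2_p3=[-0.10, 0.05, 1.10]))
```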
COSY-11 data are gathered in Table 1.

Conclusions

The η′ production cross sections in proton-proton collisions provided by the COSY-11 collaboration over the last 16 years [24-27,31], together with the recent precise determination of the η′-proton scattering length in free space [31], constitute a significant contribution to the study of the η′ properties and to the search for the η′ bound state [42].

Figure 1. The total cross sections for the pp → ppη′ reaction as a function of the excess energy Q. Experimental data, with the statistical and systematic errors separated by dashes, are marked as solid circles for the COSY-11 experiment [31,34], as open squares for the SPESIII measurements [35] and as an open triangle for the DISTO experiment [36]. In addition, the superimposed lines show the results of fits parameterizing the pp-FSI enhancement factor as in Refs. [37][38][39] (thick dashed line), the inverse of the squared Jost function [40] (thin solid line) and the Niskanen-Goldberger-Watson model [41] (thin dashed line), with the η′-proton scattering length as a free parameter. The thick dashed line is shown only in the range of applicability of the formula used for the enhancement factor [37]. For comparison, the thick solid line shows the result of the fit obtained for the whole Q range with the pp-FSI parametrization from Ref. [40].
1,090.6
2014-11-24T00:00:00.000
[ "Physics" ]
Nerve growth factor from Chinese cobra venom stimulates chondrogenic differentiation of mesenchymal stem cells Growth factors such as transforming growth factor beta1 (TGF-β1), have critical roles in the regulation of the chondrogenic differentiation of mesenchymal stem cells (MSCs), which promote cartilage repair. However, the clinical applications of the traditional growth factors are limited by their high cost, functional heterogeneity and unpredictable effects, such as cyst formation. It may be advantageous for cartilage regeneration to identify a low-cost substitute with greater chondral specificity and easy accessibility. As a neuropeptide, nerve growth factor (NGF) was involved in cartilage metabolism and NGF is hypothesized to mediate the chondrogenic differentiation of MSCs. We isolated NGF from Chinese cobra venom using a three-step procedure that we had improved upon from previous studies, and investigated the chondrogenic potential of NGF on bone marrow MSCs (BMSCs) both in vitro and in vivo. The results showed that NGF greatly upregulated the expression of cartilage-specific markers. When applied to cartilage repair for 4, 8 and 12 weeks, NGF-treated BMSCs have greater therapeutic effect than untreated BMSCs. Although inferior to TGF-β1 regarding its chondrogenic potential, NGF showed considerably lower expression of collagen type I, which is a fibrocartilage marker, and RUNX2, which is critical for terminal chondrocyte differentiation than TGF-β1, indicating its chondral specificity. Interestingly, NGF rarely induced BMSCs to differentiate into a neuronal phenotype, which may be due to the presence of other chondrogenic supplements. Furthermore, the underlying mechanism revealed that NGF-mediated chondrogenesis may be associated with the activation of PI3K/AKT and MAPK/ERK signaling pathways via the specific receptor of NGF, TrkA. In addition, NGF is easily accessed because of the abundance and low price of cobra venom, as well as the simplified methods for separation and purification. This study was the first to demonstrate the chondrogenic potential of NGF, which may provide a reference for cartilage regeneration in the clinic. Adult human mesenchymal stem cells (MSCs) attracted the most attention for cartilage tissue engineering studies, because of their high proliferation rate, easy availability and capacity to differentiate into multiple cell types. 1 For MSCbased therapy, the strategies involve the use of growth factors and 3D scaffold systems. Growth factors have critical roles of inducers that regulate the chondrogenic differentiation of MSCs. However, traditional growth factors such as TGF-β1 fall short in meeting the needs of clinical applications because they are limited by their high cost, rapid degradation and ready loss of activity. Moreover, the versatility and functional heterogeneity of growth factors may lead to osteophyte formation instead of chondrogenesis during cartilage regeneration. [2][3][4][5][6] Therefore, low-cost growth factors with more specific effects on chondrogenesis may be advantageous. Cartilage metabolism is controlled by many locally acting cytokines and growth factors, which may derive from the surrounding nerve terminals. The crucial effects of sensory and sympathetic neurotransmitters on proper limb formation during embryonic skeletal growth have been well documented. 7 This is also confirmed by the detection of neuropeptide containing nerve fibers in the interior of the cartilage and periosteum. 
8 Clinical observations suggest that nerve fibers are important for the regulation of skeletal metabolism. 9 Patients with neurological disorders exhibit skeletal pathophysiology. 7,10 Nel-like molecule-1 (Nell-1), a growth factor that is strongly expressed in neural tissue, was shown to promote chondrocyte proliferation, ECM deposition 11,12 and regulate chondrogenic differentiation. 13 Nerve growth factor (NGF) is a peptide neurotrophin (NT) that could accelerate the wound healing process, 14 regulate the growth of cells from tissues other than nerves. [15][16][17][18] The influences of neurotrophic factors on adult mammalian spinal cords were also studied. 19 However, NGF was rarely studied in cartilage regeneration, and the effect of NGF on chondrogenesis has not been investigated. Emerging evidences showed that NGF, either alone or in combination with BMP and NCP, shows an effect on the quantity of cartilage developed. 20 NGF and its two receptors, tropomyosin kinase receptor A (TrkA) and p75 pan-NT involved in chondrogenic differentiation. 27,28 Based on these findings, the neuropeptide NGF is likely to have major implications for chondrogenic differentiation and cartilage regeneration. Another advantage of NGF is its easy accessibility. Unlike TGF-β1, which has limited sources, NGF exists in many animals and can be obtained from the submaxillary salivary gland of male mice and snake venom. 29 Most NGFs have been isolated and characterized from snake venoms that are considered to be a rich source of NGF. 30 Venoms from the Chinese cobra (Naja atra) is abundant in southern China, which potentially lower the cost. [31][32][33] Previously, we isolated NGF from Chinese cobra venom by gel filtration and ion exchange chromatography. 33,34 The simplified two-step method is useful, but the extracts were not pure and were contaminated with other proteins. A more effective method for obtaining a large amount of purified NGF that can be easily industrialized should be developed. In this study, we extracted NGF from Chinese cobra venom by simplified three-step chromatography improved upon our previous studies. Further, the potential effects of NGF on the chondrogenic differentiation of bone marrow MSCs (BMSCs) and cartilage regeneration were investigated in vitro and in vivo. The underlying mechanism was also explored. Our findings suggested that NGF affects chondrogenesis and cartilage reconstitution, providing reference for clinical application. Results Preparation of NGF. The procedure for purifying NGF from Chinese cobra venom was shown in Figure 1a. At each purification step, only the fraction with NGF bioactivity was collected. As shown in the Sephadex G-75 size exclusion chromatogram (Figure 1b), NGF was eluted in the fraction containing peak 5 and was accompanied with other components. The next step is ion exchange on CM Sepharose CL-6B, as shown in Figure 1c. The fraction containing peak 2, which displayed NGF bioactivity was collected, which was analyzed with high-performance liquid chromatography (HPLC) on a TSK-G2000-SW to purify and test the purity of NGF (Figure 1d). The purity was approximately 99%. On sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), the molecular weight of pure NGF is approximately 25 kD (Figure 1e). The yield was high, resulting in approximately 12 mg of NGF from 2.0 g of venom. To verify the protein, western blot analysis with a mouse anti-viper antibody against NGF was performed. In Figure 1f, the band at 25 kD was confirmed as NGF. 
The bioactivity of NGF was tested using neurite outgrowth of chicken dorsal root ganglia (Figures 1g and h). Neurite length was significantly increased in the NGF-supplemented medium, accompanied by an increased axonal arbor density. The effects of NGF on monolayer cultures of BMSCs in vitro Cell cytotoxicity Cell cytotoxicity and cell viability. We used the MTT assay to evaluate whether NGF affected the growth of BMSCs cultured in vitro and to select optimal concentrations that better supported BMSC growth. As shown in Figure 2a, NGF at concentrations of 1.5, 3 and 6 μg/ml, NGF increased BMSC proliferation approximately 4.2%, 8.2% and 4.0%, respectively, compared with the untreated BMSCs. These concentrations were chosen for further investigation. Live-dead staining was used to determine the effect of NGF on cell viability ( Figure 2b). The majority of cells in all groups were stained green, indicating a good viability of cells after 21 days of culture. More live cells and fewer apoptotic cells (shown in red) were present in the NGF groups than in the control group. TGF-β1 treatment was comparable with the N2 group, both of which showed improved cell viability compared with the N1 and N3 groups. Cell proliferation and biochemical assay: To study the effect of NGF on BMSC proliferation, the determination of DNA content and HE staining were performed (Figure 2c). The DNA content increased in a time-dependent manner in all of the groups. The NGF-treated groups showed a significantly higher DNA content than the control. Among the NGF-treated groups, the increase in the DNA content was most prominent in the N2 group, with the increases of 35.2%, 30.1% and 44.6% on days 7, 14 and 21, respectively. N2 was similar to TGF-β1. This result was also confirmed by HE staining (Figure 2d). Given the differences in cell number, the GAG content was normalized to DNA content to reveal any differences in the biosynthetic activity of the cells among all groups ( Figure 2e). The GAG content was the highest in TGF-β1 group at each time point (Po0.05). NGF induced a significant increase in GAG accumulation compared with the control except N1 at day 7. Among the NGF-treated groups, N2 elicited more GAG secretion than the other two. Gene expression and secretion of type I and II collagen: The expression of the ACAN, SOX9, COL2A1, COL1A1, RUNX2, ENO2, GDNF, BDNF and CNTF was detected by qRT-PCR (Figure 2f). After 14 days, the levels of cartilage-specific genes, including ACAN, SOX9 and COL2A1, were notably increased by NGF and TGF-β1 compared with the control. In particular, 3 μg/ml NGF induced the highest expression of ACAN, SOX9 and COL2A1 of all of the NGF groups. COL1A1 expression was also greatly increased by both NGF and TGF-β1. The expression of COL1A1 in the NGF groups (particularly the N2 group) was significantly lower than that in the TGF-β1 group. The expression of RUNX2, a key transcription factor associated with hypertrophy and osteoblast differentiation, was similar to that of COL1A1. The expression of markers for neural differentiation -ENO2, GDNF, BDNF and CNTF, were not induced by NGF as shown by lower level than the control. Immunohistochemical staining was used to detect the expression of collagen type I and II after the chondrogenic induction of BMSCs in vitro. Large areas of type II collagenpositive staining were observed in the NGF-treated groups, particularly in the N2 group, which approximates the TGF-β1 group (Figure 2g). 
In contrast to type II collagen, type I collagen, which is the marker of fibrocartilage, was more negatively stained in the NGF-treated groups than in TGF-β1 group (Figure 2h). Chondrogenic effects of NGF on 3D BMSCs cultures Cell viability, cytoskeletal morphology and GAG production: As shown in Figure 3a, live cells (green), which were generally spherical/oval in shape attached to and grew within the hydrogel in a time-dependent manner. More live cells were present in the BCT and BCN groups than control. Comparatively, BCT is better than BCN. The effect of NGF on cytoskeletal reorganization in 3D cultures of BMSCs at day 21 was investigated by staining with rhodamine-phalloidin and Hoechst 33258. As shown in Figure 3b, a small amount of polymerized actin was distributed in the control cells. In contrast, intensively polymerized actin was observed in both the BCN and BCT groups. Less actin with relatively weaker fluorescence was observed in the BCN group compared with the BCT group. Biochemical assays were used to quantify the DNA content and GAG production after 7, 14 and 21 days of culture. Both the calculation of live/dead cells ( Figure 3c) and the DNA content showed the increased cells with time in all groups ( Figure 3d). Compared with those in BC, the number of cells in the BCT and BCN groups increased prominently, with 1.23-and 1.17-fold at day 21, respectively. As shown in Figure 3e, GAG accumulation in the BCT and BCN groups was significantly increased compared with the BC group (Po0.05). Compared with the control, GAG accumulation in the BCN group increased by 9.2%, 45.3% and 53.0% at days 7, 14 and 21, respectively, which was slightly lower than the BCT groups of which increased by 12.3%, 51.7% and 59.0%. Histological and immunohistochemical findings: The chondrogenic effects of NGF on BMSCs in 3D culture were evaluated by histological and immunohistochemical staining of cartilage-specific matrices on days 7, 14 and 21. In both the BCN and BCT groups, more cells with the typical features of chondrocytes were embedded in lacuna structures compared with BC group (Figure 3f). Consistent with the GAG content (Figure 3e), safranin O staining indicated that more abundant GAGs were homogeneously distributed in the cells of both the BCN and BCT groups than control ( Figure 3g). In addition, stronger positive expression of type II collagen ( Figure 4a) and type I collagen (Figure 4b) was observed in the BCN and BCT groups than in the BC group. In comparison, expression of collagen type I and type II in BCT group was abundant than that in BCN groups. qRT-PCR analyses for gene expression in BMSCs seeded on collagen for the induction of chondrogenesis: The chondrogenic differentiation profile of BMSCs grown in 3D cultures was detected by assessing the mRNA expression levels of ACAN, SOX9, COL2A1, COL1A1, RUNX2, ENO2, GDNF, BDNF and CNTF after 7, 14 and 21 days of culture ( Figure 4c). The expression of cartilage-specific genes, including ACAN, SOX9 and COL2A1, were extensively upregulated in the BCT and BCN groups compared with the BC group. Comparatively, BCT stimulates higher expression of the cartilage-specific genes than BCN. However, lower levels of COL1A1 and RUNX2 were observed in the BCN group than in the BCT group. The expression of ENO2, GDNF, BDNF and CNTF, which are markers for neural differentiation, were much lower in both the BCN and BCT groups than in the control. These results were also confirmed by the protein expression levels of collagen I and collagen II ( Figure 4d). 
Therapeutic effect of NGF on cartilage defect Gross assessment: A cartilage defect model was created by drilling a 4 mm diameter hole in the patellar groove ( Figure 5a). Care was taken not to perforate through the cartilaginous layer. Then, the defect was filled with injectable collagen hydrogels seeded with BMSCs ( Figure 5b). After 4, 8 and 12 weeks of therapy, the engineered cartilage with part of the subchondral bone was harvested. At 4 weeks of repairing, defects were still grossly distinguishable from the surrounding cartilage tissue in all groups (Figure 5c). In the control, the defect was still evident at 8 weeks and newly formed tissue was not fully filled at 12 weeks. Neo-tissue was formed in the defects after 8 weeks of both the BCN and BCT groups. Twelve weeks postsurgery, glossy and smooth cartilage-like tissues were regenerated and well integrated with the surrounding tissues in both BCT and BCN groups. In order, BCT, BCN and BC exhibited decreased macroscopic scores at each time point (Figure 5d). Biomechanical testing: The compressive stiffness of the repaired tissues from the three groups was determined at 4, 8 and 12 weeks (Figure 5e). The BCN and BCT engineered cartilage was significantly stiffer than the control, with increases of 47.7% and 38.5%, respectively, at week 12. The BC engineered cartilage showed the lowest mechanical strength, indicating the formation of fibrous tissue or fibrocartilage. Histological observation: Upon histological observation after 4, 8 and 12 weeks of therapy, fibrous tissue with a loose and detached interface was observed in the defect of BC group (Figure 5g). In contrast, glossy and smoothly regenerating tissues gradually formed hyaline cartilage-like tissues similar to the surrounding normal cartilage in BCT and BCN groups. The results were also confirmed by the histological scores ( Figure 5f). Opposite to the modicum amount in BC groups, the production of GAG increases with time in neocartilages of both BCT and BCN groups, resulting in little difference with surrounding tissue after 12 weeks (Figure 6a). Immunohistochemical staining showed that much more positive staining of collagen type II was present in BCT and BCN groups than control. Comparatively, deeper staining of collagen type I was in BCT than in BCN group. Cartilage-related gene and protein expression: In agreement with the histological findings, the expression of ACAN, SOX9 and COL2A1, were significantly increased in the BCN and BCT engineered cartilage compared with the control (Figures 7a-d). Slightly lower COL2A1 expression and significantly decreased COL1A1 expression were observed in BCN compared with BCT. The expression of the collagen I and collagen II proteins also confirmed the qRT-PCR findings (Figure 6b and c). upregulated compared with the control, although lower than the BCT group (Figures 7h and j). The engineered cartilage in the BCN and BCT groups also exhibited upregulated expression of these proteins (Figures 7i and k). Discussion The yield of NGF in snake venoms is generally approximately 1% or less. 30 Bian et al. 35 used a two-step method to purify NGF, with the yield of 0.51%. In our previous study, we used two-step method by gel filtration and ion exchange columns to extract NGF. 33,34 Although the yield was 0.65%, the NGF was not so pure. 
In this study, we successfully isolated highly pure NGF from cobra venom with a yield of 0.6% and a purity of 99% by adding a TSK-G2000-SW chromatography step (Figure 1), which can efficiently separate proteins with molecular weights ranging from 5000 to 200 000 Da. 36,37 The improved three-step procedure for NGF extraction is easily accessed and is of low cost. Besides, the abundant and inexpensive cobra venom in Southern China greatly decreases the cost. In comparison, recombinant growth factors are much more expensive and complicated to prepare. Thus, highly pure NGF may be easily industrialized by this simplified procedure. Chondrogenic potential of NGF was confirmed both in vitro and in vivo, which has not been reported yet. In 2D and 3D cultures, the immunohistochemical and qRT-PCR analyses showed that the expression of cartilage-specific markers, including ACAN, SOX9 and COL2A1, was significantly upregulated in the NGF groups compared with the controls (Figures 2 and 4). The 3D cultures also displayed normal features of cartilage, with large numbers of round Figure 3) and the expression of cartilage-specific markers was markedly increased compared with the monolayer culture, which indicated that NGF and 3D scaffold exhibited a synergistic effect. When applied to cartilage repair, NGF also accelerated the healing process, as evidenced by the histological findings, qRT-PCR/WB analyses and biomechanical tests (Figures 5,6, and 7). Tissue engineering technique by only using stem cells and scaffold is useful for the repair of defect. Our study showed that in control groups (BC group), the cartilage defect was gradually repaired with the upregulation of collagen type II over time. However, the healing process is much longer and the therapeutic effect is less satisfiable than that with the assistance of growth factors. [38][39][40][41] Interestingly, NGF was superior to TGF-β1 in chondrogenic specificity as evidenced by considerably decreased expression of collagen type I (Figures 2 and 4), a fibrocartilage marker 42 and RUNX2 (Figures 2 and 4), a critical for chondrocyte terminal differentiation. This indicates that NGF can better prevent fibrogenic and hypertrophic differentiation to maintain the chondrocytic phenotype of the MSCs and the characteristic of hyaline cartilage may be better retained by NGF. It has been reported that accompanied with chondrogenesis and hypertrophic differentiation, 43 TGF-β1 is also implicated in osteogenic differentiation. 44,45 Thus, NGF may be somewhat preferred over TGF-β1 regarding its differentiation specificity, although a long-term investigation was needed. Most studies have shown the potential of NGF to induce the differentiation of stem cells along the neuronal lineage. [46][47][48] However, NGF induced the MSCs to differentiate into the chondrogenic lineage instead of the neuronal linage in this study, as evidenced by no neuronal cells and downregulated expression of neuronal-specific markers 49 (Figures 2 and 4). This may be a result of the elaborate environment that favors chondrogenesis, such as the high density of cells and the addition of other chondrogenic supplements, such as dexamethasone, ascorbate and ITS. In particular, NGF can greatly increase the expression of cartilage-specific genes and proteins in 3D cultures (Figure 4), which mimic the condensation of mesenchymal cells during chondrogenesis in embryonic development. 
The results indicated that NGF predominantly induced the stem cells to become chondrocytes rather than neurons in an environment favoring chondrogenesis.

The analysis of the molecular mechanisms revealed that several signaling pathways are involved in NGF-induced chondrogenesis, including the MAPK/ERK and PI3K/AKT signaling pathways. The PI3K/AKT signaling pathway has an important role in the physiological effects of NGF50 and in chondrocyte differentiation.51,52 Both our in vitro and in vivo results (Figure 7) showed that NGF stimulated PI3K and AKT phosphorylation during MSC differentiation, resulting in a decrease in the expression of collagen type I and RUNX2 (Figure 4). The results indicated that the PI3K/AKT signaling pathway is critical for NGF-induced chondrogenesis. Components of another important signaling pathway, MAPK, namely p38 MAPK and ERK1/2, are involved in chondrogenic differentiation during adult life.53-55 Here, we report that MAPK activity functionally contributes to NGF-induced chondrogenesis in MSCs, as shown by increased phosphorylation and activity of p38 and ERK1/2 both in vitro and in vivo (Figure 7). Thus, NGF may induce the chondrogenesis of MSCs through the MAPK/ERK and PI3K/AKT signaling pathways, similar to TGF-β1.

TrkA is a specific receptor of NGF and is involved in bone formation and healing.56 The binding of NGF stimulates the dimerization and autophosphorylation of TrkA, resulting in the activation of the PI3K and MAPK pathways.57 Signaling via p75NTR is believed to be related to cell apoptosis and growth arrest.58 The results of this study showed that TrkA expression in the BCN group was significantly higher than that in the BC group both in vitro and in vivo, whereas expression of p75NTR was downregulated (Figure 7), indicating that the chondrogenic effects of NGF on BMSCs may be mediated by activation of the TrkA receptor and inhibition of p75NTR.

Our rationale for introducing NGF into MSC cultures during chondrogenesis for cell-based therapy of cartilage defects was to determine whether we could improve elasticity and minimize dedifferentiation and hypertrophy. The results suggest that NGF triggers the chondrogenic differentiation of MSCs via interactions between NGF and the TrkA/p75NTR receptors, and these interactions subsequently activate downstream molecules such as PI3K and AKT. Consequently, the activated PI3K and AKT lead to decreased expression of markers of chondrocyte terminal differentiation (Figure 7). Thus, NGF may be a favorable substitute for traditional growth factors in chondrogenesis.

Materials and Methods

Preparation of NGF

Separation and purity of NGF: Crude cobra venom was sequentially separated on Sephadex G-75, CM Sepharose CL-6B and TSK-G2000-SW chromatography columns. Two grams of crude cobra venom was dissolved in 10 ml of buffer (1% HAc). After centrifugation, the soluble fraction was collected and loaded onto the Sephadex G-75 chromatography column. The mobile phase was 1% HAc, with a flow rate of 2 ml/min. A 10 ml fraction containing NGF activity was obtained and, after overnight dialysis, loaded onto the CM Sepharose CL-6B chromatography column. After equilibration with 0.05 M NaAc-HAc (pH 5) buffer, 0.05 M NaCl buffer was used as the mobile phase, with a flow rate of 1 ml/min. The fraction with NGF activity was collected and then separated by HPLC on the TSK-G2000-SW column.
In all, 0.1% (w/v) trifluoroacetic acid (TFA) containing 0.25 M NaCl was used as the mobile phase for linear gradient elution. The fraction with NGF activity was collected, dialyzed for desalination and lyophilized. The UV absorption of the proteins was monitored at 280 nm, with a sample volume of 15 μl.

The molecular weight and identification of NGF: The molecular weight of the chromatography-purified NGF was confirmed by SDS-PAGE. The gel was fixed with 12% trifluoroacetic acid and stained with Coomassie blue R-250.

Bioactivity of NGF: Neurite outgrowth from chick embryonic dorsal root ganglia was used to determine the biological activity of NGF. Culture dishes were precoated with rat tail collagen for 3 h at 37°C; dorsal root ganglia were then harvested from 8-day-old chick embryos and incubated with 100 ng/ml NGF for 18 h at 37°C with 5% CO2. Untreated ganglia were used as the control.

Animals. A total of 90 female New Zealand white rabbits (weighing 2.5-3 kg, 2 months old) were obtained from Guangxi Medical University, Nanning, China. The rabbits were housed individually at constant temperature and relative humidity (60%), with free access to a standard diet and water. All experiments were conducted in accordance with the standard guidelines approved by the Animal Care and Experiment Committee of Guangxi Medical University (protocol number: 2014-12-3).

Isolation and culture of BMSCs. BMSCs were harvested from bone marrow extracted from New Zealand rabbits. The rabbits were anesthetized with 5 mg/ml pentobarbital (30 mg/kg), and a sterile medullo-puncture needle was used to collect bone marrow from the bilateral femurs. A bone marrow mononuclear cell isolation kit (TBD2013CRA, Tian Jin Hao Yang Biological Manufacture Co., Ltd, Tianjin, China) was used for BMSC extraction. The isolated BMSCs were cultured in alpha-modified Eagle's medium (Gibco, Thermo Fisher Scientific, Waltham, MA, USA) containing 10% (v/v) fetal bovine serum (Hyclone, Logan, UT, USA) and 1% (v/v) antibiotics (penicillin 10 000 U/ml and streptomycin 10 000 μg/ml; Solarbio, Beijing, China) under a humidified atmosphere with 5% CO2 at 37°C.

Scaffold preparation and cell seeding. Collagen type I isolated from calf skin was prepared as described in previous studies.59,60 Collagen type I was dissolved in a CH3COOH solution to a final concentration of 10 mg/ml. Then, 0.5 M NaOH was used to neutralize the solution. The cells were detached with 0.25% trypsin/EDTA and centrifuged. BMSCs were suspended in the neutralized collagen solution at a final density of 1 × 10^7 cells/ml. Finally, the cell-matrix constructs were incubated at 37°C for 10 min to allow gelation.

Cytotoxicity assay. The cytotoxic effect of NGF on BMSCs was assessed using an MTT (Gibco) assay. BMSCs were seeded at a density of 1.56 × 10^4 cells/cm2 in a 96-well plate and cultured with NGF ranging from 1 to 9.5 μg/ml for 24 h. MTT (5 mg/ml) was added to each well and incubated at 37°C for 4 h. Dimethylsulfoxide (Sigma) was used to dissolve the formazan crystals, and the absorbance was read on a microplate reader (Thermo Fisher Scientific, Waltham, MA, USA) at 570 nm. All experiments were performed in sextuplicate. Based on the MTT analysis, concentrations of 1.5, 3 and 6 μg/ml were chosen for further investigation.
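A common way to reduce such MTT readings is to express each NGF concentration as percent viability relative to the untreated control after background subtraction; the text does not spell out this formula, so the following sketch is an illustrative assumption, and all absorbance values in it are hypothetical.

# Minimal sketch (assumption): reducing MTT absorbance readings (570 nm) to
# percent viability relative to the untreated control. The normalisation
# formula and all absorbance values are illustrative, not taken from the study.
from statistics import mean

blank = 0.06                                      # medium-only background A570
control = [0.82, 0.85, 0.80, 0.84, 0.83, 0.81]    # untreated BMSCs (sextuplicate)
treated = {                                       # NGF (ug/ml) -> replicate A570 values
    1.5: [0.84, 0.86, 0.83, 0.85, 0.82, 0.84],
    3.0: [0.88, 0.87, 0.90, 0.86, 0.89, 0.88],
    6.0: [0.85, 0.83, 0.86, 0.84, 0.85, 0.86],
    9.5: [0.74, 0.72, 0.75, 0.73, 0.71, 0.74],
}

control_signal = mean(control) - blank
for conc, values in treated.items():
    viability = 100.0 * (mean(values) - blank) / control_signal
    print(f"{conc:>4} ug/ml NGF: {viability:5.1f}% of control")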
Fluorescence microscopy: viability and F-actin staining. A live/dead cell viability assay kit (Invitrogen Life Technologies, Waltham, MA, USA) was used to evaluate the viability of BMSCs in response to NGF treatment for 7, 14 and 21 days. Cells and cell-matrix constructs were harvested and quickly rinsed with PBS, followed by incubation with medium containing calcein-AM and propidium iodide for 5 min in the dark. Images were captured using a laser scanning confocal microscope (Nikon A1, Tokyo, Japan). Live/dead cell viability for the 3D cultured cells was calculated from 2× images using ImageJ software (NIH, Bethesda, MD, USA). To observe the filamentous actin (F-actin) organization and distribution in the hydrogel, staining of the actin cytoskeleton was performed in all groups. The cells and BMSC-collagen constructs were washed twice with PBS after 7, 14 and 21 days and then fixed with 4% paraformaldehyde for 10 min. After washing with PBS, the samples were incubated with 0.1% Triton X-100 for 5 min to permeabilize the cells. Then, the constructs were incubated in rhodamine-phalloidin for 30 min, followed by Hoechst 33258 for 5 min. Images were acquired using a laser scanning confocal microscope (Nikon A1).

Biochemical assay. After culture for 7, 14 and 21 days, the cells and BMSC-collagen constructs were digested with 60 μg/ml proteinase K (Sigma) for 16 h at 60°C. To determine the DNA content, the cell lysates were incubated with Hoechst 33258 (Invitrogen, Life Technologies, USA) solution for 5 min. The fluorescence intensity was determined with a spectrofluorometer (Thermo Fisher Scientific, USA) at 460 nm using calf thymus DNA as a standard. To determine the glycosaminoglycan (GAG) content, a colorimetric assay using 1,9-dimethylmethylene blue (DMMB; Sigma) dye was performed. The proteinase K lysates were mixed with the DMMB reagent, and the absorbance was measured at 525 nm using a FlexStation III (Molecular Devices, Sunnyvale, CA, USA). The GAG content was quantified using a standard curve of chondroitin sulfate (Sigma) and normalized to the total DNA content. All experiments were performed in sextuplicate.

Animal model for the cartilage defect repair studies. In total, 90 New Zealand white rabbits were used for the cartilage repair study. A cartilage-only defect (4 mm diameter) was created in the middle of each patellar groove of the rabbits after general anesthesia. Then, injectable collagen loaded with allogenic BMSCs cultured for 14 days in chondrogenic medium without chondrogenic supplements (BC group, n = 30), with 10 ng/ml TGF-β1 (BCT group, n = 30) or with 3 μg/ml NGF (BCN group, n = 30) was injected into the defects. After 4, 8 and 12 weeks of repair, the animals were killed and the repaired cartilage samples were harvested for analysis.

Biomechanical test. The compressive strength of the engineered cartilage was analyzed using a compression strength tester (model HY-0230; Shanghai Hengyi Instruments Co., Ltd, Shanghai, China). The repaired articular cartilage was fixed to the apparatus using a metal pin that attached the graft to the tensioner system of the testing machine. Biomechanical loading was assessed after the relevant parameters were set. The crosshead speed was approximately 0.06 mm/min. The ratio of equilibrium force to cross-sectional area was divided by the applied strain to calculate the equilibrium modulus (in MPa).
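The modulus calculation just described reduces to E = (F/A)/ε, where F is the equilibrium force, A the cross-sectional area of the repaired plug and ε the applied strain. The numbers in the following sketch are hypothetical and serve only to illustrate the arithmetic for a 4 mm diameter defect.

# Minimal sketch of the equilibrium modulus calculation described above:
# E = (equilibrium force / cross-sectional area) / applied strain.
# All input values are hypothetical and only illustrate the arithmetic.
import math

def equilibrium_modulus(force_N, diameter_mm, strain):
    """Return the equilibrium compressive modulus in MPa."""
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2   # cross-sectional area of the plug
    stress_MPa = force_N / area_mm2                 # N/mm^2 is numerically equal to MPa
    return stress_MPa / strain

# Hypothetical example: 1.2 N equilibrium force on a 4 mm diameter sample at 10% strain.
E = equilibrium_modulus(force_N=1.2, diameter_mm=4.0, strain=0.10)
print(f"Equilibrium modulus: {E:.2f} MPa")          # ~0.95 MPa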
Histological examination. After 7, 14 and 21 days, monolayer cultured cells were fixed with 4% (v/v) paraformaldehyde for 30 min. The 3D cell-gel composites were fixed for 48 h and then embedded in paraffin and cut into 5 μm sections. For the in vivo study, after gross inspection, the articular samples were fixed in 4% paraformaldehyde, decalcified, embedded in paraffin and cut into 5 μm sections. The cells and sections were stained with hematoxylin-eosin (HE; JianCheng Biotech, Nanning, China) for histological evaluation of cell morphology. Safranin O staining was performed to detect GAG accumulation in the 3D constructs and repaired cartilage. Masson's trichrome staining was used to examine the extent of collagen deposition and fibrosis in the repaired cartilage. An inverted phase contrast microscope (Nikon A1) was used for the histomorphometric and histological observations. The repaired articular cartilage samples were graded using the scale described by Wakitani62 by three independent observers (LZ, LD and JT) who were blinded to the conditions, to comprehensively evaluate the regeneration of tissue in the defects.

Immunohistochemical examination. The secretion of collagen types I and II was detected with an immunohistochemical staining kit (Bioss, Beijing, China). To visualize the proteins, the cells or sections were fixed in 4% (w/v) paraformaldehyde and treated with Triton X-100. The cells or sections were incubated with 3% H2O2 for 10 min at room temperature to quench endogenous peroxidase activity. Then, the samples were blocked with normal goat serum for 10 min at room temperature. At a 1:200 dilution, mouse anti-rabbit collagen type I (COL1A1; Acris OriGene Technologies, Inc., Rockville, MD, USA, TA342814) and collagen type II (COL2A1; Acris Antibodies GmbH, AF5710) antibodies were added to the cells or sections overnight. The cells or sections were then incubated with the secondary antibody after washing with PBS. Subsequently, antibody binding was visualized with a 3,3'-diaminobenzidine tetrahydrochloride (DAB) kit (Boster, Wuhan, China) before brief counterstaining with hematoxylin. Finally, the cells or sections were dehydrated through a graded series, mounted with neutral gum, and observed and photographed with an inverted phase contrast microscope (Olympus Co., Tokyo, Japan).

RNA extraction and qRT-PCR analysis. Total RNA from the cells, constructs and cartilage samples was extracted with a Total RNA Isolation kit (Invitrogen) according to the manufacturer's instructions. Real-time quantitative polymerase chain reaction (qRT-PCR) was used to analyze the expression levels of the aggrecan (ACAN), SRY-related high mobility group-box gene 9 (SOX9), alpha-1 type II collagen (COL2A1), alpha-1 type I collagen (COL1A1), runt-related transcription factor 2 (RUNX2), enolase 2 (ENO2), glial cell-derived neurotrophic factor (GDNF), brain-derived neurotrophic factor (BDNF) and ciliary neurotrophic factor (CNTF) genes. The primer sequences and GenBank accession numbers used for qRT-PCR are summarized in Table 1. RNA was reverse transcribed into cDNA using a reverse transcription kit (Fermentas, Hanover, MD, USA). Amplification was performed on a real-time detection system (RealPlex 4, Eppendorf Corporation, Hamburg, Germany) with FastStart Universal SYBR Green Master Mix (Roche, Basel, Switzerland) at 95°C for 10 min, followed by cycles of 95°C for 15 s and 60°C for 1 min. The dissociation curve for each primer pair was analyzed to confirm primer specificity, and GAPDH was used as an internal control. The expression levels of the target RNAs were calculated from the threshold cycle (Ct) as R = 2^(-ΔΔCt).
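To make the relative-quantification step concrete, the sketch below works through the 2^(-ΔΔCt) arithmetic with GAPDH as the reference gene, as described above; the Ct values and the choice of the BC group as calibrator are hypothetical and only illustrate the calculation.

# Minimal sketch of the 2^(-ΔΔCt) relative expression calculation.
# All Ct values are hypothetical; GAPDH is the reference gene (as in the text)
# and the BC (control) group is assumed to serve as the calibrator.
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Return R = 2^(-ΔΔCt) for one target gene in one sample."""
    delta_ct_sample = ct_target - ct_ref                  # ΔCt of the treated sample
    delta_ct_calibrator = ct_target_cal - ct_ref_cal      # ΔCt of the calibrator
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values for COL2A1 in a BCN sample versus the BC calibrator
r = relative_expression(ct_target=22.1, ct_ref=16.0,          # BCN sample
                        ct_target_cal=25.3, ct_ref_cal=16.2)  # BC calibrator
print(f"Relative COL2A1 expression (BCN vs BC): {r:.2f}-fold")
# ΔΔCt = (22.1 - 16.0) - (25.3 - 16.2) = -3.0, so R = 2^3 = 8-fold upregulation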
7,551.6
2017-05-01T00:00:00.000
[ "Biology", "Medicine" ]
A CRITICAL ASSESSMENT OF JOHN MILBANK'S CHRISTOLOGY

John Milbank is well known for attempting to develop a participatory theology. This article specifically assesses his Christology. The first section provides a synthetic explication of his Christology by focussing on his notions of participation, paradox, poesis, incarnation, the cross, and ecclesiology. The second section provides a critical assessment. The central argument is that Milbank's Christology is inadequate in a participatory sense, because it lacks particularity and personal relationality. This inadequacy is probably due to the way in which he fuses Neo-Platonism and postmodern lingualism in order to construct his ontology. In order to maintain his non-violent and poetic ontological position, Milbank needs to revert to a general, "high" and impersonal Christology, and disregard "low" Christology. However, if one's ontological construction leads to a detached Christology, which does not adequately affirm the central notion of one's theology, serious doubts arise concerning the legitimacy of one's method.

INTRODUCTION

Milbank's theology, as is the case with the Radical Orthodoxy movement, is to a large degree informed by his resistance to the genealogy of modernism and secularism, with its Cartesian epistemology and its striving for a single system of truth based on universal reason. In order to overcome this typical Enlightenment type of rationality, Milbank (1991:225) proposes that Christian theology must make a half-turn back to pre-modernity by relying on sources that are not influenced by modernism's strict separation of immanence and transcendence. The central framework of John Milbank's theology is the notion of "participation", as developed by Plato and reworked by the Christian tradition, specifically Augustine, Christian Neo-Platonism, Thomas of Aquino and the nouvelle théologie of Henri de Lubac and Hans Urs von Balthasar. Using the concept of participation, Milbank attempts to overcome the modern dualisms between nature and grace, reason and revelation by proposing a theology that defies the notion that the immanent can function independently of God. Participatory and incarnational theology entails that faith and reason are included within the framework of "participation in the mind of God". No aspect of the natural world or of human experience can, therefore, be viewed in isolation from God or theology (Milbank et al. 1999:4; cf. Smith 2005:17).

This article investigates the Christology of John Milbank. The question arises as to whether Milbank's ecclesial Christology succeeds in providing a sufficiently participatory Christology (as distinct from a participatory ontology). In essence, this question pertains to the coherency of Milbank's theology. Is he able to apply his participationist model consistently throughout his theology? If not, why not? The underlying premise of this article is that a sufficient participatory theology needs both a participatory creational ontology and a participatory Christology. Without a participatory Christology, a participatory ontology will lack specificity. Since John Milbank's Christology is not well known among reformed South African theologians, the first section of this article will explain Milbank's Christology. In order to do so, this article will focus on key concepts in his theology that relate to Christ, such as methexis, poesis, incarnation, the cross, atonement, forgiveness and the relationship between Christ and the ecclesia.
The second section of this article will pose some critical questions and assess whether Milbank succeeds in providing an adequate participatory Christology.

JOHN MILBANK'S CHRISTOLOGY

Milbank's Christology should be understood against the backdrop of his reading of De Lubac's Surnaturel (1946), the teaching of Aquinas and the sophiology of Bulgakov. He also shares Barth's Christological starting point but, as will be illustrated later, rethinks it in a non-fideistic manner. He shares his doctrine of atonement significantly with Rahner, while he borrows his interest in the Christ event as "narrative" from Balthasar. It is, however, not the purpose of this article to compare or contrast his Christology with that of other thinkers, but rather to examine the participatory character of his Christology.

Milbank has "invalidated" modernism's approach to the world as "consisting of fixed essences". Instead, reality consists of "temporary relational frameworks" that are fluid, constantly shifting and always being "re-distributed with greater and greater freedom" (Milbank 1991a:225; Hedges 2010:804). Christian theology could, according to Milbank (1991a:226), "with equal validity imagine temporal processes as reflecting eternity; as the possibility of a historical progress into God". Reality is then perceived as "characterised by flux and ceaseless alterations, so that we cannot know this reality but only join in its occurrence". This stands in contrast to Plato who, for the sake of upholding the possibility of knowledge, postulated the essential being of things, their static eidos, which is not subject to change.

Milbank's Christology is informed by his understanding of creation: Through its belief in creation from nothing Christianity admits temporality, the priority of becoming and unexpected emergence. Creation is a gift of God, but also a developing order where created difference proceeds from the continuous emanation of divine difference. Existing harmonies, existing extensions of time and space, constantly give rise to new intentions, to movements of the Spirit, to further creative expression, new temporal unravelings of creation ex nihilo, in which human beings must consciously participate (Milbank 1991a:236).

For Milbank, the world is about the reconciliation of being with itself. He resists the modern idiom of transcendence, inaugurated by Duns Scotus, which separates the finite and the infinite by holding that God and creation exist in being in the same fashion (Milbank 2005:27). Duns Scotus did not relate God to creation through a hierarchical process of emanation, but emphasised God's free and sovereign will. Milbank's criticism is that such a line of thought does not take into account the need for mediation. The participation of the finite in the infinite is, according to Milbank (2009:110), best described by the mediating concept of paradox, which falls within the domain of the metaxological (initially, he used the term analogy). Milbank (2009:163-164) describes "paradox" as follows: Whereas dialectics is concerned with impossible contradiction that must be overcome, paradox is concerned with a coincidence of opposites that can be persisted with. The logic of paradox can also be described as the constitutively relational or metaxological, because it is about that which is 'shared' and lies 'between' identity and difference, univocality and equivocality.
Things thus, according to Milbank (2009:164), correspond in terms of their "difference" and differ with respect to their "likeness". There is a continuous interplay between the "same" and the "different" that causes both creative tension and peaceful forms of co-existence (Milbank 2009:167). According to Milbank (2009:170-171), [t]he infinite is related paradoxically to the finite in the sense that infinite and finite both coincide and do not coincide; they are distant from one another yet united with one another. When we see things as identical with their opposites, when we see things as like each other in terms of their differences, we are sensing the involvement of the finite with the infinite.

God's involvement with the immanent consists therein that He is the "giving source" and "inner reality" of everything. Conversely, humanity's relationship with God is paradoxical, because "we are identical with God, only because God is our deepest identity" (Milbank 2009:209). Since everything finds its origin in God, it also finds its ontological modes in the being of God. Creation is thus a gift of God that reminds us of the Giver, who is God. Milbank employs the notion of gift as a transcendental category in relation to all the topoi of theology. Creation and grace are gifts, Incarnation is the supreme gift, while the Fall, evil and violence are the refusal of gift (Milbank 2003:ix). He asserts that the only true reality can be a shared reality: the giving of a gift by a Giver. "Giving" is just as transcendental a term as "being", and is inseparable from exchange (Milbank 1995:119, 121). With "exchange", Milbank means that a recipient always has to respond to a gift in gratitude, but not in a similar fashion by giving the same gift back, because this would be an insult. There must be a "non-identical repetition" between gift and "counter-gift" (Milbank 1995:125). A real gift, therefore, must "express something of the giver" and yet leave the recipient a certain "mimetic freedom" to respond gratefully. Gift exchange thus involves a "free gift" that you must give, but also "an obligation that is not fulfilled unless you fulfil it in an entirely free way" (Milbank 2011:27). The gift and the Giver can never be identified in an absolute way.

Since the esse of God gives existence to everything and is the existence of everything, the universe is ultimately grounded in God and essentially theocentric. Every sphere of life is innately theological, because all knowledge of creatures is simultaneously knowledge of God. All systems that attempt to function in a non-theological way are at heart "nihilistic" (Milbank 2006:278-279). Milbank's theocentric grounding of reality necessarily entails that he dispenses with the notion that theology must start from "below". He rather maintains that conceptions of the "below", that is, notions of human subjectivity and relationship, are constituted within the narrative that simultaneously postulates the "above" (Milbank 1991a:226). The ontological premise that everything is grounded in God necessarily entails that God's intertrinitarian nature plays an important role in Milbank's theological reflection. The doctrine of the Trinity is, according to Milbank, a statement of faith that God is in Himself relationship, and because of this the Trinitarian God exchanges love infinitely (Milbank 1991a:234).
Because God in Himself is relationship, God in Himself is also community, and then a community in process, infinitely realized, beyond any conceivable opposition between perfect act and perfect potential (Milbank 1991a:234). Yet God in a manner exists outside God, because He goes outside Himself and returns to Himself: hence the Father, Son and Holy Spirit (Milbank 2009:109). Through this outgoing and returning, God "births creation and all finitude". The entire cosmos, which includes time and space, is thus part of the unfolding loving relationship of the Trinity (cf. Milbank 2009:145). Milbank perceives the world as a gift exchange between the Father and the Son, a reconciliation between the finite and infinite and, therefore, as part of the inner life of God. The Spirit continuously seeks communion and expresses the exchange between the Father and the Son, while Jesus as historical figure is the incarnated expression of the ontological reconciliation between finite and infinite being (cf. Wisse 2007:352; Milbank 2009:189). The economic Trinitarian working of Paternal voice - Christ - Ecclesia discloses to us how humanity functions in and through time (Milbank 1991a:236). Human beings are, according to Milbank (1991a:236), images of the Trinity who participate in the Trinity, while the Spirit constantly gives rise to new movements. Since God is "community in process", knowledge is a "process of learning which is true if divinely illumined". Outside of this process no knowledge of an object is possible (Milbank 1991a:234). If our desires correspond to the Father and Spirit, the Divine Logos illuminates our mind (Milbank 1991a:234). Our desire is moved by "infinite lack, the pull of the goal" (Milbank 1991a:235). Christianity thus pursues from the outset a universalism which is open to difference, new insights, additions and progressions towards God, but makes these differential additions a harmony in the body of Christ (Milbank 1991a:227). Christ, by being first, defines the way to God and also determines the nature of the new ecclesial society. Humanity progresses into deification, which entails "a reception of the fullness of Being and a receiving of God" (Milbank 1991a:230).

Poesis

But how does humanity receive God? To answer this, Milbank employs the concept of poesis. If we understand creation to be ex nihilo and through God's word (Logos), language ought to be regarded as primordial and reality as fundamentally linguistic in nature. Language, which for Milbank includes the entire range of significant human cultural productions, is not representative but constitutive of reality, which means that human existence is poetic in nature. The notion of participation can, according to Milbank, also be extended to language, history and culture. Not only do being and knowledge participate in a "God who is and who comprehends"; human making also participates in a God "who is infinite poetic utterance: the second person of the Trinity" (Milbank 2003:ix). Poetic existence, according to Milbank (1997:123), can be described as an activity, mode of knowledge and ethical behaviour which is concerned with aesthetics and the beautiful, that which fits and harmonizes. Milbank develops his account of poesis under three headings, namely poetic activity, understanding, and praxis. Milbank regards truth as being mediated by beauty.
The human being's poetic activity is driven by its expressive nature and its desire to appropriate its environment as a system of value, which in the end creates a world of meaning (Milbank 1997:124). The desire for beauty creates a longing and seeking that cannot be perfectly grasped or possessed (Milbank 1996:42). The product that the human being creates is characterised by "self-exceeding", because we come to depend upon a world of meanings that we have constituted ourselves. It possesses a certain virtus of its own, such that it cannot be replaced by the subject who is its author (Milbank 1996:42). The self-exceeding nature of poetic activity underlies poetic understanding. Human beings primarily understand through images and metaphors. These linguistic features make abstract meaning possible by creating concepts of representation that are able to make something present through something else (Milbank 1997:127). However, this raises the question as to whether poetic meaning can be stable. Milbank's response is that we need concrete universals to create stable meaning. A concrete universal is that which harmonises and brings aesthetic unity. This can only be created through the "mediation of the sensus communis" (Milbank 1997:128). Since poesis is concerned with the discernment of forms that are suitable and fitting, and since this can only be done through the means of representation, it is not possible to separate poesis from praxis (Milbank 1997:129). Ethical activity is derived from our poetic representations. For instance, in order to understand heroism, we need stories and images of heroes. These poetic representations help us establish adequate human goals that lead to deeper possibilities of human behaviour (Milbank 1997:129).

The poetic nature of human existence leads into humanity's poetic encounter with God. God, of his own free will, finds space to confront us somewhere among our cultural products, which are not truly in our control (Milbank 1997:74). Because God is truly transcendent, He never confronts the creature through the I-Thou relationship, but always addresses the creature as the expressive self. Our poetic quest for telos becomes a quest for God, because the divine reality is the telos of human reality. In fact, God creates the human being as a poetic being in order to connect to human beings. Revelation is thus, according to Milbank, not "an imposition that happens outside normal processes of history, but it is the surplus within poesis itself". Through poesis divine and human creations interconnect, without God interposing in a way that violates natural human intent (Milbank 1997:130). Rather, the Divine overtakes and completes human creations, so that revelation is something positive in addition to reason. The human being's aesthetic desire makes God's glory attractive for humanity. That is the reason why, in history, we have noted a quest for a mediator (cf. Milbank 1997:131). The figura of the mediator corresponds to the poetic notion of a concrete universal. Christ is an adequate representation of God to humanity and an adequate representative of humanity before God. He is the concrete universal that creates stability in meaning and aesthetic harmony. In Jesus we recognise "the divine overtaking and fulfilling of all human purposes". From a divine perspective, Jesus is the origin of all meaning but, from a human perspective, he is the "inheritor of all already constituted meanings".
Jesus thus metaphorically represents all human intent, the word of God, as well as the fulfilment of Creation (Milbank 1997:132-133, 139).

Incarnation and cross

The paradoxical relationship between the transcendent and the immanent is expressed in the Incarnation (Milbank 1997:132-133, 139). God became "man in order to incorporate us into the Trinity, opening up our realm into the beyond of the infinite life of God". He thus saves us from "materialism and pure immanence" (Milbank 1999b:31). Through the incarnation, the Father "hands over the created realm to filial rule until the eschaton" (Milbank 2010:157). Christ's incarnation is not about bringing a sacrifice that can offer creation back to God by representing all of humankind before God; it concerns deification. It is about the foundation of the Church, which is a community of charity and forgiveness. The doctrine of the incarnation identifies Jesus with the divine Logos and establishes the practical relation of the church to Jesus (cf. Milbank 1991a:233). According to Bauerschmidt (1999:424), Milbank attempts to avoid the "extrinsicism that often attends articulations of incarnation" by shifting priority from Jesus to the church. The Gospels are primarily concerned with a new form of life within a "web of signification", not with Jesus as subject (Bauerschmidt 1999:424). It is not belief in the fact of the incarnation that transforms our lives, but the translation of the incarnation into a "mode of being" (Milbank 1991b:315). Within this new mode of being, Jesus is "the space in which all true identities are located; the source, goal and content of all our lives" (Milbank 1991b:325). To identify Jesus, the gospels resort to metaphors that articulate themselves in spatial and vertical terms and abandon the temporal and horizontal. Incarnation, therefore, "cannot be by the absorbing of divinity into humanity, but only by the assumption of humanity into divinity" (Milbank 1991b:316). Christ's human existence is entirely "derived from the divine person of the Logos by which He is enhypostasized" (Milbank 2010:210). Though Jesus's affinity with God was so strong as to "constitute identity", this identity does not consist in substantial nature, but in an identity of character, hypostasis or persona (Milbank 2003:203). According to Milbank, Jesus was a full, integral human being only by virtue of the fact that he was a divine person, and the goodness of divinity passed wholly by grace to his human nature. Even though every specific characteristic of Jesus is entirely human and temporal, his personhood was divine rather than human (Milbank 2010:210).

Since the Gospels are ultimately concerned with a new mode of being, the kingdom of God, which is the universal community of the church, comes before the cross (Milbank 1991b:314). The kingdom is really offered by Christ to humanity, and the cross is the result of the rejection of this offer. This rejection suggests the character of sin. To sin is to "refuse the love of God and to render oneself incapable of recognizing God" (Milbank 1991a:231). It is the ultimate "distorted construct" (Milbank 1997:139). In a world dominated by evil and violence, a self-offering to God necessarily involves suffering. Since, according to the logic of creatio ex nihilo, to be is entirely to receive, a constant giving up of oneself is the only way to get oneself back and to keep participating in Being (Milbank 1996:52).
This is the reason why suffering is "at the heart of Christ's perfect self-offering to God" (Milbank 1991a:231). Milbank (1991a:231) states it thus: Only God Himself can fully suffer evil - not in eternity, which is beyond suffering - but in the human creation. Hence the necessity for the Deus Homo. Through his suffering, the "God-man who by His innocence fully sees and so fully suffers, exposes the illusion of self-possession" (Milbank 1996:53). Christ on the cross suffered death, human malice, and the misuse of the law for the sake of the welfare of the political community. Through his suffering, Christ redefines beauty as the "incorporation and transfiguration of the ugly" (Milbank 1996:139). His suffering may then be assumed by us as "the only mode of access to his innocence" (Milbank 1996:53). The cross is thus not some kind of atonement that effects a change in God's attitude towards us. Such a form of atonement is meaningless, because it can only remain extrinsic to us (Milbank 1991a:231). Atonement rather means that the flux is permitted to flow again, that the ever different articulation of our responses continues. Jesus's assuming of the burden of sin is an atonement because Jesus's response is a non-violent one: He refuses the violence which would actively distort his own work. Through his crucified body He now makes to us a totally non-violent, unconstraining appeal. Christ did not die on the cross merely instead of us; rather, having uniquely suffered the death of the innocent, he calls on all human beings to partake of this death, and in a measure to repeat it (Milbank 2010:45).

Milbank rejects penal substitution theory in no uncertain terms. The danger of a cultic understanding of Christ's death is that it suggests that Christ's death is "a kind of eternal transaction between God and humanity that is a mere extrinsic fact only to be believed in" (Milbank 2003:62-64). However, a true gift can never be a transaction; it is by nature "reciprocal and a-symmetric". Christ's abandonment "offers no compensation to God, but raises us up into the eternal gift exchange of the Trinity" (Milbank 2001:552). God has no need to be appeased in order to become reconciled to us; He always and eternally was reconciled in himself. He has no need to forgive, since he goes on giving (Milbank 2003:62-64). The atoning nature of Christ's work rather lies therein that sin locks one into finitude, and so further into the structures of death and sinfulness. This can be overcome only by the entry of the infinite into the finite through the God-man and the paradoxical identification of the infinite with the finite (Milbank 2010:212). Redemption, therefore, is not about God forgiving us, since our sins cannot harm God, but rather about his giving us the gift of the capacity for forgiveness (Milbank 2003:62). For Milbank, God's forgiveness is not an extrinsic forensic declaration that individuals are no longer guilty. It is rather an unlimited positive circulation that is allowed to continue (cf. Milbank 2003:48, 64; Boersma 2005:190). Reconciliation is, moreover, not an event between us and God, but is rather mediated by God to us, making it effective for us, and so ensuring that we too are reconciled. Instead of a cultic understanding of Jesus's death, we should regard Jesus's death as almost "inevitable", because his rejection of violence negated the basis of all human, political and social mechanisms that hitherto existed (cf. Milbank 2003:100).
Christ is a substitute in the sense that the divine Son, through His assumed human nature, makes the return offering of true worship to the Father - a return that humanity should make but cannot make because of the Fall. An innocent other must first show the way forward in true worship before it thenceforward becomes imitable (cf. Milbank 2003:46). Christ is thus a "sign and perfect metaphor of forgiveness", whereas atonement is "nothing more than forgiveness, because forgiveness is in itself atonement" (cf. Milbank 1991b:325-326, 328). It follows that atonement cannot be "once and for all", but must be continuously renewed through the practice of forgiveness (Milbank 1991b:327). Christ's example must somewhere and somehow be followed, and this "mimesis must clearly involve further acts of mutual atoning which realizes the hypostatic presence of the Holy Spirit" (Milbank 2003:42). Imitation of Christ is never a straightforward moralism. It is only imitation in the sense that the church is caught up in the eternal process of participatory exchange, in which believers live by the charismatic gifts that are theirs through the Spirit (cf. Milbank 2003:153).

Christ and ecclesia

For Milbank, grace is about deification, the "gratuitous raising of humanity above itself to God, not a judicial corrective for sin" (cf. Milbank 2005:34, 80). The gift of Christ is the gift of the Spirit to the church, a gift that is a divine indwelling power in us to begin to realize the kingdom of love upon earth (Milbank 2005:38). The central aspect of salvation, therefore, is the creation of a perfect universal community, which is "inaugurated by the Incarnation and hypostatic descent of the Spirit on earth" (cf. Milbank 1991a:232; Milbank 2003:105). It is only the primacy of ecclesiology that can remove us from extrinsicism, because the incarnation of Christ can only be perceived in the existence of the church, which "both transmits the signs of atonement and repeats atoning practices" (Milbank 1991b:327). This implies that Christological doctrine is a deduction from ecclesiology (Milbank 1991b:329). The Gospels cannot be read as the story of Jesus, but as the story of the (re)foundation of a new city, a new kind of human community. Jesus figures in this story simply as the founder, the beginning, the first of many (Milbank 1991b:317). Genuine ecclesiology, for Milbank, comprises a philosophy of history that recounts the Church's actual concrete intervention in the social order. Christianity essentially involves the claim that the "interruption" of history by Christ and his bride, the Church, is "the most fundamental of events, interpreting all other events" (Milbank 2006:390). Jesus came to expose the secret of social violence hidden since the foundation of the world and to preach the kingdom as the possibility of a life refusing mimetic rivalry and, in consequence, violence (Milbank 2006:396). Milbank regards the church as the altera civitas on pilgrimage through this temporary world; its goal is peace and its means are non-coercive (cf. Milbank 2006:382). Because the Church is already, by virtue of its institution, a reading of other human societies, it becomes possible to consider ecclesiology as also a sociology, and simultaneously to think of theology as social science (Milbank 2006:382, 383). All political theory is relocated by Christianity as thought about the church, which is a new community that practices a new ethos characterised by non-violence, charity, peace, reconciliation, and forgiveness (cf.
Milbank 2006:410). By seeking to recover the reality of an original peaceful creation beneath the negative distortion of dominium, the church is able to realize the political objectives of justice and virtue that the polis could not arrive at (Milbank 2006:414, 423). Human relations, therefore, need to be brought within the true asylum of the church. The new community is empowered by Christ to forgive, suffer and make continuing atonement (Milbank 1991b:317). It is in the church and through "the practise of forgiveness" that we achieve participation (Milbank 2003:x). Human beings receive God by giving practical recognition to Christ as the fulfilment of human intent, by regarding their entire lives as nothing but an interpretation of Christ as presented to us in the Scriptures and in the Sacraments (Milbank 1997:139). They specifically receive God by partaking in the death of Christ and repeating the event of the cross through their own suffering, because "self-offering to God entails suffering that resumes contact with a wholly positive order of mutual ecstatic giving" (Milbank 2010:45). Christ has abolished the sacrifices of the earthly city, but he has instead inaugurated a new kind of efficacious sacrifice of praise, self-sharing and probable attendant suffering, which unites us with Him in the heavenly city and at the same time totally obliterates all the contours, inside and outside, which constitute human power (Milbank 1991b:318). The event of transformation needs to be non-identically repeated and, therefore, made to happen (Milbank 1991b:319). As we offer ourselves in and with Christ, we also participate in the "infinite process of gift exchange" (Milbank 2003:102). The theme of resurrection and the Church as the body of Christ restore, according to Milbank (1991b:319), concreteness to the notion of incarnation. We can only know God through the community of the body of Christ. The community is what God is like, but God is also unlike the community. It is this inexpressible reality to which the community continues to try to respond. If God can only be given content through community, "then speaking of God is not just a matter of words, but also of images and bodily actions" (Milbank 1991a:228-229). The Eucharist allows a direct participation in Christ and works in us the sense that we have now come to share in the divine life as God's children. It is in the Eucharist that we celebrate a sacrificial gift exchange in which God offers himself to us in a dying whose loss is overtaken by a giving (Milbank 1996:54). Our entire perception is informed by the Holy Spirit, a sensus communis inaugurated in us by Christ as an adequate sense of metaphorical judgement, which is the necessary transcendental condition for the adequate concrete universal (Milbank 1996:140). Through the Spirit, Christ is conceived again in us - though in a linguistic fashion. The Spirit stands in the interpretative gap between the Father and the Son, and He acts as the guarantor of the Church's 'poetic' imitation of Christ as exemplar (cf. Bauerschmidt 1999:419). It is in this sense of being conformed to the image of Christ that we genuinely participate in Christ, and not as "a kind of sub-personal, quasi-material inclusion" (Milbank 1996:141). The ecclesia is the infinite resurrected body of Christ, composed of the faithful who are living offerings to God and who are lured by God, who is the ultimate goal of all human life (Milbank 2010:46). The resurrection is "no proof of divinity, nor a kind of vindication for Jesus's mission".
It is rather "the memory of community continuing beyond death" (Milbank 1991b:232). Milbank (2010:43) claims that Paul's theology is informed by such a vision of the resurrected man. We have in Christ already proleptically undergone death (cf. Milbank 2001:552). Being already dead, we can no longer sin or be subject to the law, but we belong to Christ through His body, the church, to bear fruit for God. However, Christ's full "incarnate appearance lies always ahead of us" (Milbank 1991b:319). The longing for a universal resurrection is "a political act for it is the ultimate refusal of all denials of community" (Milbank 1991b:232). Without resurrection there "can never be any final reconciliation" (Milbank 1999:38). In the third age of the "Johannine Church" all will become "sons Vorster A critical assessment of John Milbank's Christology 290 of God" and the world will be restored "through the Spirit to the rule of the Father" (Milbank 2010:157). The world thus anticipates a final historical event that will be also the final disclosure of the meta-historical secrets of eternal outgoings from God (Milbank 2010:59). A CRITICAL ASSESSMENT There is much to be admired in Milbank's theology. His project to resist secularism and autonomous reasoning through the notion of "participation" is a valid one. We need a participatory and incarnational theology that mediates between Creator and creation in order to overcome the dualisms of secularism. Milbank rightly affirms that there is no autonomous reality, no "reserve of created territory" and that the immanent can only be sustained if it participates in the transcendent. The question, however, is whether the Christology that Milbank proposes really provides us with an adequate participatory Christology? Some methodological concerns need to be raised first. Milbank's notion that the Western Neo-Platonic and semi-Aristotelian tradition, that is best exemplified in the passage from Augustine to Thomas of Aquino, represents the authentic and pure Christian tradition, but that theology went wrong with Duns Scotus; and Suares seems overly simplistic and symptomatic of a highly problematic interpretation of the historical development of Christian theology. Milbank often refers to the Protestant tradition as theologies of mere imputation that is guilty of extrincism, Biblicism and other distortions. Yet the question is: Why does Milbank regard the Christian Platonic and semi-Aristotelian tradition as the authentic expression of the Christian faith? Why not the biblical tradition from which he diverges radically in his Christology? Is Platonic ontology some kind of philosophical presupposition that is required to make the Christian faith authentic? A second methodological concern is the eclectic character of Milbank's Christology. He fuses pre-modern Neo-Platonic ontology, which is concerned with the harmonic order of things, with a postmodern lingualism that emphasises the transient flux of reality. It is, however, debatable whether a method that picks the bits it finds acceptable from historically opposing philosophies and then represent it as true Christianity is theoretically sound and theologically credible. Milbank's conflation of different, often opposing, sources to provide a coherent scheme of thought leads to historical inaccuracies. 
Not surprisingly, historical theologians frequently accuse Milbank of misinterpreting his sources, specifically Augustine, Thomas of Aquino, Henri de Lubac and Duns Scotus, to serve his own understanding of authentic Christianity (cf. Wisse 2007; Hankey & Hedley 2005). We might add the name of Paul, since Milbank's interpretation of the views of Paul in Paul's New Moment seems very "Milbankian".

A third methodological concern is that Milbank's Christological premise is characterised by ontological speculation on the interdivine structural being of God. God's essence is charity, which leads to limitless divine self-sharing. Creation is part of a gift exchange between the Father and the Son, and Christ's incarnation attempts to incorporate us into the Trinity. By describing God's essence as charity, Milbank risks subjecting God's nature to the charitable, at the expense of the other attributes of God, to such a degree that the charitable actually becomes the real god within the divine nature that makes God self-excessive (cf. Milbank 2005:41). This contradiction illustrates the problem with a theology that regards God's being as intelligible. As soon as one formulates concepts that attempt to explain the being of God, one runs into serious difficulties. Human beings simply do not have the ability to gain an adequate cognitive grasp of the Divine Being, although the Being of God can be postulated. The question is: Does Milbank not employ a priori methods of speculation about God's being in order to fit his theology? Is his a priori speculation not merely a set of human construals that work onwards and upwards transcendentally into God (cf. Olthuis 2005:286)? Would it not be better to reflect on God's acts, speech and communion with creation, rather than on his essence, which cannot be meticulously penetrated? The theologies of the main exponents of Protestantism, namely Luther, Calvin and Barth, differ from Milbank in that their notions of participation are not primarily derived from speculative ontological constructions, but in general from Christology. The reason for this is that the Reformers, as well as Barth, were sceptical of speculative philosophy. Although they did not deny God's essence, they believed that we must form our knowledge of God a posteriori from the revelation God gives us of Himself and His works. Calvin thus states: We know the most perfect way of seeking God, and the most suitable order, is not for us to attempt with bold curiosity to penetrate to the investigation of his essence, which we ought more to adore than meticulously to search out, but for us to contemplate him in his works whereby he renders Himself near and familiar to us, and in some manner communicates himself (Calvin ICR:1.5.9).

In my view, the main problem with Milbank's Christology is the lack of realism and particularity. We might ask whether Milbank's notion of reality as "linguistic" is not a reductionist concept that leads to abstraction at the expense of realism. His ontological approach indeed tends to collapse into a form of abstraction that takes on a life of its own. By imposing general truths upon the particular through his fusion of Christian Platonic ontology and a postmodern linguistic meta-narrative, he creates a closed and rather idealistic system of thinking that lacks particularity. Richardson (2003:278) rightly asks: But how can this 'idealist approach' be realist enough to speak of the 'particular'?
How can linguistic idealism, presumably the idea that reality is fundamentally linguistic, provide us with a God to whom we might pray, yet who exists outside the language we pray in?

This lack of particularity is especially evident in his Christology, where linguistic idealism and incarnational ontology obscure the particular identity of Jesus. Biblical notions such as crucifixion, suffering, substitution, atonement, representation, and resurrection are provided with a new content in which the "particular" makes way for idealistic theories and concepts that exemplify a new mode of being. For Milbank (1991b:328), the historical concreteness of Jesus is buried beneath an avalanche of metaphors and typological stories which themselves tend to spell out the mere formal grammar of the 'fact' of incarnation. The salvific significance of the cross is thus reduced to a hermeneutical and poetic form of liberation that has no particular historical-soteriological significance. Ultimately, the primary narrative of the historical person of Jesus is overtaken by a meta-narrative of the incarnate Logos. Bauerschmidt (1999:417) rightly notes that Milbank's desire to make the "speculative excess" of Trinitarian and Christological doctrine integral to discourse about Jesus seems to run the risk of losing its grounding in the stories of the man Jesus. The direction of Milbank's Christian metaphysics leads away from placing "undue emphasis on the specifics of Jesus' life in favour of stressing his relations within the concatenation of signs" (Bauerschmidt 1999:423). However, if attention to the historical Jesus is lost, Christology could degenerate into an ambiguous type of metaphysical discourse that deprives Jesus of any particular and specifiable content and remains in the realm of the speculative.

Milbank is so focused upon relinquishing all forms of extrinsic thinking that he dispenses with the whole notion of a covenantal I-Thou relationship between God and human beings (cf. Milbank 1997:74). It appears that Milbank regards the notion of a God who imposes his will from outside the human subject and creation as a "violent" concept. He equates imposition, even divine imposition, with violence. His counter-narrative, therefore, emphasises that God does not reveal himself from outside the human being, but in the intrinsic poetic activity of the human subject. Milbank seemingly regards the notions of autonomy, sovereignty and contractual relations as inherently "violent", because all of these concepts make use of distinctions between a subject and an object, an I and a thou, a self and an other. By setting these categories in conflict with each other, a process of violence and counter-violence is allowed to unfold. However, when we dispense with the extrinsic qualities of the human-divine relationship, we risk conflating human and divine nature. In Milbank's theology, the personal relationship between the human and the divine, which lies at the core of the biblical theme of the covenant, seems to make way for a form of panentheism. Although Milbank does not explicitly mention panentheism in his works, his writings certainly contain "latent and underlying panentheistic contours" (Mir 2012:526). His panentheism becomes particularly evident in his view of poesis. God is revealed not from without, but is embedded within culture, language, history and human making.
Mir (2012:539) notes: The relation between human and divine poesis is revealed in that history, culture and language are not alien to the divine but are the divine's actual revelatory unfolding. Although Milbank indeed develops a participatory Christology, it is a metaphysical participation that is inadequate in a "personal relational" sense. Reconciliation remains a detached metaphysical event between two principles, the finite and the infinite, in the being of God, whereas the personal nature of the reconciliation between God and human beings is non-existent. In fact, the historical Jesus is a superfluous concept in Milbank's theology, an alien addition to the being of God (cf. Wisse 2007:354). This detached approach is highlighted by Milbank's depiction of God as an impassible God who cannot suffer nor be offended or hurt by sin, and who thus does not need to forgive, because he is genuinely transcendent and not merely a higher transcendent reality in the same order as us (cf. Milbank 1997:422). The question is: Why then does Milbank describe God as a God of gift, charity and love? If God is not capable of suffering, why would He be capable of love or charity?

Because of his denial of the I-Thou relationship, Milbank's understanding of forgiveness and atonement departs radically from classical Christianity. For Milbank, there is really no divine forgiveness distinct from human forgiveness. He reduces sin to a mere anthropological concept that concerns interhuman relationships and does not affect the divine-human relationship in any "personal" sense. Boersma (2005:192) rightly notes that Milbank cannot accept the notion of God forgiving us the guilt or the debt of punishment, because his ontology of peace, which regards violence as a form of non-being, does not allow for punishment. Milbank associates punishment with violence and evil. Of course, when God does not punish, there is no need for Him to forgive either. However, is all violence necessarily evil? Can violence, as Boersma (2005:199) rightly asks, not also be redemptive? Often good violence is needed to counteract and root out bad violence. There is often a necessity for a violence that punishes, protects, liberates, and reconciles. Classical Christianity has traditionally affirmed the need for a kind of redemptive violence in its doctrine of atonement. Milbank's use of the terms "violence" and "power" is problematic, because it leads to a conceptual breakdown. He universalises these terms to such a degree that they eventually lose any distinctive content. In the case of atonement, it forces him to do away with any sacrificial understanding of atonement. Milbank views an I-Thou notion of the relationship between God and humanity as "violent".

Closely related to Milbank's concept of forgiveness is the impression that he lacks a sufficiently radical view of the nature of the Fall and sin. Since Milbank depersonalises the relationship between God and the human being, and views reconciliation as mediation between the finite and the infinite realms, the Fall should also be understood in impersonal terms as a loss of participation, rather than as the misdirection and total distortion of creaturely life. His inadequate understanding of the radical nature of sin is ultimately reflected in his rather utopian view of the church.
Finally, Milbank's attempt to shift the subject matter of the Gospels away from Jesus to the church, and his identification of the church with the reign of God, seems highly contentious, because he runs the risk of subsuming Christology into ecclesiology. In his thinking, God reigns in and through the church, so that the church, in fact, becomes a co-redeemer. Christ fully arrives at his divine personhood only in and through the repetition and substitution of the church. His divine personhood is still taking shape in the life of the church, which completes the atoning work of Christ by continuing to make atonement. However, if Christology is deduced from ecclesiology, as Milbank proposes, Christology is removed from the centre of gravity and replaced by the primacy of ecclesiology. What implications does this have for a truly participatory Christology? Does the institutional church not occupy the space that Christ ought to inhabit in the life of the believer? 3.1 Is Milbank's Christology adequate in a participatory sense? In the introduction, the question was raised as to whether Milbank's Christology is sufficiently participatory in nature. The central argument of this article is that Milbank's Christology is inadequate in a participatory sense, because it lacks particularity and personal relationality, two aspects that are essential for true participation. This is reflected in Milbank's denial of the possibility of an I-Thou relationship between God and human beings; his view of God as an impassible transcendental reality that cannot be offended and does not need to forgive; his negation of the importance of the historical identity of Jesus; his understanding of forgiveness as a human enterprise; his understanding of reconciliation as a metaphysical event between the infinite and the finite; the conceptual and idealistic content that he gives to atonement, substitution and representation; and the primacy he gives to ecclesiology vis-à-vis Christology. If Milbank's theology is built upon the notion of participation, why would his Christology lack an adequate participatory character? The answer is likely to be found in the manner in which he constructs his ontology. Milbank builds his ontology upon the fusion of a monistic Neo-Platonic view of reality and idealist postmodern linguistics and not, for instance, on the more relational concept of creation ordinances, as is the case in Neo-Calvinism. This fusion enables him to relate all ontological modes of being to God's being and to affirm the metaphysical participation of all things in divine being, but conversely, it does not allow him to speak of a personal I-Thou relationship between God and human beings, because an I-Thou relationship is not possible if created things emanate from God (Neo-Platonism) (cf. Milbank 2003:107), nor if language constitutes reality (postmodern lingualism). Consequently, he needs to revert to a general, "high" and impersonal Christology, and disregard "low" Christology, in order to maintain his ontological position. However, if one's ontological construction leads to a detached Christology that does not adequately affirm the central notion of one's theology, serious doubts arise as to the legitimacy of one's method. A truly participatory ontology and theology is only possible if Christology forms the centre of gravity. Ontology needs to be constructed from authentic Christological premises, not from classical, philosophical or postmodern premises.
However, Christology can only provide theological and ontological consistency if the "low" and "high" dimensions of Christology are combined. "Low Christology" prevents excessive and ambiguous speculation by safeguarding the particular and historical
10,540.6
2013-03-01T00:00:00.000
[ "Philosophy" ]
Optimization of Digital Image Processing Method to Improve Smoke Opacity Meter Accuracy — One of the parameters measured during exhaust emission testing of diesel engines is the level of smoke opacity: if the opacity is high, the emission quality is poor. The instrument for measuring smoke opacity is called a smoke opacity meter. The basic concept commonly used to measure smoke density relies on a light (optical) sensor. Development of a smoke opacity meter based on Digital Image Processing has begun, but the measurement results are not yet as good as those of the optical-sensor concept. This paper therefore describes how to implement the Digital Image Processing method for processing smoke opacity video data. I. INTRODUCTION Vehicles, as a means of transportation, are inspected periodically. Inspection begins before mass production, continues before release by the manufacturer, and extends throughout use by the customer, for the safety of the people who travel in these vehicles. Although beneficial to humans, vehicle engines also have negative impacts, which vehicle testing aims to minimize. One of the most prominent negative impacts is the pollution caused by vehicle exhaust emissions. In general, the vehicles that produce exhaust emissions are those with oil-fueled engines, of which two types are used: gasoline engines and diesel engines. The pollution or emissions produced by these two engine types differ in content, so their measuring instruments differ as well. In line with technological development, vehicle engines are designed to be more eco-friendly, and regulations have been established by governments to support this. The international emission standard is commonly known as the EURO standard; at the national level there is a decree issued by the Ministry of Environment (KLH). To guarantee that exhaust emission testing is valid, emission measuring instruments are developed in accordance with the appropriate standards mentioned above, and such instruments should also have good precision. This article presents an optimization of the Digital Image Processing (DIP) method used to test the smoke of diesel engines. A previous study showed that DIP can be utilized to measure the smoke opacity level of a diesel engine [1]. This article focuses on optimizing DIP to obtain better measurement results than before; the method is to increase the number of tested image objects taken from video data. II. THEORY FOUNDATION AND RESEARCH METHOD The scientific studies and basic theory applied in this paper concern smoke opacity, types of measuring instruments for smoke opacity, DIP, and several DIP studies related to smoke. A. Smoke Opacity Smoke opacity is a measurement that quantifies the density of smoke particles in the air, which absorb part of a light beam so that the light is obscured by the smoke. Smoke density is measured as a percentage between 0 and 100 percent [2]. This paper addresses the smoke density of diesel engine vehicles. B. Measurement Instrument of Smoke Opacity In general, a smoke opacity measurement is performed using a light beam source and a receptor. As seen in Figure 1, if the light intensity received by the receptor is high, the smoke opacity is low (approaching 0%); when the receptor receives only a small amount of light, the smoke opacity is high (approaching 100%).
C. Digital Image Processing Digital images are electronic snapshots taken of a scene or scanned from documents such as photographs, manuscripts, printed texts, and artwork [4]. Digital Image Processing is the digital processing of a picture/image represented in matrix form. A matrix is an array of numbers arranged into rows and columns; each component of the image matrix contains a number whose value represents the image at that pixel, as shown in Figure 2. D. Previous Study There have been many studies on smoke images. Some report that the smoke color is grey [3]. Other studies found that smoke detection can be performed by accumulating a moving-image model in order to find the smoke characteristics [4]. In addition, a previous study by the authors successfully designed a smoke opacity measuring instrument for diesel engines using the DIP method, although its results were not yet optimal [1]. The scheme for collecting smoke data from a diesel engine vehicle follows the sequence presented in Figure 3. The main data are taken from the sample shell. This space is designed to resemble the one in the OPA 101 smoke opacity meter, with the light source and light sensor replaced by indoor lighting and a digital camera. The camera is used to collect smoke image data; in this article, this is done by recording video data. The data are then extracted into several frames and processed using DIP [1], and finally the readings are averaged. The algorithm of the applied DIP process is as follows (a code sketch of this pipeline is given after Table 1 below): 1. Collect the smoke video recording. 2. Initialize the preliminary (reference) image before the smoke enters. 3. Extract the video data into an image sequence of n images. 4. For each image from the first to the n-th: a. read the DIP value of the image; b. save the % smoke opacity data. 5. Present the graphic. 6. Calculate the average value. III. RESULT AND DISCUSSION Fig. 5. Sample of the video extraction result. The video data were successfully recorded using the developed instrument; the data were then extracted, processed using DIP, and finally averaged. The data obtained from this processing and from executing the algorithm above are presented in Table 1. From the smoke opacity percentage values and the graphic formed by the data, several important points can be noted. First, the developed algorithm was able to read the differences between frames in more detail. Second, the developed algorithm was shown to exhibit an increasing percentage tendency when smoke opacity rose during the engine acceleration process; when the data are plotted, the resulting graphic is shown in Figure 6. However, when compared with the existing data from the previous study [1], the highest percentage during acceleration obtained with this algorithm was far below the values provided by other measurement instruments, which were able to reach smoke opacity values of up to 46.39%. This was because the preliminary image was initialized from the earliest frames, whereas in the previous study the preliminary image was a white image. IV. CONCLUSION The DIP method optimization applied in this paper provides evidence that video-based analysis can improve the reading process of smoke opacity. Acknowledgment. First and most earnest gratitude to Padang State University, particularly to the Unit for Research and Community Service (LP2M, UNP), for granting this study.
TABLE 1. Execution result of the smoke opacity percentage program, in accordance with Figure 5.
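To make the DIP algorithm above concrete, the following is a minimal sketch, not the authors' implementation: it extracts frames from the recorded video with OpenCV, takes the first pre-smoke frame as the preliminary (reference) image, reads a per-frame opacity percentage as the mean loss of brightness relative to that reference, and averages the readings. The file name "smoke.avi" and the intensity-ratio formula are illustrative assumptions; the article does not specify its exact DIP value computation.

```python
# Minimal sketch (not the authors' implementation) of the DIP pipeline described above:
# 1) extract frames from the smoke video, 2) use a pre-smoke frame as the reference image,
# 3) compute a per-frame opacity percentage from the loss of pixel intensity relative to
#    the reference, 4) average the readings.  "smoke.avi" and the intensity-ratio formula
#    are illustrative assumptions.
import cv2
import numpy as np

def extract_frames(video_path, step=1):
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        idx += 1
    cap.release()
    return frames

def opacity_percent(frame, reference):
    # Opacity read as the mean fractional drop in brightness relative to the smoke-free
    # reference image: 0% when the frame equals the reference, approaching 100% when the
    # light is fully obscured.
    ref = reference.astype(np.float64) + 1e-6        # avoid division by zero
    ratio = np.clip(frame.astype(np.float64) / ref, 0.0, 1.0)
    return 100.0 * (1.0 - ratio.mean())

frames = extract_frames("smoke.avi")
reference = frames[0]                                # preliminary image before smoke enters
readings = [opacity_percent(f, reference) for f in frames[1:]]
print("per-frame opacity (%):", [f"{r:.1f}" for r in readings[:10]])
print(f"average opacity: {np.mean(readings):.2f} %")
```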
1,535.8
2018-03-03T00:00:00.000
[ "Computer Science" ]
The fracture toughness of martensite islands in dual-phase DP800 steel. In situ microcantilever bending tests were performed on martensite islands in a dual-phase (DP) steel to extract the fracture toughness of martensite at the microscale and to understand damage initiation during forming of DP steels. All microcantilevers were produced through FIB milling. The martensite islands do not exhibit linear elastic brittle fracture; instead, significant ductile tearing is observed. The conditional fracture initiation toughness extracted by definition and by Pippan's transfer criterion is Ki = 6.5 ± 0.4 MPa m1/2 and Ki,2% = 10.1 ± 0.3 MPa m1/2, respectively. The obtained value is well-represented by the strength-toughness trend of other ferritic steel grades. Considering the yield stress of the same martensite island, we found that crack initiation can occur only in very large martensite islands or in a banded or agglomerated martensite structure. Introduction The class of dual-phase (DP) steels is widely applied in the automotive industry. Continuous yielding behavior, easily adjustable mechanical properties and low alloying contents characterize their attractive features [1][2][3][4]. A main issue still raising large research interest is their damage initiation and evolution mechanisms, which mainly arise from the two phases and their strong mechanical heterogeneity. Martensite islands are believed to be one of the most susceptible damage initiation sites, primarily recognized by post-mortem morphology observation through microscopes [5][6][7]. However, a quantitative assessment of their fracture toughness is still pending. Micromechanical testing has become an important tool to locally investigate mechanical properties by extracting the targeted microconstituents [8,9] with focused ion beam (FIB) milling. This also applies to fracture properties. Most of the previous small-scale fracture mechanical studies focused on brittle materials, in particular thin films or layered structures [10][11][12]. Due to the small sample size, the assumptions of the linear elastic fracture models are often not met and small-scale elastic plastic fracture mechanics (EPFM) needs to be applied [13][14][15]. So far, most materials investigated with small-scale EPFM are model materials (e.g., ultrafine-grained tungsten or ...). Microstructure and chemical composition The DP steel microstructure comprises two phases, namely the matrix ferrite and the dispersed martensite islands, as shown in Fig. 1a. The latter have a much smaller grain (colony) size compared to the former and exhibit irregular shapes (Fig. 1b). Further, the colored inverse pole figure (IPF) of an EBSD (electron backscattered diffraction) mapping clearly illustrates that the martensite islands exhibit a complex substructure with subboundaries called packets, blocks and laths. As expected, they follow the K-S orientation relationship with the prior austenite grain [16]. Theoretically, 24 variants, with six in each of four packets, should be formed inside one prior austenite grain. However, in our case the martensite islands typically consist of one or two packets, which is also common in DP steels [17]. Besides the microstructure, the chemical composition is also characterized. Mn, Cr and Si are predominantly homogeneously distributed in both phases, as can be seen from APT (atom probe tomography) measurements (Fig.
2) of an area containing both martensite and ferrite. The chemical content of ferrite is 1.90 ± 0.14 at.% Mn, 0.74 ± 0.09 at.% Cr and 0.41 ± 0.08 at.% Si. Here, the error bar is defined as the standard error of the mean. Martensite has a comparable content of the three elements, with 2.19 ± 0.20 at.% Mn, 0.97 ± 0.04 at.% Cr and 0.54 ± 0.04 at.% Si. However, a large difference exists for the carbon content. Carbon locates mainly in martensite with 3.77 ± 0.20 at.% while very scarcely in ferrite with only 0.06 ± 0.03 at.% (see Fig. 2a, b). It tends to segregate at defects like dislocations, subboundaries, in particular along the phase boundary. No carbides formation is observed in the martensite of this particular DP800, while it is clearly noted that depending on manufacturer and process this can change considerably. Figure 3 shows a representative microcantilever exhibiting fracture of the martensite island and negligible deformation of the softer ferrite. The force initially shows a linear (and elastic) increase, pronounced plasticity and subsequently the force decreases with displacement (Fig. 3a). The snapshots in Fig. 3c obtained from in situ SEM imaging are labeled in the load displacement curve in Fig. 3a. The FIB-notch gradually grows to a natural crack exhibiting extensive crack blunting (see Fig. 3c.5). Hence, the observed fracture behavior is stable with pronounced ductility near the crack tip. This is consistent with macroscopic observations of lath martensite fracture, which exhibits brittle transgranular cleavage behavior only at low temperature, while it shows a typical dimple ductile fractography at room temperature [18][19][20]. Massive plastic deformation is evident, for instance by slip trace aligned approximately 45° to the horizontal direction near the crack tip. In some cases, the plastic deformation of the softer ferrite cannot be neglected anymore. Then, the force-displacement curve ( Fig. 4) does not show a drop and the unloading stiffness generally remains constant. The notch-tip is blunting but no crack extension is visible in the SEM. Also, a significant amount of plasticity is observed in ferrite close to the clamping end ( Fig. 4c, arrow). Fracture properties After carefully screening for ferrite plasticity we discard 25 out of 30 samples because of negligible crack growth but extensive ferrite plasticity, i.e., only in 5 out of 30 cantilevers are further analyzed to assess the fracture toughness of martensite. The crack extension is measured from in situ snapshots and plotted versus displacement (see Fig. 5a). The red points are the measured crack length at the end of the unloading sequence. The crack extension is fitted by a polynomial fit (black solid line). The crack length remains almost constant to a displacement of ~ 1 µm, as indicated by the arrow in Fig. 5a. Based on the load-displacement data and the crack extension curve, the J-integral was obtained following Eqs. (4)-(6) in "Experimental procedure" section (see crack resistance curve in Fig. 5b). The R-curve is used to extract the crack initiation toughness J i (Fig. 5b)-which is, by definition, the transition point from crack blunting to crack growth stage and known to be less geometry dependent than the subsequent R-curve [21]. However, an unambiguous identification of the initiation toughness is in most cases not possible. Therefore, we additionally use Pippan's transfer criterion of the 0.02 W blunting line offset to determine the crack initiation toughness as J i,2% [22,23]. 
This transfer criterion can further minimize the influence of polynomial fit degree applied for the crack length versus displacement (see Fig. 6). For instance, the J i,2% determined by 0.02 W transfer criterion equals 359 N/m for a polynomial fit of degree 2 (used in analysis) and 361 N/m for degree 3, respectively. By contrast, the J i through intersection of fitting line with initial crack length is affected much more, comparing 147 N/m for polynomial fit of degree 2 and 114 N/m for degree 3. We summarized all J-R curves of the five successfully tested beams in Fig. 7. To a large extent, they coincide with each other. In particular, the crack initiation seems to appear at a similar value for all five cantilevers, and the curve deviates during subsequent crack growth (see Fig. 7a, b). Finally, for comparison, we converted the J-integral to the stress intensity K using Eq. (1). Table 1 summarizes all five beams including both geometrical dimensions and fracture properties. Note that the a/W ratio for our samples is mainly between 0.2 and 0.3, smaller than 0.4-0.5 proposed in ASTM 1820. There are two reasons choosing a smaller a/W ratio. First, due to the FIB milling technique, a certain limited aspect ratio of milling depth to milling width can be achieved. Already well before this maximum aspect ratio one deviates from a sharp notch. We decided to sacrifice the a/W ratio in order to get sharp notches [15]. Second, the small martensite islands do not allow extensive crack. We try to keep the initial notch small in order to see crack blunting and stable crack growth. It is evident that the crack initiation toughness neither by K i nor by Pippan's transfer criterion varies significantly among the five beams. During the remainder of this paper, if not specifically pointed, we discuss and compare only K i,2% , as the discussion would be identically for K i . On average, the crack initiation toughness of martensite island is J i,2% = 423 ± 22 J/m 2 and K i,2% = 10.1 ± 0.3 MPa m 1/2 . Discussion Did we obtain a geometry-independent plane-strain fracture toughness? Within this work, we aimed for the fracture toughness of martensite as material property, i.e., as geometry-independent plane-strain critical stress intensity factor K IC . Macroscopically, stringent requirements are listed both in E399 and in E1820 [21,24,25] to ensure a plane-strain state. For instance, a high-triaxiality region should be considerably larger than the plastic zone size and the ductile tearing section at the two beam edges. The former is mainly guaranteed by the beam thickness, while the latter by beam width according to the definition of our work. To assure plane-strain conditions, a critical sample dimension D EPFM (Eq. 2) needs to be present, also for the micron scale [15,26]. where J Ic is the critical J-integral for Mode I fracture and σ y is the yield strength of the tested material. If we consider the obtained J-integral (423.0 ± 22.1 J/m 2 ) and the yield strength of martensite islands in our DP800 steel (2880 ± 49 MPa) [27], the critical sample dimension D EPFM ranges from 1.4 to 6.8 µm that sets the lower limit of the sample thickness W and width B. As shown in Table 1, this condition is not fulfilled. What is obtained here can be rather considered as conditional fracture toughness for this dimension. 
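The quoted numbers can be cross-checked with a short calculation. The sketch below assumes the standard plane-strain conversion K = sqrt(J·E/(1 − ν²)) with generic elastic constants for steel (E ≈ 210 GPa, ν ≈ 0.3; the article does not list the values it used), and evaluates a plane-strain size requirement of the form D_EPFM = M·J/σ_y with the prefactor M left as a parameter, since different conventions use factors between roughly 10 and 50; the article's quoted 1.4-6.8 µm range corresponds to prefactors in roughly that span.

```python
# Minimal sketch: convert the measured crack-initiation J-integral to a stress intensity
# via the plane-strain relation K = sqrt(J * E / (1 - nu^2)), and evaluate a plane-strain
# size requirement of the form D = M * J / sigma_y.  E and nu are generic values for steel
# (assumptions, not quoted in the article); M is left as a parameter because different
# standards use prefactors between ~10 and ~50.
import numpy as np

E     = 210e9        # Young's modulus of steel [Pa] (assumption)
nu    = 0.3          # Poisson's ratio (assumption)
sig_y = 2880e6       # yield strength of the martensite island [Pa], from the article
J_i2  = 423.0        # J_i,2% [J/m^2], from the article

K_i2 = np.sqrt(J_i2 * E / (1.0 - nu**2))
print(f"K_i,2% = {K_i2/1e6:.1f} MPa m^1/2")   # ~9.9 with these generic constants; the article reports 10.1

for M in (10, 25, 50):
    D = M * J_i2 / sig_y
    print(f"M = {M:2d}:  required sample dimension D = {D*1e6:.2f} um")
```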
Unfortunately, due to the limited martensite island size and the considerably large fracture toughness of martensite, one cannot obtain a geometry-independent fracture toughness under plane-strain conditions in this DP800 steel grade. Still, the results obtained here could be used as an input parameter for modeling damage initiation [28]. Although the conditional fracture initiation toughness of the five beams is consistent from one to another, the crack resistance curves deviate with further crack extension. One possible explanation could be the slightly different a/W ratio of the investigated cantilevers. The topmost curve in Fig. 7a was measured on the cantilever with a high a/W ratio, in other words, a shorter left ligament. While crack initiation is less influenced by the ligament length, the crack resistance curve depends strongly on the initial crack depth in macroscopic investigations [29,30]. However, this trend was recently not observed at the micrometer length scale [15], where shorter ligaments lead to higher crack resistance. Hence, the more likely explanation for the variation in crack resistance curves is the variation of microstructure among the 5 tested beams: neither the number of probed variants nor the orientation of the martensite island is identical for all the samples. While the influence of the local microstructure seems to be negligible for crack initiation, crack growth is obviously significantly influenced by the hierarchical microstructure of the martensite. Another factor that might lead to the deviation of the crack growth resistance curves is the roughness of the crack front due to the heterogeneous microstructure in the martensite island, as shown in Fig. 8. The data were obtained by FIB serial sectioning of the tested cantilever. In Fig. 8, the crack length exhibits a minimum of 717 nm and a maximum of 974 nm. In the in situ SEM micrographs, the crack length was measured at the front face of the cantilevers and is 840 nm. It is expected and well-known from the literature [15] that the variation of the crack length at crack initiation is smaller compared to the region showing pronounced crack growth (Fig. 8). Hence, the crack front roughness has a larger influence on the R-curve behavior than on the crack initiation toughness value. Comparison with other Fe-based materials The obtained fracture toughness of DP800 martensite islands is substantially lower than that of tested bulk martensite (which can reach tens of MPa m 1/2) with a similar carbon content but a much larger substructure size [20,31]. Recently, the toughness of different steels at the micron scale, including white etching layers (which might be similar to martensite in terms of carbon supersaturation, but not in terms of microstructure), was correlated with the hardness via the empirical equation K_Q = 10^4/HV, where HV is the Vickers hardness [23]. The Vickers hardness of martensite was statistically reported to follow HV = 0.4(σ_Y − 100) [32]. In this empirical way, the estimated fracture toughness K_IQ is 8.8-9.2 MPa m 1/2, which is close to our experimental result of 10.1 ± 0.3 MPa m 1/2. Hence, the martensite in DP800 follows the expected trend for steels. Unraveling the reason for the observed toughness is more complicated and only a brief speculation is presented here. The carbon content of martensite and its distribution plays a critical role in the fracture toughness of Fe-based alloys [33,34].
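The empirical estimate quoted above can be verified in a few lines, reading the relation as K_Q = 10^4/HV with HV computed from the yield strength and propagating the reported ±49 MPa scatter of the martensite yield strength; this reproduces the 8.8-9.2 MPa m 1/2 range.

```python
# Quick check of the empirical hardness-based toughness estimate quoted above, reading the
# relation as K_Q = 1e4 / HV with HV = 0.4 * (sigma_Y - 100) (sigma_Y in MPa).  The range
# follows from the reported +/- 49 MPa scatter of the martensite yield strength.
sigma_Y = 2880.0          # MPa, compressive yield strength of the martensite island
d_sigma = 49.0            # MPa, reported scatter

for s in (sigma_Y - d_sigma, sigma_Y, sigma_Y + d_sigma):
    HV = 0.4 * (s - 100.0)            # Vickers hardness estimate
    K_Q = 1.0e4 / HV                  # empirical toughness estimate [MPa m^1/2]
    print(f"sigma_Y = {s:6.0f} MPa  ->  HV = {HV:6.0f}  ->  K_Q = {K_Q:.2f} MPa m^1/2")
```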
Supersaturated carbon has an adverse effect on fracture toughness, such as in severely deformed pearlite. The lamellar shape cementite in pearlitic steels is dissolved into the ferrite matrix upon severe plastic deformation (e.g., wire drawing), reducing strain hardening ability of soft ferrite. This partly results in an inferior fracture toughness [33,34]. The nominal carbon content in our material (0.13 wt%) is much lower than that in the pearlitic rail steel (0.72 wt%) where WELs are formed [23,35]. However, in our case almost the entire carbon is present in the martensite (see Fig. 2) with an average amount similar to carbon content of 3 at.% present in WELs [35]. Besides, both show a heterogeneous distribution of carbon segregating at defects like dislocations and boundaries. No obvious carbides are formed in both cases, which are believed to play a critical role in microcrack or microvoid initiation and deteriorate fracture toughness [18]. Hence, carbon should not be the key reason inducing more brittleness of martensite islands compared with WELs. Another important factor is the grain size of the two microstructures. For lath martensite, containing abundant substructures as in our case, block boundaries act as the most efficient obstacles for dislocation motion [36,37]. The block size is in the order of 100 nm in our sample, substantially finer than the one in the literature [20,35], which strongly impedes dislocation motion in martensite islands and deteriorates ductility. In addition, grains of martensite in WELs exhibit almost equiaxed morphology, while martensite islands in dual-phase steels have a hierarchical structure with lath, blocks and packets arranging themselves complying with orientation of the prior austenite grain. This kind of structural ordering might be very detrimental for toughness. A significantly lower fracture toughness (5 MPa m 1/2 ) of nanostructured pearlitic steels was reported when the loading direction is in parallel with lamellar microstructures. By contrast, under a perpendicular loading, fracture toughness up to 40 MPa m 1/2 was found [38]. Damage initiation at martensite islands in DP steels The low toughness of martensite islands, as quantitatively proven by the micro cantilever bending test, is responsible for the crack initiation in martensite. This was also shown by in situ macroscopic tensile testing on the same DP steel combined with machine learning to statistically identify the main damage initiation sites [5]. Martensite cracking at lower strain was also found by Calcagnotto et al. [3], where a coarse-grained DP steel grade showed cleavage fractography. Based on the measured initiation fracture toughness value, we can estimate the critical defect size for crack initiation according to Eq. (3) [21]. Here, Y is dimensionless geometrical factor, varying with different geometries and σ y is the yield strength 2880 MPa. Assuming that we have a penny shape crack with Y as 2 π , the estimated critical defect size is approximate 4 µm. Note that for a more conservative value, we take the smaller K i instead of K i,2% to calculate. As the critical defect size is larger than the mean martensite island size, it is suggested that most isolated martensite islands would rather deform plastically than initiate a crack. However, large martensite islands or a banded martensite structure are sufficiently large to show crack initiation. 
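The critical defect size estimate can be reproduced assuming the standard LEFM form K = Y·σ·sqrt(π·a), i.e., a_c = (1/π)·(K_i/(Y·σ_y))²; the display form of the article's Eq. (3) is not reproduced in this text, so this relation is an assumption, but it is consistent with the quoted penny-crack geometry factor Y = 2/π and the ~4 µm result.

```python
# Minimal check of the critical defect size estimate, assuming the standard LEFM relation
# K = Y * sigma * sqrt(pi * a)  =>  a_c = (1/pi) * (K_i / (Y * sigma_y))**2
# with Y = 2/pi for a penny-shaped crack, as stated in the text.
import numpy as np

K_i     = 6.5e6          # Pa m^1/2, conservative crack-initiation toughness (K_i, not K_i,2%)
sigma_y = 2880e6         # Pa, martensite yield strength
Y       = 2.0 / np.pi    # geometry factor for a penny-shaped crack

a_c = (1.0 / np.pi) * (K_i / (Y * sigma_y))**2
print(f"critical defect size a_c = {a_c*1e6:.1f} um")   # ~4 um, larger than the mean island size
```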
This observation is in agreement with [39,40], showing that crack initiation is preferably found at martensite bands or closely agglomerating martensite regions. Having said that, it is clear that a damage tolerance of DP steels can only be obtained by avoiding a banded microstructure. Conclusions We investigated the fracture behavior of martensite islands in DP800 steel and can conclude the following: • Our martensite islands have a hierarchical substructure. Most of the carbon is located at substructure boundaries. • Due to the small martensite, we often provoke plastic deformation of surrounding ferrite, which renders the measurement of crack initiation and growth challenging. Metallography preparation The material used in this work is a dual-phase steel DP800 with an ultimate tensile strength 800 MPa, which is a low-carbon steel with a few alloying elements. The chemical composition is Fe-0.13C-1.69Mn-0.19Si-0.72Cr (in wt%). The initial sheet was first cut into 8 × 5 × 1.5 mm 3 sized pieces. The 5 × 1.5 mm 2 sized cross section was grinded and polished by oxide polishing suspension (OPS), to prepare for the consequent microcantilever production by FIB milling, APT and EBSD. In order to reduce the FIB milling time, we targeted martensite islands which were located at the very edge of a polished surface. This required the preparation of another surface until 4000#, aiming at minimizing the roughness at the sample edge. Besides, for microstructural investigations one additional sample was polished and subsequently etched by 1% Nital solution for 5 s. APT analysis APT measurements were conducted to assess the elemental distribution in the martensite islands-particularly the C concentration and location-and further assist in understanding the fracture behavior. The targeted feature was lifted out onto a silicon coupon and sharpened through FIB milling (FEI Helios NanoLab 600TM) until a needle-like sharp tip was obtained. Besides, the tip was cleaned with 5 kV and a current of 15 pA to minimize the contamination of Ga + ions. Consecutively, the sharpened tip was excited atom by atom on a CAMECA instrument LEAP™ 5000XR using the voltage mode. The operation parameters were set as follows: the base temperature was 60 K, the detection rate 0.5%, pulse fraction 20% and pulse rate 250 kHz. Finally, a reconstruction of a three-dimensional sample tip was performed with the software package IVAS®. Microcantilever production and bending test The microcantilevers were produced using a Zeiss Auriga® Dual beam FIB, for which the targeted feature (the martensite island) needs to be located. From the top view of the sample edge, it can be identified through topographical contrast caused by light etching effect of OPS polishing under secondary electron detector (SE). Consecutively, a small area was FIB cut carefully using a fine current (120 pA) at the boundary of a presumably large martensite island (as imaged at the surface) to determine the three-dimensional size of the island. Only martensite islands larger than 1 µm in depth are further milled by FIB. Subsequent coarse (16 nA), intermediate (2 nA) and fine milling (240 pA) steps at 30 keV ion energy were used to finish the cantilever shape as shown in Fig. 9a. Finally, a through thickness notch was milled using a current of 15 pA. Two aspects motivate the specific geometry compared to standard cantilever beam geometry: One is the very limited martensite island size. 
The other is to prevent plastic deformation in the remarkably softer ferrite (compare 2880 ± 49 MPa compressive yield strength for martensite and 147 ± 6 MPa CRSS for ferrite [27]). This is required to link the force-displacement curve directly to processes during crack initiation and growth at the harder martensite without being obstructed by ferrite plasticity. The neck area ensures, to the largest extent, a full martensite microstructure in the highly stressed gauge section, while the ferrite suffers considerable low stresses due to the increased sample thickness. In Fig. 9a, M denotes martensite while F ferrite. L is the length of the beam, from the notch to the loading point. W is the thickness, a 0 the initial crack length and B the cantilever width. The aspect ratios are kept constant at W:B:a 0 :L = 1:1:0.2:5 with a nominal cantilever width of B = 1 µm. The in situ fracture tests were performed in a Zeiss Gemini 500 field emission scanning electron microscope (SEM) equipped with a Hysitron indenter system. A wedge-shaped indenter is used to ensure a line-contact. It is made of tungsten carbide. We have conducted the bending test in a displacementcontrolled mode with a displacement rate of 5 nm/s. Loading and unloading segments were applied for the convenience of measuring crack growth through SEM snapshots, for which the stage tilt angle was always corrected. The snapshots during unloading segments were used to measure the crack length a according to the definition provided schematically in Fig. 9b. Analysis of fracture toughness We apply EPFM to analyze the fracture toughness of martensite islands, because we expect a plastic zone size in the order of 3 µm following Irwin's model and considering a reference fracture toughness of white etching layer with martensite structure [21,23,26]. Hence, both the elastic and plastic contributions are taken into account (see Eq. (4) [25]): where J (i) is the J-integral of the cracked specimen upon the ith loading sequence that comprises the elastic energy and the dissipated plastic energy. K IQ(i) is the conditional stress intensity factor calculated based on the linear elastic fracture as expressed by Eq. (5): F Q(i) is the ith loading force and L, B, W are the geometrical dimensions of the tested samples as clarified in "Microcantilever production and bending test" section. Besides, f a W represents the dimensionless geometrical factor with an expression in Eq. (6) [11]: In the plastic part, η is a constant normally taken as 2. A pl(i) is the area underneath the load versus displacement curve until the ith loading, representing the integrated plastic work. Details can be found in [25]. project B03 "Understanding the damage initiation at microstructural scale". Funding Open Access funding enabled and organized by Projekt DEAL.. Data availability The raw data used in this work can be made available upon request. Declarations Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
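Returning to the "Analysis of fracture toughness" subsection above: the display forms of Eqs. (4)-(6) are not reproduced in this text, but the description matches the usual micro-cantilever EPFM recipe of an elastic contribution from the linear-elastic stress intensity of the notched beam plus a plastic contribution proportional to the accumulated plastic work. The sketch below assumes those standard forms and leaves the dimensionless geometry factor f(a/W) as a user-supplied function, since the published expression from Ref. [11] is not given here; the numerical values in the example call are made up purely for illustration.

```python
# Sketch of a per-loading-sequence J evaluation for a notched microcantilever, assuming the
# standard split J = J_el + J_pl with
#   K_Q(i)  = F_Q(i) * L / (B * W**1.5) * f(a_i / W)     (linear-elastic part)
#   J_el(i) = K_Q(i)**2 * (1 - nu**2) / E
#   J_pl(i) = eta * A_pl(i) / (B * (W - a0))             (eta ~ 2, A_pl = plastic work)
# The geometry factor f(a/W) must be supplied from the reference used in the article; the
# one below is a placeholder argument, not the published polynomial.
import numpy as np

def j_integral(F, a, A_pl, L, B, W, a0, f_aW, E=210e9, nu=0.3, eta=2.0):
    """J for one loading sequence.

    F    : peak force of the sequence [N]
    a    : current crack length [m]
    A_pl : plastic area under the load-displacement curve up to this sequence [J]
    f_aW : callable returning the dimensionless geometry factor f(a/W)
    """
    K = F * L / (B * W**1.5) * f_aW(a / W)       # conditional stress intensity [Pa m^1/2]
    J_el = K**2 * (1.0 - nu**2) / E
    J_pl = eta * A_pl / (B * (W - a0))
    return J_el + J_pl

# Illustrative call with made-up numbers and a placeholder geometry factor:
f_aW = lambda r: 1.0                              # placeholder; use the published expression
J = j_integral(F=1.0e-3, a=0.25e-6, A_pl=1.0e-10,
               L=5.0e-6, B=1.0e-6, W=1.0e-6, a0=0.2e-6, f_aW=f_aW)
print(f"J = {J:.1f} N/m")
```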
5,448.8
2021-03-17T00:00:00.000
[ "Materials Science" ]
Analytic Approximations for the Primordial Power Spectrum with Israel Junction Conditions This work compares cosmological matching conditions used in approximating generic pre-inflationary phases of the universe. We show that the joining conditions for primordial scalar perturbations assumed by Contaldi et al. are inconsistent with the physically motivated Israel junction conditions, however; performing general relativistic matching with the aforementioned constraints results in unrealistic primordial power spectra. Eliminating the need for ambiguous matching, we look at an alternative semi-analytic model for producing the primordial power spectrum allowing for finite duration cosmological phase transitions. I. INTRODUCTION The standard model of cosmology consists of a Universe filled with cold dark matter and a cosmological constant known as ΛCDM [2][3][4].Inflation is a period of exponential expansion in the very early Universe which is an additional ingredient to the current paradigm that solves several issues of standard big bang cosmology such as the cosmological horizon, flatness, and monopole problems [5][6][7].Most notably, inflation provides a causal theory of structure formation whereby quantum fluctuations deep inside the comoving horizon grow to macroscopic scales with the accelerated expansion of the Universe [8,9].The primordial power spectrum provides a statistical measure of these scalar fluctuations and is found to be nearly scale invariant by current observations [3,4]. We consider a cosmological scenario in which the Universe evolves from the initial singularity into a noninflating state, termed kinetic dominance, where the potential energy of the inflaton is exceeded by its kinetic energy [1,[10][11][12][13].Generating a power spectrum of scalar primordial perturbations generally requires numerical solutions to the equations describing the background evolution of the Universe which in turn demands a choice of the functional form of the inflationary potential.The ability to produce primordial power spectra which do not require a selection of inflationary potential is useful in that it allows for generic analyses of the early Universe [14][15][16].Contaldi et al. [1] present a model of this form wherein the background Universe is approximated by an instantaneous transition between a primordial phase of kinetic dominance and inflation. 
Our focus will be on formulating physically acceptable matching conditions which join scalar perturbations across cosmological phase transitions defined by a jump in the equation of state of the scalar field.We are concerned both with the primordial power spectrum for the cosmological scenario of interest and coming to general conclusions as to the effects of instantaneous phase transition on the primordial power spectrum.A theory on the propagation of primordial perturbations through a cosmological transition is present in the literature with application to three scenarios.These are, transitions between inflation and a slow-roll violating phase [17][18][19][20][21][22][23][24][25], the change from contraction to expansion in an inflationary alternative known as a bouncing universe [26][27][28][29][30][31], and finally, to evolve the primordial power spectrum to current observations, the transition between inflation and reheating is considered [32][33][34][35][36].These references provide a starting point for the novel analysis contained in this work which applies Israel junction conditions to the matching of primordial scalar perturbations in the Contaldi approximation. We subsequently introduce an alternative model which smoothly joins the analytic scaling of the comoving horizon for a phase of kinetic dominance preceding inflation, which can be used to generate the primordial power spectrum from finite duration cosmological phase transitions.Power spectra produced from arbitrarily sudden cosmological phase transitions will prove fruitful in comparing to those arising from instantaneous transitions in the Contaldi approximation.Although this method does not demand a choice in functional form of the inflationary potential, the Hamilton-Jacobi formalism presents the opportunity for phenomenological study.This paper is organized as follows.Section II details theoretical background and establishes notation for gauge invariant variables and the primordial power spectrum.In Sec.III the Contaldi approximation is introduced as a potential independent method for producing an analytic primordial power spectrum.Sec.IV proposes the use of Israel junction conditions to derive cosmological matching conditions for primordial scalar perturbations.In Sec.V the primordial power spectra produced from applying cosmological matching conditions to the Contaldi approximation are shown.Sec.VI presents an alternative model for generating the primordial power spectrum from a smooth analytic background.Conclusions and directions for future work are presented in Sec.VII. II. BACKGROUND Cosmic time derivatives, dt, will be represented by overdots and conformal time derivatives, dτ , by primes unless otherwise specified.As well V ,ϕ = dV dϕ and partial derivatives are denoted by commas.All equations are given in natural units such that c = ℏ = 8πG = 1.We work in the case of a flat universe where the curvature of background space is K = 0.The metric signature used is (+, −, −, −). The background theory developed in this section uses Refs.[8,9] unless otherwise stated. A. 
Single-field inflation The simplest models of inflation involve a single scalar field, ϕ, known as the inflaton, whose self-interactions are characterized by the inflationary potential, V (ϕ).The action is composed of the summation of the Einstein-Hilbert action and the action of a scalar field with a canonical kinetic term, where g µν is the metric and R is the Ricci scalar.Under the assumptions of the cosmological principle of homogeneity and isotropy the Friedmann-Robertson-Walker (FRW) metric is utilized, Using the stress-energy tensor, T µν , for a perfect fluid in thermodynamic equilibrium and applying the FRW metric to the Einstein field equations, the Fridemann and Klein-Gordon equations can be obtained which comprise the background expressions governing the dynamics of the geometry and evolution of the scalar field.These are Eqs.( 3)-(4) and Eq. ( 5), respectively, where a is the scale factor and H = ȧ a is the Hubble parameter.Specifying initial conditions on ϕ, φ and knowing the form of the scalar field potential, Eqs. ( 3) and ( 5) can be solved to fully specify the evolution of a flat universe. B. Mukhanov-Sasaki equation The early Universe was very nearly homogeneous; therefore, it is sufficient to consider linear perturbations of the scalar field about its homogeneous background, and linear perturbations of the metric about its background, In real space, the scalar vector tensor (SVT) decomposition of the metric perturbations is In the case of linear perturbations, scalar, vector, and tensor components do not dynamically mix, and hence, we can neglect vector and tensor perturbations in the following derivations.Threading and slicing of perturbed spacetime is not unique, and thus, it is useful to define a gauge invariant combination of the scalar type metric and scalar field perturbations to ensure fluctuations cannot be removed by a coordinate transformation.The comoving curvature perturbation, R, is defined as which can be geometrically interpreted as a measure of the spatial curvature of comoving or constant scalar field value (δϕ = 0) hypersurfaces.Using Eq. ( 6) for the perturbed scalar field and Eq. ( 9) for the gauge invariant comoving curvature perturbation, the action to second-order in R is where the Mukhanov variable, v, is assigned as specifying z as the following function of cosmic time: Changing to conformal time, τ = dt a , gives Taking the Fourier transform in spatial coordinates followed by extremizing the resulting action gives the Mukhanov-Sasaki (MS) equation in terms of the Mukhanov variable and derivatives with respect to conformal time, This equation in the form of a simple harmonic oscillator with time dependent mass, z ′′ z , describes the evolution of comoving curvature perturbations with comoving wave number k. Solutions to the MS equation in general require numeric integration of the coupled background expressions (3) and ( 5) in order to determine the evolution of the scale factor and Hubble parameter which control the behavior of z ′′ z . C. Primordial power spectrum The primordial power spectrum is the Fourier transform of the two-point function of comoving curvature perturbations.This is Recalling Eq. ( 11) for the Mukhanov variable in terms of the comoving curvature perturbation, the dimensionless power spectrum can be written as where the limit k ≪ aH indicates that modes are evaluated upon exiting the comoving horizon and freezing-out. 
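As a concrete illustration of Eqs. (14) and (16), the following minimal sketch (not code from the article) integrates the MS equation for a single mode in an exact de Sitter background, where z''/z = a''/a = 2/τ², starting from Bunch-Davies initial conditions deep inside the horizon, and evaluates the dimensionless spectrum once k ≪ aH; the values of H and ε are illustrative assumptions and only set the overall normalisation H²/(8π²ε).

```python
# Minimal sketch: integrate the Mukhanov-Sasaki equation v'' + (k^2 - z''/z) v = 0
# in a pure de Sitter background (z''/z = a''/a = 2/tau^2, a = -1/(H tau), tau < 0),
# starting each mode from Bunch-Davies initial conditions deep inside the horizon, and
# evaluate the dimensionless spectrum P_R = k^3/(2 pi^2) |v/z|^2 once k << aH.
# H_inf and eps are illustrative values, not taken from the article.
import numpy as np
from scipy.integrate import solve_ivp

H_inf = 1.0e-5   # Hubble rate during inflation (arbitrary units, assumption)
eps   = 0.0127   # slow-roll parameter, used only to normalise z = a*sqrt(2*eps)

def z_pp_over_z(tau):
    return 2.0 / tau**2                      # de Sitter: a''/a = 2/tau^2

def rhs(tau, y, k):
    # y = [Re v, Im v, Re v', Im v']
    vr, vi, dvr, dvi = y
    w2 = k**2 - z_pp_over_z(tau)
    return [dvr, dvi, -w2 * vr, -w2 * vi]

def spectrum(k, n_start=100.0, end_factor=1e-2):
    tau_i = -n_start / k                     # deep inside the horizon, |k tau| >> 1
    tau_f = -end_factor / k                  # well outside the horizon, |k tau| << 1
    # Bunch-Davies vacuum: v = e^{-ik tau}/sqrt(2k), v' = -ik v
    v0  = np.exp(-1j * k * tau_i) / np.sqrt(2.0 * k)
    dv0 = -1j * k * v0
    y0 = [v0.real, v0.imag, dv0.real, dv0.imag]
    sol = solve_ivp(rhs, (tau_i, tau_f), y0, args=(k,), rtol=1e-8, atol=1e-10)
    vr, vi = sol.y[0, -1], sol.y[1, -1]
    a = -1.0 / (H_inf * tau_f)
    z = a * np.sqrt(2.0 * eps)
    return k**3 / (2.0 * np.pi**2) * (vr**2 + vi**2) / z**2

for k in np.logspace(-1, 1, 5):
    # In exact de Sitter the result should be scale invariant, ~ H^2/(8 pi^2 eps)
    print(f"k = {k:8.3f}   P_R = {spectrum(k):.3e}   "
          f"(H^2/(8 pi^2 eps) = {H_inf**2 / (8 * np.pi**2 * eps):.3e})")
```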
In the six-parameter ΛCDM cosmology the primordial power spectrum is parametrized by n_s, the scalar spectral index, and A_s, the amplitude of fluctuations, through the following power law: P_R(k) = A_s (k/k_*)^(n_s − 1). Here, k_* is an arbitrary reference scale referred to as the pivot scale, which sets the location of the cutoff in the power spectrum [4]. III. CONTALDI APPROXIMATION As previously mentioned, the MS equation defined by (14) has in general no analytic solutions; however, analytic primordial mode functions can be obtained in a number of special cases, such as when the approximation z''/z ≈ a''/a holds [1]. The first slow-roll parameter, ε, in terms of the scalar field and cosmic time derivatives is ε = ϕ̇²/(2H²) [8]. Using the above expression and recalling Eq. (12), we can equivalently define z as z = a√(2ε), working with the positive root corresponding to the choice that ϕ̇ > 0. From Eq. (19), it is clear that z ∝ a and thus z''/z ≈ a''/a when ε is constant in time. When these conditions hold, the following expression may be taken as an approximation for the perturbation evolution equation: v_k'' + (k² − a''/a)v_k = 0. This MS equation approximation has analytic solutions during kinetic dominance and inflation, which will be used to define the analytic primordial power spectrum in the Contaldi approximation [1]. Figure 1 shows the analytic evolution of the background in the Contaldi approximation, with a comoving horizon that transitions instantaneously between an era of kinetic dominance and de Sitter inflation. We set, for mathematical convenience, the transition to be at τ = 0. The comoving horizon is matched at the transition, taking the value of 1/k_t ≡ 1/(a_t H_t), as a and H are required to be matched in this model [1]. The Contaldi approximation demands a jump in the first slow-roll parameter, ε, and equally, a discontinuity in the equation of state of the scalar field, w_ϕ. A. Kinetic dominance We refer to a slow-roll violating phase obeying ϕ̇² ≫ V(ϕ) as kinetic dominance [1,[10][11][12][13]. The motivation for including this preinflationary phase follows from the original construction by Contaldi et al. [1] so as to provide an early-Universe mechanism for suppression of the CMB power spectrum at low multipoles, ℓ, as compared to that predicted by ΛCDM [3,37]. That is, reduction in power at large scales is introduced via the primordial spectrum with a period of kinetic dominance. FIG. 1. Instantaneous transition in the comoving horizon between a period of kinetic dominance and de Sitter inflation as used in the Contaldi approximation. We have set a = 0 at the Planck epoch; however, the convention that a = 1 at the present epoch is not used but instead denotes the time of the instantaneous phase transition. Based on Fig. 3 in [14]. During such an epoch the scale factor evolves as a ∝ t^(1/3) [1]. This implies that the comoving horizon scales as 1/(aH) ∝ a². Rearranging and changing to conformal time, one can obtain the scale factor as a function of conformal time. Solving the MS equation approximation defined by (20) using the scale factor given above results in the primordial mode function during kinetic dominance, where A_k and B_k are coefficients of integration representing the nonuniqueness of the primordial mode functions, and H_0^(1), H_0^(2) denote Hankel functions of the first and second kind of order zero. B.
de Sitter inflation de Sitter inflation is a regime defined by a constant Hubble parameter, Ḣ = 0 [8], which immediately gives The comoving horizon then scales as Rearranging and changing to conformal time, during de Sitter inflation the scale factor can be expressed as Again solving the MS equation approximation expressed by (20) using the scale factor defined by Eq. ( 28), the primordial mode equation during de Sitter inflation is where C k and D k are coefficients of integration. C. Analytic primordial power spectrum The primordial power spectrum is formed during the inflationary epoch when comoving curvature perturbations exit the comoving horizon and cease to evolve.The analytic primordial power spectrum in the Contaldi approximation can be derived from the dimensionless primordial power spectrum defined by Eq. ( 16) with use of the analytic functions for v and z during de Sitter inflation.The power spectrum becomes where the condition for the modes to be superhorizon, k ≪ aH, is implemented by determining the value of τ at late times when all relevant modes are far outside the horizon [1,14].From Eq. ( 28), for the scale factor in terms of conformal time during de Sitter inflation, the condition for a → ∞ corresponds to τ → 1 kt .Taking this limit in Eq. (30), the resulting analytic primordial power spectrum is expressed in terms of constants of integration for the primordial mode functions during de Sitter inflation which will be shown to depend on those during the kinetic dominance era.This is TABLE I. Definitions of quantum vacuum perturbation mode initial conditions for Bunch-Davies vacuum (BD), Hamiltonian diagonalization (HD), renormalized stress energy tensor (RSET), and right-handed mode (RHM) [14]. where ε I is the first slow-roll parameter during inflation.ε I should be set to zero to be consistent with the solution to the MS equation solved using approximations valid during a period of pure de Sitter inflation.Although, this would make the power spectrum divergent and in the case that ε ≪ 1, H ≈ const., Eqs. ( 27)-( 28) describing the background for de Sitter inflation are still approximately true thus the perturbation mode functions defined by Eq. ( 29) may be used as a valid approximation [8].In consequence, the Contaldi approximation will evaluate the power spectrum where choice of ε I affects the amplitude of the power spectrum but not the scale dependence.The amplitude of the power spectrum can be absorbed into the parameter A s = k 2 t 4π 2 εI , with k t = k * .In order to obtain an expression for the analytic primordial power spectrum defined by Eq. 
( 31), the functions A k , B k , C k , and D k must be determined.The coefficients of integration for the kinetic dominance mode functions, A k and B k , are solved by setting v and v ′ to quantum vacuum initial conditions such as those in Table 1 [1,14].Initial vacuum states are generally set far back in the inflationary epoch when all relevant modes are subhorizon.If scales have initial conditions set at a time when they are not sufficiently deep within the comoving horizon, the choice of quantum vacuum may generate observationally distinguishable primordial power spectra [14,38].Introducing a preinflationary phase of kinetic dominance such a consideration becomes important and lends itself to the decision of setting perturbation mode initial conditions at the time in which the comoving horizon is at a maximum using the equations for the kinetic dominance regime.A detailed treatment of observational consequences of choice of initial conditions is emphasized in the work of Gessey-Jones and Handley [14]. To obtain the coefficients of integration of the primordial mode equations during the phase of de Sitter inflation, the scalar perturbations must be matched to the kinetic dominance era.These are fixed in the Contaldi approximation by imposing continuity of v and v ′ across the transition in regimes [1].The coefficients of integration C k and D k are then determined by equating the expression for v and v ′ in each era at the time of the transition.The absence of theoretical justification for propagating FIG. 2. Analytic primordial power spectra generated from the Contaldi approximation for BD, RHM, HD, and RSET vacuum initial conditions.εI is set to 0.0127 in order to normalize the power spectrum to 1. Based on Fig. 4 from [14]. scalar primordial perturbations through a cosmological phase transition in this way initiates the need to derive physically acceptable cosmological matching conditions for the Contaldi approximation. Figure 2 shows the analytic primordial power spectra generated from the Contaldi approximations for BD, RHM, HD, and RSET vacuum initial conditions.The analytic expression for C k and D k are written out in Appendix C by Eqs.(C1)-(C8).A low k cutoff exists around k kt ≈ 2 for BD, RSET, and HD and at k kt ≈ 1 for RHM.Below the low k cutoff the spectra experience power law distributions with BD and RSET ∝ k 2 , HD ∝ k 3 and RHM ∝ k 3 log(k) 2 .In addition, there exists an intermediate region of damped oscillations before the spectrum plateaus at high k values.The behavior at intermediate and high k is all very similar for the initial conditions in consideration with the exception of RSET whose oscillations die down much more slowly before plateauing [14].The scale invariance (zero tilt) of the power spectrum is the result of the inflation phase being derived from approximations for a pure de Sitter regime.Note that it is intermediate values of k, which correspond to scales in the observable range [3,4,14]. 
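The analytic spectra above rely on the instantaneous Contaldi background; for comparison, a full numerical background of the kind discussed next can be obtained by integrating Eqs. (3) and (5) directly. The sketch below assumes a Starobinsky-form potential V(ϕ) = Λ⁴(1 − exp(−√(2/3)·ϕ))² with an illustrative normalisation and kinetic-dominance initial data (ϕ̇² ≫ V), neither of which is taken from the article; it exhibits ε ≈ 3 and a growing comoving horizon during kinetic dominance, followed by ε ≪ 1 during inflation.

```python
# Minimal sketch (not the article's code): integrate the background equations (3) and (5),
# H^2 = (phidot^2/2 + V)/3 and phiddot + 3 H phidot + dV/dphi = 0 (units 8 pi G = c = hbar = 1),
# for a Starobinsky-form potential, starting from kinetic-dominance initial conditions
# (phidot^2 >> V).  The normalisation Lam4 and the initial data are assumptions, chosen only
# to exhibit a kinetic-dominance phase followed by inflation.
import numpy as np
from scipy.integrate import solve_ivp

Lam4 = 1.0e-10                       # Lambda^4, sets the inflationary energy scale (assumption)
def V(phi):  return Lam4 * (1.0 - np.exp(-np.sqrt(2.0 / 3.0) * phi))**2
def dV(phi):
    e = np.exp(-np.sqrt(2.0 / 3.0) * phi)
    return 2.0 * Lam4 * (1.0 - e) * np.sqrt(2.0 / 3.0) * e

def hubble(phi, phidot):
    return np.sqrt((0.5 * phidot**2 + V(phi)) / 3.0)

def rhs(t, y):
    # y = [N, phi, phidot] with N = ln(a)
    N, phi, phidot = y
    H = hubble(phi, phidot)
    return [H, phidot, -3.0 * H * phidot - dV(phi)]

# Kinetic dominance: phidot^2 >> V(phi) initially (illustrative values)
y0 = [0.0, 6.0, -2.0e-4]
sol = solve_ivp(rhs, (0.0, 5.0e6), y0, rtol=1e-9, atol=1e-12)

N, phi, phidot = sol.y
H = hubble(phi, phidot)
comoving_horizon = 1.0 / (np.exp(N) * H)   # (aH)^{-1}: grows in kinetic dominance, shrinks in inflation
eps = 0.5 * phidot**2 / H**2               # first slow-roll parameter

print(f"e-folds simulated: N = {N[-1]:.1f}")
print(f"eps at start = {eps[0]:.2f}   (kinetic dominance, eps ~ 3)")
print(f"eps at end   = {eps[-1]:.1e} (slow-roll inflation, eps << 1)")
print(f"comoving horizon peaks at N = {N[comoving_horizon.argmax()]:.2f} (onset of inflation)")
```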
A full numerical evolution of the background equations and perturbation evolution equation allows for a comparison to the spectra produced above which does not require matching of the scalar perturbations across a jump in approximated background.Figure 3 shows primordial power spectra with background evolution arising from a Starobinsky inflationary potential where initial conditions for ϕ, φ have resulted in a preinflationary phase of kinetic domination.BD, RHM, HD, and RSET initial conditions for the perturbation equation are set at the maximum of the comoving horizon.The distinct behavior of each power spectra produced from applying the various initial conditions is comparable to those in Fig. 2, with the exception of RHM which looks much more similar to BD.The three regions of behavior of the power spectrum obtained by the Contaldi approximation are present in all spectra.These are, a power law at low k, damped region of oscillation in an intermediate regime and a plateauing at high k. Figure 4 compares the primordial power spectrum produced from the Contaldi approximation and the numerical spectrum produced from the Starobinsky background for BD initial perturbation conditions.The spectra are very similar for small k and both experience a low k cutoff at k ≈ 1.The distinction emphasized in this plot is that the power spectrum produced from the Starobinsky inflationary potential with the given background initial conditions, ϕ, and φ, results in a small tilt to the power spectrum which corresponds to a period of inflation that is not pure de Sitter with H slowly varying. The similarity of the behavior of the analytic power spectra produced by the Contaldi approximation to the full numerical solutions would suggest that the joining of scalar perturbations across the instantaneous phase transition as done in the Contaldi model is the correct approach.We will nonetheless proceed with a precise analysis of acceptable general relativistic matching conditions to show this is not the case. IV. COSMOLOGICAL MATCHING CONDITIONS Despite the resemblance between the analytic and numerical primordial power spectra produced above from the Contaldi approximation and specification of an inflationary background, respectively, we wish to verify the use of physically consistent matching conditions for primordial perturbations which experience a jump in equation of state of the scalar field.We begin by introducing the junction conditions originally outlined by Israel [39], which look at boundary surfaces and thin shells in general relativity to gain clarity regarding the appropriate treatment of surfaces of discontinuity.The proposed constraints allow the union of spacetime described by distinct metrics to smoothly join forming valid solutions to the Einstein field equations.The Israel junction conditions may be summarized as continuity of the first and second fundamental forms across the hypersurface, assumed not to be null, in the absence of a surface stressenergy tensor.For a complete derivation of the Israel junction conditions, one may refer to Appendix A. A. Contaldi matching conditions In Contaldi et al. [1], the coefficients for the primordial mode functions during the de Sitter inflation era are obtained by requiring continuity of v and v ′ across the phase transition.We will refer to these as Contaldi matching conditions and in our notation are as follows. 
Defining a spacelike hypersurface for the transition Σ : τ = 0, continuity of the Mukhanov variable, v, across the hypersurface is For continuity of the first derivative of the Mukhanov variable in terms of conformal time, v ′ , across the hypersurface, we have It should also be noted that the scale factor and the Hubble parameter are matched at the transition in the model, giving the additional constraints, We will start with the formation of general cosmological matching conditions from the Israel junction conditions and then interpret the Contaldi matching conditions in light of such conclusions. B. Perturbation matching conditions from the Mukhanov-Sasaki equation We begin by deriving matching conditions for scalar perturbations with a method for effectively implementing the Israel junction conditions as has been done in much of the literature concerning propagating primordial perturbations through a jump in equation of state of the scalar field [20][21][22].Demanding the equation of motion for the comoving curvature perturbations does not contain singularities at the transition, the Israel junction conditions are assumed to be satisfied and cosmological matching conditions for the scalar perturbations can be obtained.The first requirement is continuity of the curvature perturbation itself, The MS equation in terms of conformal time derivatives and the comoving curvature perturbation is In Sturm-Liouville form, this becomes Integrating both sides of Eq. ( 37) around the transition at τ = 0, where δ is a small displacement, we have Recalling z as defined in Eq. ( 19), The following change of variables can be made using the definition of the first-slow roll parameter, This substitution applied to Eq. ( 38) eliminates ε, which is the single parameter that jumps across the transition, and we arrive at the second cosmological matching condition, The two linearly independent matching conditions derived from this integral formulation are then It should be emphasized that matching conditions (41a)-(41b) do not directly correspond to the first and second junction conditions respectively but are required to fulfil the conditions of continuity of the induced metric and extrinsic curvature, that which is not made clear in previous literature [20][21][22].In addition, a more careful investigation in the next section will show that assuming continuity of the comoving curvature perturbation given by condition (41a) amounts to making a choice for the definition of the hypersurface at the phase transition, which should not be held as trivial. C. Perturbation matching conditions defining a hypersurface at the transition We now implement the Israel junction conditions to obtain cosmological matching conditions for scalar primordial perturbations by explicitly defining a spacelike hypersurface at the transition and determining the functions which must be smooth in order for continuity of the induced metric and extrinsic curvature as demanded by the relevant constraints.We begin by making use of the work of Deruelle and Mukhanov [17], who sketch a procedure by which to derive cosmological matching conditions on a generic hypersurface . The general perturbed FRW metric in conformal time is Suppose the stress-energy tensor which governs the evolution of Eq. 
( 42) undergoes a finite discontinuity at a spacelike hypersurface Σ : φ(τ, x i ) = φ(τ ) + δφ(τ, x i ) = const.,where φ(τ, x i ) is an arbitrary four-scalar with a homogeneous part, φ and a small inhomogeneous part, δφ.Under the coordinate transformation, The perturbation δφ transforms as Going into the tilde coordinate system, τ = const.where δφ = 0, Immediately following from this is that the scale factor, a, and its first time derivative must be continuous across the hypersurface.From the first Israel junction condition, continuity of the metric defined by Eq. ( 42) implies the following two conditions in the tilde coordinate system: From the second Israel junction conditions, continuity of the extrinsic curvature reads where the conformal Hubble parameter is H = a ′ a .Moving back into the original coordinate system gives matching conditions on Σ : φ + δφ = const.in an arbitrary coordinate system, In the absence of anisotropic stress, the ij Einstein equations give Φ = Ψ.The following analysis will be done in the Newtonian/longitudinal gauge (E = B = 0) where the linearly independent conditions (49a)-(49c) for a hypersurface defined by an arbitrary scalar become Recovering cosmological matching conditions that can be applied to the joining of scalar primordial perturbations in the Contaldi approximation requires specification of the scalar, φ, defining the hypersurface at the transition between kinetic dominance and inflation.We now will consider the joining conditions emerging from two choices of φ. Hypersurface of constant energy density A hypersurface defining the cosmological phase transition in which the energy density, ρ, is constant expressed as Σ : ρ + δρ = const.has been motivated in previous literature [17,33,34].Equations (50a)-(50c) become Working in the Newtonian gauge, Eq. (51a) may be written as Equation (51b) can be rewritten so that ρ′ and δρ are in terms of a and H. Using the 00 linearized Einstein equations in the Newtonian gauge, Equation (51b) is then One can additionally obtain Eq. (56a) for cosmological perturbations in a universe dominated by a scalar field and the background Eq. (56b) from the Friedmann equations written in conformal time [9], Redefining the comoving curvature perturbation from Eq. ( 9) using relations (56a)-(56b) gives From condition (55) and R as defined above, Equation ( 51c) is then redundant and the linearly independent matching conditions for a hypersurface defined by constant ρ are Writing conditions (59a)-(59b) in terms of R and z, the following expression derived in Appendix B is required: Using Eq. (56b) gives Additionally in Appendix B, the following relation is derived: With k, H, and a matched across the transition, the cosmological joining conditions defined by (59a)-(59b) become Comparing with the matching conditions arrived at though the integral formulation in the previous section, condition (63a) differs from condition (41a) except in the long wavelength limit where the second term of Eq. (63a) may be ignored as R ′ is conserved.Condition (63a) is also equivalent to requiring the uniform-density curvature perturbation, ζ, be continuous across the transition, where ζ = −Ψ+ H ρ δρ [8,34].This is physically consistent as the hypersurface in consideration is one of uniform energy density. 
Hypersurface of constant scalar field An alternative choice of scalar defining the hypersurface at the transition is taking a surface of constant scalar field value [19-21, 31, 33].Expressing the hypersurface at the transition as Σ : φ + δϕ = const., the matching conditions take the form, Equation (64a) may again be written as Noting the definition of R from Eq. ( 9), condition (64b) can conveniently be taken in linear combination with constraint (65) as Additionally, recalling Eqs.(56a)-(56b) it is clear that Eq. ( 64c) is trivially satisfied.The resulting cosmological matching conditions for a hypersurface defined by constant ϕ are In terms of R and z, these are These conditions are the same as that constructed via the MS equation expressed by constraints (41a)-(41b). D. A hypersurface for Contaldi matching In further consideration of the Contaldi matching conditions, one may look to see if the conservation of v and v ′ across the phase transition can be assigned as cosmological matching conditions for some choice of hypersurface.This will be done by working backwards from conditions (32)- (33) in the Newtonian gauge to determine δφ φ′ for the generic cosmological matching conditions defined in Eqs.(50a)-(50c) with corresponding hypersurface Σ : φ + δφ = const.. Noting the definition of the Mukhanov variable, v, the Contaldi matching conditions in terms of R become Recalling Eq. (62) for Φ in terms of R ′ , Eq. (69b) may be written as Taking Eq. (69a) in linear combination with Eq. ( 70) with z ′ z , a, H, and k conserved across the hypersurface in the Contaldi approximation, the Contaldi matching conditions become The cosmological matching conditions for any choice of hypersurface requires Φ be matched across the transition as stated by condition (50a). [Φ] ± = 0 can only be trivially satisfied for the matching of v and v ′ as there does not exist a condition for z to independently be conserved across the transition.It is then that Contaldi matching does not correspond to a choice of hypersurface with cosmological matching conditions as it fails to satisfy the requirements of the Israel junction conditions.Importantly, this illustrates that the Contaldi matching conditions are not physically acceptable for reason that they do not account for the jump in z from one regime to another, which is the result of a jump in the first-slow roll parameter, ε, or equally the scalar field equation of state, w ϕ , from kinetic dominance to de Sitter inflation. E. On the choice of hypersurface This analysis has carefully considered the quantities which must not jump across a spacelike hypersurface defining a cosmological phase transition in order to ensure continuity of the first and second fundamental forms as demanded by the Israel junction conditions.By choosing different physical parameters to define the hypersurface of discontinuity the quantities that must be continuous across the transition differ on subhorizon scales.The matching conditions all reduce to Contaldi matching in the case that there is no jump in scalar field equation of state.This is clear in that z becomes a conserved quantity across the transition; however, we are concerned with a cosmological scenario, which includes a phase transition defined by a jump in the equation of state of the scalar field and so the choice of hypersurface for the transition, which emits different matching conditions becomes crucial to constructing an accurate primordial power spectrum. 
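Whichever hypersurface is adopted, the practical step is the same: the two matched quantities, evaluated on either side of the transition, fix the inflation-era coefficients C_k and D_k through a 2 × 2 linear system. The Python sketch below isolates that joining step. It is a minimal sketch under stated assumptions: the de Sitter basis functions are the standard ones (differentiated by a finite difference purely for brevity), the transition time and the kinetic-dominance-side values are illustrative placeholders standing in for the Contaldi-approximation mode functions, and the matched pair shown is Contaldi matching ([v] = [v′] = 0) only for concreteness.

```python
import numpy as np

def ds_basis(k, tau, h=1e-6):
    """de Sitter mode functions u_± = (1 ∓ i/(kτ)) e^{∓ikτ} / sqrt(2k) and their
    conformal-time derivatives (central difference, adequate for a sketch)."""
    def u(sign, t):
        return (1 - sign * 1j / (k * t)) * np.exp(-sign * 1j * k * t) / np.sqrt(2 * k)
    vals = np.array([u(+1, tau), u(-1, tau)])
    ders = np.array([(u(s, tau + h) - u(s, tau - h)) / (2 * h) for s in (+1, -1)])
    return vals, ders

def join_coefficients(matched_kd, matched_ds):
    """Solve matched_ds @ (C_k, D_k) = matched_kd for the inflation-era coefficients.

    matched_kd : the two matched quantities on the kinetic-dominance side.
    matched_ds : 2x2 matrix whose j-th column holds the same two quantities
                 evaluated on the j-th de Sitter basis solution.
    """
    return np.linalg.solve(matched_ds, matched_kd)

# Contaldi matching ([v] = [v'] = 0) at an illustrative transition time tau_t = -1/k.
# The kinetic-dominance-side numbers below are placeholders, not this paper's solutions.
k, tau_t = 1.0, -1.0
v_kd, dv_kd = 0.62 + 0.33j, 0.41 - 0.52j
vals, ders = ds_basis(k, tau_t)
C_k, D_k = join_coefficients(np.array([v_kd, dv_kd]), np.array([vals, ders]))
print(C_k, D_k)
```

Swapping in either set of cosmological matching conditions amounts to replacing the rows of the matrix, and the left-hand values, with the corresponding matched quantities for that hypersurface.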
Justification for the choice of scalar defining the hypersurface of discontinuity is present in previous literature investigating the propagation of scalar perturbations through phase transitions [19-21, 31, 33].It is conveyed in [17,33,34], that if the scalar field is an adiabatic perfect fluid, a jump in equation of state implies a jump in pressure and from the Friedmann equations the energy density remains constant.This lends itself to the choice that the hypersurface of discontinuity be Σ : ρ + δρ = const.From [19,21,31], it is stressed that if a transition in equation of state is triggered by a local physical quantity, the hypersurface must be a function of of ϕ, suggesting Σ : φ+δϕ = const.Although both choice of scalars defining the hypersurface look to be allowable, there remains no theoretical motivation for a canonical definition for the hypersurface of transition. V. PRIMORDIAL POWER SPECTRUM WITH COSMOLOGICAL MATCHING CONDITIONS We now consider the behavior of the primordial power spectra produced by applying the two sets of cosmological matching conditions derived in the previous section to the Contaldi approximation. Figure 5 shows the primordial power spectrum resulting from the Contaldi approximation using cosmological conditions (63a)-(63b) arrived at by applying Israel junction conditions to a hypersurface of constant energy density, ρ, defining the transition between kinetic dominance and inflation.The coefficients of integration C k and D k are written out in Appendix C in Eqs.(C9)-(C16).The behavior of the power spectrum is clear through looking at these expressions.The amplitude of the power spectrum is modified, where there exists a scaling of ε −2 I as compared to ε −1 I which is present in Contaldi matching.This alters the normalization of the power spectrum.Enhancement of oscillations which are no longer damped at high k correspond to v I and v ′ I being in phase.Moreover, the power spectrum is no longer scale independent as leading order in k has become √ k rather than 1 √ k as in Contaldi matching.This gives a k 2 dependence of the primordial power spectrum recalling Eq. (31). Figure 6 gives the primordial power spectrum generated by the Contaldi approximation using cosmological conditions (68a)-(68b) resulting from Israel junction conditions applied to a hypersurface of constant scalar field value, ϕ, defining the transition which coincide with those arrived at through considering singular terms in the MS equation.The coefficients of integration C k and D k are written out in Appendix C in Eqs.(C17)-(C24).As with the choice of a constant energy density hypersurface, the amplitude of the power spectrum changes due to a ε −2 I dependence.This agrees with conclusions from Carney et al. [19].Oscillations no longer plateau at high k due to v I and v ′ I being in phase.Importantly, scale invariance is retained as leading order in k remains 1 √ k .Although the behavior of the power spectra produced from applying the cosmological matching conditions to the Contaldi approximation falls directly from the joined coefficients, C k and D k , unphysical features in the spectra that result from applying such physically motivated conditions suggests a closer look should be taken at the impact of instantaneous transitions on the primordial power spectrum.We do this by presenting an alternative model to the Contaldi approximation which generates primordial power spectra from background evolution permitting arbitrary sharp cosmological phase transitions. VI. 
A SMOOTH SEMIANALYTIC MODEL FOR THE PRIMORDIAL POWER SPECTRUM The following section details a novel semianalytic method for computing the primordial power spectrum.This is done by smoothly joining the approximations to the comoving horizon for a phase of kinetic dominance and inflation.With analytic solutions to the background Universe one may express the MS equation analytically giving an expression which can be solved numerically.The motivations for this approach are threefold; the produced primordial power spectrum remains independent of a choice of the inflationary potential, it does not require matching conditions for the scalar perturbations, and finally, control is gained over the duration of the cosmological phase transition.The catalyst for this model is both in an alternative to the Contaldi approximation and the ability to produce a power spectrum from an arbitrarily sudden cosmological transition which will prove useful for comparing to spectra produced from applying cosmological matching conditions to an instantaneous transition in the Contaldi approximation. A. Constructing a solution with pure de Sitter inflation The MS equation in terms of analytic functions of H(N ) and z(N ) and derivatives with respect to number of e-folds, N = loga, is The expressions for H(N ), H ′ (N ) H(N ) , z(N ) and z ′ (N ) z(N ) may be obtained by setting an analytic equation for the comoving horizon.This is done via the following procedure: The comoving horizon during kinetic dominance scales as and that during de Sitter inflation is The smooth comoving horizon obtained by combining the scaling of the horizons in kinetic dominance and de Sitter inflation may be generalized to produce increasingly sharp transitions by introducing a parameter, α ∈ R >0 , giving a comoving horizon with the resulting functional form, In terms of e-folds, the comoving horizon is defined as H(N ) can be obtained by rearranging Differentiating with respect to e-folds gives The first slow-roll parameter defined in terms of derivatives with respect to e-folds is Therefore, the analytic expression for the first-slow roll parameter with the background specified by Eq. ( 74) is Noting that Eq. ( 78) is divergent, resulting from the stage of pure de Sitter inflation, a primordial power spectrum cannot be constructed using this background.We proceed by implementing a non-de Sitter inflationary stage obtained through modifying the functional form of the first slow roll-parameter. B. Constructing a solution with modified de Sitter inflation An equation for ε(N ) which does not tend to zero during the period of inflation can be determined by using the following reverse procedure: giving a comoving horizon which smoothly connects an epoch of kinetic dominance to a period of modified de Sitter inflation.This so-called modified de sitter inflation era is characterized by the slow-roll parameters ε I , |η(N )| ≪ 1, where η is the second slow roll parameter.These conditions capture a slowly decreasing Hubble parameter. We modify Eq. ( 78) to take the following form: such that a nonzero first slow-roll parameter is attained at the end of the finite duration phase transition, lim N →∞ ε(N ) = ε I ̸ = 0.The general form of the background equations can be solved starting from the following equation, which has derivatives in terms of e-folds, Solving for H(N ), the relevant equations become ) . (85) Equations ( 81), (83), and (85) comprise the analytic equations necessary to write Eq. 
(72) fully analytically; however, the functional forms of these expressions require the MS equation be solved numerically.Evolving perturbations until they are superhorizon, a numeric primordial power spectrum can be generated from the constructed background. The second slow roll-parameter in terms of derivatives with respect to e-folds is After the transition from kinetic dominance, this model assumes a constant first-slow roll parameter, ε I .That is ε(N ) = 0, which demands the second slow-roll parameter evaluated at observationally relevant k follow from the choice of ε I and α where lim N →∞ η(N ) = ε I .Allowing for a time varying first slow-roll parameter during inflation gives an end to inflation and results in a model for producing the primordial power spectrum where the first and second slow-roll parameters at the pivot scale can be set.A sketch of the procedure by which to obtain such a model is present in Appendix D; however, the analytic background presented in this section is sufficient for our considerations which concentrate on attaining spectra from sudden finite transition to compare to those of instantaneous cosmological transition produced in the Contaldi approximation. C. Cosmological phase transition duration In the semianalytic model we have presented, the duration of the cosmological phase transition can be approximated by looking at Eq. (80) giving the analytic equation for the first slow-roll parameter and determining the number of e-folds it takes to change from ε KD to ε I .We shall define the cosmological transition as when ε(N ) is further than 1% away from the associated value of the first slow-roll parameter during the kinetic dominance and inflation epochs.This choice captures the difference in the length of the transition with a change in ε I , which is not encompassed by simply requiring the slow-roll conditions are met.The end of the period of kinetic dominance corresponds to the start of the phase transition, and the beginning of the modified de Sitter inflation period is equally the end of the cosmological phase transition, The cosmological phase transition then occurs when ε I + 0.01ε I < ε(N ) < ε KD − 0.01ε KD .Figure 7 shows primordial power spectra produced from the model presented in this section with a change in the duration of the cosmological transition which is characterized by α in Eqs. ( 81)-(85) for the background.It is evident that the length of the phase transition has a large effect on the resulting spectra.An approximate duration of the cosmological transition for each background can be calculated by taking the difference between the end of the kinetic dominance period and the beginning of inflation defined by Eqs.(87) and (88).The spectra are all identical at sufficiently large k; however, distinct behavior is particularly noticeable at intermediate k, where for a sufficiently fast transition there exists an enhancement of oscillations at some scales.Importantly, this intermediate region of k corresponds to the observationally relevant scales.The change in behavior occurs for a greater range in scale, the shorter the transition duration.That is, oscillations begin to be enhanced at the same scale but are effected up to higher k when the cosmological phase transition occurs over a shorter duration.Additionally, although we only show the primordial power spectrum with BD conditions in Fig. 
7, the effect of the choice of initial conditions for the perturbation modes is more pronounced at observationally relevant scales the shorter the duration of the transition. Figure 8 compares the primordial power spectrum produced from an instantaneous transition in the Contaldi approximation, using cosmological matching conditions for a transition hypersurface defined by constant ϕ, with the power spectrum produced from a sufficiently fast finite-duration phase transition. For a specified range of k, the power spectra differ by less than 1%. The scales at which this occurs are those corresponding to the enhanced scales in the power spectrum of the sudden finite-duration transition. The conditions under which a power spectrum resulting from a finite-duration transition looks like one produced from an instantaneous transition have been considered by Carney et al. [19] and Aravind et al. [21] as follows. A primordial power spectrum produced from a cosmological scenario which transitions between inflation and a slow-roll violating phase over a timescale T can be approximated by a primordial power spectrum produced from an instantaneous transition for scales obeying the condition given in Eq. (89), where a_t is the scale factor at the maximum of the analytic comoving horizon defined by Eq. (83). In the limit of an instantaneous transition, all modes will be enhanced by the phase transition. Primordial power spectra produced with no enhancement in oscillations at intermediate k occur for transition lengths on the order of a single e-fold, depending on the choice of ε_I; thus, if the cosmological phase transition is not sudden, it does not imprint on the primordial power spectrum. This may explain why the power spectra produced by the Contaldi approximation using Contaldi matching in Fig. 2 look similar to the numerical power spectra computed with O(1) transitions specified by an inflationary potential in Fig. 3. Specifically, it has been concluded that Contaldi matching does not join the primordial scalar perturbations in a way that takes into account the instantaneous jump in the equation of state of the scalar field. The resulting primordial power spectrum, which does not encode the effects of an instantaneous phase transition, could reasonably be expected to exhibit similar behavior to a power spectrum produced by a transition which is too slow to enhance the power spectrum at any relevant scale, as quantified by Eq. (89). Cosmological phase transitions are thought to occur over a duration of several e-folds [33]. The difference between primordial power spectra produced from slow and sudden transitions suggests that producing a primordial power spectrum from a background which is described by an instantaneous transition should be done with caution if cosmological phase transitions are thought to happen over longer timescales. FIG. 8. The upper plot compares primordial power spectra produced from an analytic background with a sufficiently sudden finite-duration phase transition defined by Eq. (83) taking α = 60, and that from cosmological matching conditions with a hypersurface of constant ϕ applied to the instantaneous phase transition in the Contaldi approximation. ε_I is set to 0.0001 and k_t is set to 0.98945 in the analytic power spectrum to align with the background of the numerical solution. RSET initial conditions have been used. The lower plot shows the percent error, which remains small for scales obeying the condition expressed by (89).
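The 1% duration criterion used above is straightforward to evaluate numerically once a smooth ε(N) is specified. The sketch below uses a logistic interpolation between ε_KD = 3 and ε_I as an illustrative stand-in for Eq. (80); it is not the paper's functional form and is not expected to reproduce the exact durations quoted for Fig. 7, but it shows how the transition length in e-folds is read off for a given sharpness parameter α.

```python
import numpy as np

def transition_duration(eps, n_grid, eps_kd=3.0, eps_i=1e-4, tol=0.01):
    """e-folds during which eps(N) is more than 1% away from both the
    kinetic-dominance value and the inflationary value (the criterion of Sec. VI C)."""
    e = eps(n_grid)
    in_transition = (e < eps_kd * (1 - tol)) & (e > eps_i * (1 + tol))
    n_tr = n_grid[in_transition]
    return n_tr[-1] - n_tr[0] if n_tr.size else 0.0

def make_eps(alpha, eps_i=1e-4, eps_kd=3.0):
    # Illustrative smooth interpolation with the right limits; a placeholder for Eq. (80).
    return lambda n: eps_i + (eps_kd - eps_i) / (1.0 + np.exp(alpha * n))

n = np.linspace(-20.0, 20.0, 200001)
for alpha in (2, 10, 60):
    print(alpha, transition_duration(make_eps(alpha), n))
```

As expected, sharper α gives a shorter transition, and the duration returned by this criterion shrinks roughly in inverse proportion to α for this assumed profile.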
We then suggest that the procedure for producing the primordial power spectrum presented here may be used as an alternative model to the Contaldi approximation which allows for greater control over both the scale dependence of the power spectrum through specification of ε I and the duration of the cosmological phase transition controlled by α.Although we have introduced this model as a potential independent method for computing the primordial power spectrum, the following section considers the implicit inflationary potential of the background model. D. Scalar field potential reconstruction The Hamilton-Jacobi formalism treats the Hubble parameter as the fundamental quantity changing with time. This approach allows for reconstruction of a scalar field potential, V (ϕ), for a specified H(ϕ).In terms of derivatives with respect to number of e-folds, this is Specifying H(N ) for a cosmological evolution smoothly joining an era of kinetic dominance with inflation as denoted by Eq. ( 82), gives the reconstructed potential which in turn admits the equation H(N ) as an exact inflationary solution.A function for ϕ(N ) can be used in order to write the potential as a function of the scalar field.Changing Eq. ( 18) into derivatives with respect to number of e-folds gives the following equation which may be solved to obtain ϕ(N ), Plotting the potential expressed by Eq. ( 90) parametrically as a function of the solution to Eq. ( 91) gives the behavior of the associated V (ϕ) for a background specified by Eq. (83). Figure 9 shows the reconstructed potential for this model with ε I = 0.0001 and α = 1.Taking note of the region of the potential in which inflation occurs, the potential produced from these parameters is a convex (V ,ϕϕ > 0), small field potential where the conditions for inflation are met in that the magnitude of the first and second derivatives of the potential with respect to ϕ, V ,ϕ and V ,ϕϕ , are small [4,8].The convex form of this potential is characteristic for the associated background for ε I ≪ 1 and α < 3, which corresponds to transitions of at least one e-fold in duration.The form of this potential is not ruled out observationally as in the case of usual convex large field inflationary potentials through high values of the tensor-to-scalar ratio, r [4,8]. Figure 10 shows the reconstructed potential for this model with ε I = 0.0001 and α = 10.The corresponding primordial power spectrum is seen in Fig. 7.This example again results in a small field inflationary potential; however, the form of the potential is close to that of a step function which is not supported by current observation.Moreover, it is unphysical to require the scalar field be pushed up the potential towards inflation.Potentials of this form are generic for ε I ≪ 1 and α > 3, where the steplike potential is required to achieve a transition on the timescale of fractions of an e-fold.Note that the Contaldi approximation has an implicit inflationary potential which is a Heaviside step function. Through the Hamilton-Jacobi formalism, one can see that although the semianalytic model we have presented in this section does not demand a choice for the inflationary potential to emit the desired background evolution, the generic construction has associated an implicit potential.This allows for phenomenological analyses in which an observationally constrained primordial power spectrum admits an acceptable functional form for the inflationary potential. VII. 
CONCLUSION We have considered analytic and numeric procedures for generating the power spectrum of primordial scalar perturbations in the case of a background Universe which undergoes a jump in the equation of state of the scalar field. Although there exists interest in a closed universe [40], we consider the simplest case of a flat universe for the purpose of isolating our conclusions, wherein further analysis can be used to extend the results to the case of a curved universe. The Contaldi approximation provides an inflationary-potential-independent method for producing the primordial power spectrum by implementing an instantaneous transition between a phase of kinetic dominance and de Sitter inflation, where approximate primordial mode equations exist. The aim of this analysis is the application of Israel junction conditions to determine the physically acceptable way in which to propagate primordial scalar perturbations across cosmological phase transitions, allowing for clarification of previous work. The resulting joining conditions are seen to require specification of the scalar defining the spacelike hypersurface at the transition. Cosmological matching conditions corresponding to hypersurfaces of constant scalar field value and of constant energy density were theoretically motivated. Both the conditions derived from the MS equation and numerical studies suggest a hypersurface of constant scalar field may be the appropriate choice; however, future work should look to clarify a canonical hypersurface for the transition. Furthermore, the joining of v and v′ as originally prescribed in the Contaldi approximation has been shown to be insufficient to ensure continuity of the first and second fundamental forms describing regions of spacetime separated by a jump in the equation of state of the scalar field. A novel semianalytic approach for producing the primordial power spectrum, which smoothly transitions from a phase of kinetic dominance to inflation over a finite duration, was subsequently introduced. The difference between primordial power spectra produced from slow and sudden transitions suggests that models describing an instantaneous transition may not adequately characterize primordial power spectra resulting from transitions occurring over several e-folds, as is thought to arise in nature. This is supported by the unphysical spectra produced from cosmological matching conditions applied to the Contaldi approximation, and by the steplike form of the implicit inflationary potential which is demanded to produce sufficiently sudden finite cosmological phase transitions. That is, while the alternative model for generating the primordial power spectrum presented in this work does not require a choice of the form of the inflationary potential, it has an associated potential which can be reconstructed. Further work must be done to constrain primordial power spectra produced from this model and the extended model in Appendix D for low α, corresponding to small-field implicit inflationary potentials which may be bounded observationally via (n_s, r) [3,4]. VIII. ACKNOWLEDGMENTS We would like to thank Enrico Pajer and Carlo Contaldi for valuable feedback on this work, as well as Thomas Gessey-Jones for many useful conversations. FIG. 4. Comparison of the analytic primordial power spectrum from the Contaldi approximation and the numerical primordial power spectrum from a Starobinsky potential for BD initial perturbation conditions. k_t is set to 1 in the Contaldi approximation. FIG. 5. Analytic primordial power spectra produced from the Contaldi approximation using cosmological matching conditions for a hypersurface defined by Σ : ρ + δρ = const. BD, RHM, HD, and RSET initial conditions are shown. ε_I has been set to 0.0127 for comparison with Contaldi matching. FIG. 6. Analytic primordial power spectra produced from the Contaldi approximation using cosmological matching conditions for a hypersurface defined by Σ : ϕ + δϕ = const. BD, RHM, HD, and RSET initial conditions are shown. ε_I has been set to 0.0127 for comparison with Contaldi matching. FIG. 7. Numerical primordial power spectra for increasingly sharp cosmological phase transitions generated by the semianalytic model presented in this section. ε_I = 0.0001 and α = 2, 10, 60, with BD initial conditions for the perturbation modes set at the maximum of the analytic comoving horizon. The durations of the transitions are 3.25, 0.65, and 0.12 e-folds, respectively. FIG. 9. Characteristic convex inflationary potential from parametric reconstruction for a smooth comoving horizon defined by Eq. (83) with a slow transition, corresponding to ε_I ≪ 1 and α < 3. This example has α = 1 and ε_I = 0.0001. The regions of the potential corresponding to kinetic dominance and inflation are specified using Eqs. (87) and (88). FIG. 10. Characteristic steplike inflationary potential from parametric reconstruction for a smooth comoving horizon defined by Eq. (83) with a sudden transition, corresponding to ε_I ≪ 1 and α > 3. This example has α = 10 and ε_I = 0.0001. The regions of the potential corresponding to kinetic dominance and inflation are specified using Eqs. (87) and (88).
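As a companion to Figs. 9 and 10, the parametric reconstruction of Sec. VI D can be sketched with the standard Hamilton-Jacobi relations in reduced Planck units, V = (3 − ε)H²M_p² and (dϕ/dN)² = 2εM_p². The ε(N) below is again an illustrative smooth interpolation standing in for the background of Eqs. (80)-(83), and H is fixed only up to an overall normalisation, so the sketch conveys the shape of V(ϕ), not its scale, and is not the paper's exact construction.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

Mp = 1.0                         # reduced Planck mass
eps_i, eps_kd, alpha = 1e-4, 3.0, 1.0

N = np.linspace(-10.0, 30.0, 4000)
eps = eps_i + (eps_kd - eps_i) / (1.0 + np.exp(alpha * N))   # placeholder for Eq. (80)

# d ln H / dN = -eps  =>  H(N) up to an overall normalisation
lnH = -cumulative_trapezoid(eps, N, initial=0.0)
H = np.exp(lnH - lnH[-1])        # normalise H to 1 at the end of the evolution

# d phi / dN = ± sqrt(2 eps) Mp  (sign choice is conventional)
phi = cumulative_trapezoid(np.sqrt(2.0 * eps) * Mp, N, initial=0.0)

# Hamilton-Jacobi: V = (3 - eps) H^2 Mp^2, inspected parametrically against phi
V = (3.0 - eps) * H**2 * Mp**2
# e.g. plt.plot(phi, V) can then be compared qualitatively with the shapes
# discussed in Sec. VI D as alpha and eps_i are varied.
```

Varying α here plays the same role as in the text: mild values give a smoothly curved potential over the transition, while large values force the reconstructed potential toward an increasingly abrupt profile.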
11,739
2023-09-27T00:00:00.000
[ "Physics" ]
Constructing cubic curves with involutions In 1888, Heinrich Schroeter provided a ruler construction for points on cubic curves based on line involutions. Using Chasles’ Theorem and the terminology of elliptic curves, we give a simple proof of Schroeter’s construction. In addition, we show how to construct tangents and additional points on the curve using another ruler construction which is also based on line involutions. As an application of Schroeter’s construction we provide a new parametrisation of elliptic curves with torsion group Z/2Z×Z/8Z\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\mathbb {Z}/2\mathbb {Z}\times \mathbb {Z}/8\mathbb {Z}$$\end{document} and give some configurations with all their points on a cubic curve. Introduction Heinrich Schroeter gave in [2] a surprisingly simple ruler construction to generate points on a cubic curve. Since he did not provide a formal proof for the construction, we would like to present this here. Schroeter's construction can be interpreted as an iterated construction of line involutions. Thus, we first define the notion of a line involution with cross-ratios, and then we show how one can construct line involutions with ruler only. For the sake of simplicity, we introduce the following terminology: For two distinct points P and Q in the plane, P Q denotes the line through P and Q, P Q denotes the distance between P and Q, and for two distinct lines l 1 and l 2 , l 1 ∧ l 2 denotes the intersection point of l 1 and l 2 . We tacitly assume that the plane is the real projective plane, and therefore, l 1 ∧ l 2 is defined for any distinct lines l 1 and l 2 . For the cross-ratio of four lines a, b, x, y of a pencil we use the notation cr(a, b, x, y). Line involution. Given a pencil. A line involution Λ is a mapping which maps each line l of the pencil to a so-called conjugate linel of the pencil, such that the following conditions are satisfied: • Λ is an involution, i.e., Λ • Λ is the identity, in particular we have Λ(l) = l. Notice that any line involution is defined by two different pairs of conjugate lines. We shall use the following construction for line involutions (for the correctness of the construction see Chasles [1, Note X, §34, (28), p. 317]): Given two pairs a,ā and b,b of conjugate lines which meet in P . Suppose, we want to find the conjugate lined of a line d from the same pencil. Choose a point D = P on d and two lines through D which meet a and b in the points A and B, andā andb in the pointsĀ andB, respectively (see Fig. 1). LetD = AB ∧ĀB. Then the conjugate lined of d with respect to the line involution defined by a,ā, b,b is the line PD. • Given three di↵erent pairs of conjugate lines a,ā, b,b, c,c, and let l 1 , l 2 , l 3 , l 4 be four lines among a,ā, b,b, c,c from three di↵erent pairs of conjugate lines, then cr (l 1 , l 2 , l 3 , l 4 ) = cr l 1 ,l 2 ,l 3 ,l 4 . Vice-versa, let Notice that any line involution is defined by two di↵erent pairs of conjugate lines. We shall use the following construction for line involutions (for the correctness of the construction see Chasles [1, Note X, §34, (28), p. 317]): Given two pairs a,ā and b,b of conjugate lines which meet in P . Suppose, we want to find the conjugate lined of a line d from the same pencil. 
Choose a point D 6 = P on d and two lines through D which meet a and b in the points A and B, andā andb in the pointsĀ andB, respectively (see Fig. 1). LetD = AB^ĀB. Then the conjugate lined of d with respect to the line involution defined by a,ā, b,b is the line PD. Notice that this construction can be carried out using only a ruler. Schroeter's Construction for Cubic Curves Based on line involutions, Schroeter provided in [2] a simple ruler construction for cubic curves. Notice that this construction can be carried out using only a ruler. Schroeter's Construction for Cubic Curves Based on line involutions, Schroeter provided in [2] a simple ruler construction for cubic curves. Schroeter's Construction. Let A,Ā, B,B, C,C be six pairwise distinct points in a plane such that no four points are collinear and the three pairs of points A,Ā, B,B, C,C are not the pairs of opposite vertices of the same complete quadrilateral. Now, for any two pairs of points P,P and Q,Q, we define a new pair S,S of points by stipulating Then all the points constructed in this way lie on a cubic curve. At first glance, it is somewhat surprising that all the points we construct lie on the same cubic curve, which is defined by three pairs of points (recall that a cubic curve is defined by 9 points). The reason is that we have three pairs of points and not just 6 points. In fact, if we start with the same 6 points but pairing them differently, we obtain a different cubic curve. It is also not clear whether the construction generates infinitely many points of the curve. Schroeter claims in [2] that this is the case, but, as we will see in the next section, it may happen that the construction gives only a finite number of points. A Proof of Schroeter's Construction It is very likely that Schroeter discovered his construction based on his earlier work on cubics (see [3,4]). However, he did not give a rigorous proof of his construction, and the fact that he claimed wrongly that the construction generates always infinitely many points of the curve might indicate that he overlooked something. Below we give a simple proof of Schroeter's construction using Chasles' Theorem (see Chasles [1, Chapitre IV, §8, p. 150]) and the terminology of elliptic curves. Theorem 1 (Chasles' Theorem). If a hexagon ABCĀBC is inscribed in a cubic curve Γ and the points AB ∧ĀB and BC ∧BC are on Γ, then also CĀ ∧CA is on Γ (see Fig. 2). With Chasles' Theorem we can prove the following q.e.d. As an immediate consequence of Proposition 2 we get Corollary 3. The unique cubic curve passing through the 9 points A,Ā, B,B, C,C, D, E, F contains also the 3 pointsD,Ē,F . In order to show that all the points constructed by Schroeter's construction lie on the same cubic curve, we interpret the construction in the setting of elliptic curves. For this, let be a cubic curve and let O be a point of inflection of -recall that every cubic curve in the real projective plane has at least one point of inflection. For two points P and Q on let P # Q be the third intersection point (counting multiplicities) of P Q with , where for P = Q, P Q is the tangent on with contact point P . Furthermore, for each point P on , let P := O # P . As usual, we define the binary operation + on the points of by stipulating P + Q := (P # Q) . q.e.d. As an immediate consequence of Proposition 2 we get Corollary 3. The unique cubic curve Γ passing through the 9 points A,Ā, B,B, C,C, D, E, F contains also the 3 pointsD,Ē,F . 
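The elementary step of Schroeter's construction, passing from two pairs P, P̄ and Q, Q̄ to a new pair, is conveniently expressed in homogeneous coordinates, where joins and meets are both cross products. The sketch below assumes the rule S = PQ ∧ P̄Q̄ and S̄ = PQ̄ ∧ P̄Q, consistent with how D, E, F are formed in Theorem 11 below; the sample points are arbitrary and serve only to show the mechanics, not points of any particular cubic.

```python
import numpy as np

def join(P, Q):
    """Line through two points, both given in homogeneous coordinates [x : y : z]."""
    return np.cross(P, Q)

def meet(l, m):
    """Intersection point of two lines given in homogeneous line coordinates."""
    return np.cross(l, m)

def schroeter_step(P, Pb, Q, Qb):
    """From the pairs (P, Pb) and (Q, Qb) produce the new pair
    S = PQ ^ PbQb and Sb = PQb ^ PbQ."""
    S = meet(join(P, Q), join(Pb, Qb))
    Sb = meet(join(P, Qb), join(Pb, Q))
    return S, Sb

# Arbitrary affine sample points written as [x, y, 1].
A, Ab = np.array([0.0, 0.0, 1.0]), np.array([5.0, 1.0, 1.0])
B, Bb = np.array([1.0, 3.0, 1.0]), np.array([4.0, 4.0, 1.0])
D, Db = schroeter_step(A, Ab, B, Bb)
print(D / D[2], Db / Db[2])      # normalise back to affine coordinates
```

Since every operation is a cross product, the step degrades gracefully: if the two joined lines happen to be parallel, the meet simply returns a point at infinity (last coordinate zero), which is the correct projective answer.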
In order to show that all the points constructed by Schroeter's construction lie on the same cubic curve, we interpret the construction in the setting of elliptic curves. For this, let Γ be a cubic curve and let O be a point of inflection of Γ -recall that every cubic curve in the real projective plane has at least one point of inflection. For two points P and Q on Γ let P # Q be the third intersection point (counting multiplicities) of P Q with Γ, where for P = Q, P Q is the tangent on Γ with contact point P . Furthermore, for each point P on Γ, let −P := O # P . As usual, we define the binary operation + on the points of Γ by stipulating P + Q := − (P # Q) . (b) Vice versa, if P := P #P =P #P ∈ Γ for two points P,P ∈ Γ, then we have for all Q ∈ Γ the following: Notice that P + P = O and, since O is a point of inflection, we have O = O. It is well known that the operation + is associative and the structure ( , O, +) is an abelian group with neutral element O, which is called an elliptic curve. and Adding (1) and (2)and subtracting Q+Q yields P +P =P +P and hence P #P =P #P . Exchanging left and right hand in (1) and adding (2) gives, upon subtracting P +P , Q + Q =Q +Q and hence Q # Q =Q #Q. S # S =S #S follows by exchanging the pair Q,Q by the pair S,S. Proof. (a) By assumption we have P # Q =P #Q = S and P #Q =P # Q =S. With a point O ∈ Γ of inflection, we get and Adding (1) and (2)and subtracting Q+Q yields P +P =P +P and hence P #P =P #P . Exchanging left and right hand in (1) and adding (2) gives, upon subtracting P +P , Q + Q =Q +Q and hence Q # Q =Q #Q. S # S =S #S follows by exchanging the pair Q,Q by the pair S,S. (b) For the second part, we proceed as follows: By assumption, we have P #P =P #P and therefore P +P = O#(P #P ) = O#(P #P ) =P +P . We add S and subtract P +P to get S + P −P = S +P − P or (O # (S # P )) # (O #P ) = (O # (S #P )) # (O # P ). It follows that (S # P ) #P = (S #P ) # P , i.e., Q #P =Q # P =S. Finally, Q # Q =Q #Q = Q follows from the first part. q.e.d. For the sake of simplicity we write 2 * P for P + P . Let A,Ā be a pair of points with A # A =Ā #Ā on a cubic curve Γ, and with respect to a given point of inflection O, let T A :=Ā − A. Then A + T A =Ā, which implies that Now, by assumption we have 2 * A = 2 * Ā and therefore we get that 2 * T A = O. In other words, T A is a point of order 2. Now we are ready to prove the following It follows that all points we obtain by Schroeter's construction belong to the same curve Γ. q.e.d. The above proof shows that the Schroeter points have the following additional properties • If P,P is a pair of Schroeter points on Γ, then the tangents in P andP meet on Γ. • With respect to a chosen point O of inflection, we have thatP − P = T is a point of order 2 on Γ which is the same for all Schroeter pairs P,P . The following result shows that we can construct the tangent to Γ in each Schroeter point by a line involution (hence with ruler alone). Proposition 6. Let Γ be the cubic from Proposition 2. Assume that S,S, P,P , Q,Q are three of the pairs A,Ā, B,B, C,C, D,D, E,Ē, F,F or of the pairs which are constructed by Schroeter's construction, such that SP , SQ, SP , SQ are four distinct lines. Let s = SS ands its conjugate line with respect to the involution given by the lines SP , SQ, SP , SQ. Thens is tangent to Γ in S (see Fig. 4). Before we can prove Proposition 6, we have to recall a few facts about cubic curves. 
It is well-known that every cubic curve can be transformed into Weierstrass Normal Recall that A # B := (A + B). In particular, if C = A # A, then the line through C and A is tangent to a,b with contact point A. The following result gives a connection between conjugate points and tangents. q.e.d. In homogeneous coordinates, the curve y 2 = x 3 + ax 2 + bx becomes : Y 2 Z = X 3 + aX 2 Z + bXZ 2 . Recall that A # B := −(A + B). In particular, if C = A # A, then the line through C and A is tangent to Γ a,b with contact point A. The following result gives a connection between conjugate points and tangents. q.e.d. The next result gives a connection between line involutions and conjugate points. Lemma 8. Let A = (x 0 , y 0 ) be an arbitrary but fixed point on Γ α,β,γ . For every point P on Γ α,β,γ which is different from A andĀ, let g := AP andḡ := AP . Then the mapping I A : g →ḡ is a line involution. Proof. It is enough to show that there exists a point ζ 0 (called the center of the involution) on the line h : x = 0, such that the product of the distances between ζ 0 and the intersections of g andḡ with h is constant. SinceT = T + T = O, with respect to T we have g : y = y 0 andḡ : x = x 0 , which implies that ζ 0 = (0, y 0 ). Now, let P = (x 1 , y 1 ) be a point on Γ α,β,γ which is different from A,Ā, T, O, and let g := AP andḡ := AP . SinceP = α γx 1 , −y 1 , the slopes λ P and λP of g andḡ, respectively, are Thus, the distances s P and sP between ζ 0 and the intersections of g andḡ with h, respectively, are and using the fact that for i ∈ {0, 1}, y 2 i = α x i + β + γx i , we obtain which is independent of the particular point P = (x 1 , y 1 ). q.e.d. Since line involutions are invariant under projective transformations, as a consequence of Lemma 8 we obtain the following Fact 9. Let be the cubic from Proposition 2 with two pairs of Schroeter points P,P = T + P , Q,Q = T + Q, and let R be a point on such that RP, RP , RQ, RQ are four di↵erent lines. Let S be a further point on andS = T + S. Then the lines s = RS ands = RS are conjugate lines with respect to the line involution given by the lines RP, RP , RQ, RQ (see Fig. 6). Now we are ready to prove Proposition 6. Proof of Proposition 6. First notice that S andS are distinct, since otherwise,S = T + S = S, which implies that T = S S = O. Assume that the line s intersects in a point U which is di↵erent from S andS. ThenŪ := T + U belongs tos. If the lines intersects in a point V which is di↵erent fromŪ , then, with respect to the involution given by the linesŪS,ŪP ,ŪS,ŪP , the pointV belongs to s. Hence,V =S, which shows thats is tangent to in S. which is independent of the particular point P = (x 1 , y 1 ). q.e.d. Since line involutions are invariant under projective transformations, as a consequence of Lemma 8 we obtain the following Fact 9. Let Γ be the cubic from Proposition 2 with two pairs of Schroeter points P,P = T + P , Q,Q = T + Q, and let R be a point on Γ such that RP, RP , RQ, RQ are four different lines. Let S be a further point on Γ andS = T + S. Then the lines s = RS ands = RS are conjugate lines with respect to the line involution given by the lines RP, RP , RQ, RQ (see Fig. 6). Now we are ready to prove Proposition 6. Proof of Proposition 6. First notice that S andS are distinct, since otherwise,S = T + S = S, which implies that T = S − S = O. Assume that the line s intersects Γ in a point U which is different from S andS. ThenŪ := T + U belongs tos. 
If the lines intersects Γ in a point V which is different fromŪ , then, with respect to the involution given by the linesŪ S,Ū P ,ŪS,ŪP , the pointV belongs to s. Hence,V =S, which shows thats is tangent to Γ in S. Now, assume that the line s intersects just in S andS. Then, the line s is tangent to either in S or inS. We just consider the former case, the latter case is handled similarly. Let P n (for n 2 N) be a sequence of points on which are di↵erent from S and which converges to S, i.e., lim n!1 P n = S. Since for each n 2 N we havē P n = T + P n (whereP n := T + P ), by continuity of addition we have lim n!1Pn =S. For each n 2 N let t n := P n S. Then, for each n 2 N,t n =P n S. Since s is tangent to in S, by continuity, on the one hand we have lim n!1 t n = s, and on the other hand we have lim n!1tn = s, which implies thats = s and shows thats is tangent to in S. q.e.d. As a corollary of Proposition 6 and Lemma 4(a) we obtain the following: Corollary 10. Let be the cubic from Proposition 2. Then we have: (a) In each Schroeter point it is possible to construct the tangent by a line involution, i.e., with a ruler construction. (b) In addition to the Schroeter points on one can construct for each Schroeter pair P,P the point P # P =P #P 2 by ruler alone: These are the intersection points of the tangents in P and inP . Now, assume that the line s intersects Γ just in S andS. Then, the line s is tangent to Γ either in S or inS. We just consider the former case, the latter case is handled similarly. Let P n (for n ∈ N) be a sequence of points on Γ which are different from S and which converges to S, i.e., lim n→∞ P n = S. Since for each n ∈ N we havē P n = T + P n (whereP n := T + P ), by continuity of addition we have lim n→∞Pn =S. For each n ∈ N let t n := P n S. Then, for each n ∈ N,t n =P n S. Since s is tangent to Γ in S, by continuity, on the one hand we have lim n→∞ t n = s, and on the other hand we have lim n→∞tn = s, which implies thats = s and shows thats is tangent to Γ in S. q.e.d. As a corollary of Proposition 6 and Lemma 4(a) we obtain the following: Corollary 10. Let Γ be the cubic from Proposition 2. Then we have: (a) In each Schroeter point it is possible to construct the tangent by a line involution, i.e., with a ruler construction. (b) In addition to the Schroeter points on Γ one can construct for each Schroeter pair P,P the point P # P =P #P ∈ Γ by ruler alone: These are the intersection points of the tangents in P and inP . A priori it might be possible that Schroeter's construction does not yield all cubic curves. However, the next theorem says that in fact all cubic curves carry Schroeter's construction. Theorem 11. Let Γ be a non-singular cubic curve. Let A, B, C be three different arbitrary points on Γ. Then, there are pointsĀ,B,C on Γ such that D = AB ∧ĀB, E = BC ∧BC, F = CA ∧CĀ are points on Γ and so do all the points given by Schroeter's construction. Proof. ChooseĀ such that A # A =Ā #Ā andB :=Ā # (A # B). In particular, we have A # B =Ā #B, and, by Lemma 4, A #B =Ā # B and B # B =B #B. LetC :=B # (B # C). In particular, we have B # C =B #C, and, by Lemma 4, B #C =B # C and C # C =C #C. It follows from Chasles' Theorem 1 that A #C =Ā # C. From the above, we obtain by applying Proposition 2 with C andC exchanged, that A # C =Ā #C. Hence all points constructed from these points by Schroeter's construction lie on Γ. q.e.d. Remarks. 
Let Γ_0 be the cubic curve passing through A, Ā, B, B̄, C, C̄, D, E, F, let O be a point of inflection of Γ_0, and let E_0 = (Γ_0, O, +) be the corresponding elliptic curve. (1) If E_0 contains a cyclic group C_n of order n, then there is a point on Γ_0 of order n (with respect to E_0). This implies that if we choose the six starting points in a finite subgroup of E_0, then Schroeter's construction "closes" after finitely many steps and we end up with just finitely many points. However, if our 6 starting points are all rational and we obtain more than 16 points with Schroeter's construction, then, by Mazur's Theorem, we obtain infinitely many rational points on the cubic curve Γ_0. (2) If the elliptic curve E_0 has three points of order 2, then one of them, say T, has the property that for any point P on Γ_0 we have P̄ = P + T. In particular, we have T̄ = T + T = O. Furthermore, for the other two points of order 2, say S_1 and S_2, we have S_1 = S_2 + T and S_2 = S_1 + T, i.e., S̄_1 = S_2 and S̄_2 = S_1. (3) If we choose another point of inflection O′ on the cubic curve Γ_0, we obtain a different elliptic curve E′_0. In particular, we obtain different inverses of the constructed points, even though the constructed points are exactly the same (see Fig. 8).
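Remarks (1) and (2) are easy to experiment with numerically. The sketch below implements the chord-and-tangent operation P # Q and the induced addition P + Q = −(P # Q) on a curve in short Weierstrass form y² = x³ + ax + b, taking O to be the point at infinity (an inflection point); this differs from the Γ_{a,b} normal form used earlier only by a change of coordinates, and is meant as an illustration rather than as the paper's own setup.

```python
from fractions import Fraction as F

# Illustrative curve y^2 = x^3 + a*x + b with a rational point P = (2, 3).
a, b = F(0), F(1)
O = None                      # the point at infinity, taken as the base point O

def neg(P):
    """-P with respect to O: reflection in the x-axis."""
    return O if P is O else (P[0], -P[1])

def third(P, Q):
    """P # Q: third intersection of the line PQ (tangent if P == Q) with the curve."""
    if P is O:
        return neg(Q)         # the line through O and Q is vertical; third point is -Q
    if Q is O:
        return neg(P)
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return O              # vertical chord, or tangent at a 2-torsion point
    lam = (3 * x1 * x1 + a) / (2 * y1) if P == Q else (y2 - y1) / (x2 - x1)
    x3 = lam * lam - x1 - x2
    return (x3, y1 + lam * (x3 - x1))

def add(P, Q):
    """Group law with neutral element O: P + Q = -(P # Q)."""
    return neg(third(P, Q))

P = (F(2), F(3))              # 3^2 = 9 = 2^3 + 1, so P lies on the curve
print(third(P, P))            # P # P = (0, -1), the second tangent intersection
print(add(P, P))              # 2*P   = (0, 1)
```

Iterating `add` from a chosen starting point makes the "closing" behaviour of remark (1) visible: on a curve whose rational points form a finite group the multiples of P cycle, whereas on a curve of positive rank they do not repeat.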
5,689.6
2021-06-12T00:00:00.000
[ "Mathematics" ]
Morphological Evolution of Passive Soil Arch in Front of Horizontal Piles in Three Dimensions : The anti-slide pile is a primary method of landslide control. The effect of the passive soil arch in front of the embedded section of piles has a significant effect on the anti-slide pile’s bearing capacity. The upgraded model test scheme was used to conduct model tests with a pile spacing four times the width of the pile and a geometric scale ratio of 1:15. The anti-slide pile stress, pile bending strain, and soil stress in front of the pile were all studied in relation to the loading amount. In addition to the model test, the numerical simulation method was utilized to investigate the three-dimensional morphological change of the passive soil arch in front of the pile. The results indicated that: clearly, the side piles can eliminate the border effect. The distribution of pile bending strain along the pile after loading is referred to as a parabola. Bending failure occurred at a depth of 40 mm, approximately 0.9 m from the pile top. Under the condition that the pile spacing is four times the pile width, a passive soil arch occurs in front of the anti-slide pile’s fixed part, and its development can be split into four stages: formation, development, completion, and destruction. The passive soil arches in front of the piles are generated and destroyed gradually along the buried depth, and the three-dimensional surface of the space drops gradually along the buried depth with the loading amount and advances toward the loading direction until the anti-slide pile system fails. The research findings and experiences can serve as a basis for future research. Introduction The "soil arch effect" is a regular occurrence in geotechnical engineering. The creation of soil arches is mostly owing to the "wedge tightening" impact of uneven displacement on soil particles, resulting in the soil arching effect [1][2][3]. Following the turn of the twenty-first century, the application of the soil arch effect to the thrust research of landslides behind anti-slide piles has garnered widespread attention. Theoretical analysis, model testing, and numerical simulation have all been used to conduct fruitful research. Researchers investigated the creation, development, and collapse mechanisms of soil arches behind and between piles. The relationship between the soil arch effect between piles and the anti-slide pile design parameters is discussed. Numerous significant accomplishments have been made. Wang created a model for calculating pile spacing based on the static equilibrium condition of soil arches between piles [4]. Following academics have produced suitable adjustments to the aforementioned models from a variety of angles, including slope inclination angle [5], constitutive model [6,7], and balance condition of soil arch stress [8][9][10]. Generally, scholars assumed that the soil arch's geometry is parabolic (reasonable arch axis). The thrust force generated by a landslide behind a pile is seen as a weight that is spread uniformly. The link between thrust force and pile spacing (clear distance) is determined using the static equilibrium condition at the arch foot and the soil strength criterion at the arch foot and vault, and the appropriate pile spacing is determined using known or available landslide thrust. 
(1) Direct observation, as demonstrated by Jiang [11] and Dou [12], or scanning and observing the interior of soil using novel methods and technologies, as demonstrated by Jin [13] who used infrared imaging technology to study the soil arching effect from an energy perspective and Chen [14] who used transparent soil technology. This strategy is only appropriate for qualitative analysis; it is not appropriate for reasonable quantitative research. (2) A systematic research of the soil arching effect was conducted by combining model testing with theoretical analysis or numerical simulation [15][16][17][18][19][20]. This is the primary approach of research at the moment. (3) Soil stress was measured during pile-soil contact by embedding a soil pressure sensor, and the soil arch effect was investigated using the soil stress distribution law, in order to deduce the characteristics of soil arch [21][22][23]. Current research on the soil arch effect focuses primarily on the expansion of the soil arch behind the cantilever section of anti-slide piles, highlighting that the soil arch effect is a manifestation of pile-soil interaction and that its characteristics between two piles can be regarded as a manifestation of the pile group effect, which is primarily used to determine the maximum or optimal pile spacing under a specific sliding force. According to the formation mechanism and stress characteristics of the soil arch, the passive soil arch effect in front of the pile should exist and have a substantial impact on the bearing capacity. The current study on the passive soil arch effect in front of piles generated by the interaction between horizontally strained piles (such as an embedded anti-slide pile) and soil in front of piles is insufficient [21,22]. Based on prior research on soil arch in front of piles, this article enhances the test scheme in light of the shortcomings of the typical horizontal stress pile model test. We tested a passive soil arch in front of an embedded portion of anti-slide piles with a spacing four times the width of the pile. To identify the axis of the passive soil arch in front of piles, the contact stress in front of the pile, the bending moment of the pile body, and the soil stress in front of the pile are studied and fitted. To supplement the model test, a realistic three-dimensional numerical model was constructed, and the three-dimensional spatial distribution of passive soil arch in front of piles was investigated using the flexibility of the numerical simulation approach. The research results and experience will reveal the spatial form of passive soil arch in front of the piles, serve as a reference for future research on the bearing capacity of anti-slide piles taking passive soil arch into account, and enhance the theory of pile foundation calculation. Experimentation The test system is built primarily of three components: a model framework, a data collecting system, and a loading system (Figure 1). To assure the loading system's stability during the test, the existing model box was reinforced horizontally with H-section steel. The model box was 5 m in length, 2.5 m in width, and 2 m in height. Figure 2 depicts the model's schematic diagram. In general, an obvious boundary effect appears in the model test scheme for horizontal stressed piles (Luo, 2015(Luo, , 2017, which has a significant effect on the stress distribution of the soil in front of the pile and is easily responsible for reducing the dependability of test data. 
As a result, this research optimizes the present standard test scheme: This test's primary study objective is to evaluate three anti-slide piles and the change rule for soil stress in their vicinity. Five model piles are established, along with one side pile on each side of the primary research area (as shown in Figure 1). Buildings 2022, 12, x FOR PEER REVIEW 3 of 17 Model Similarity Ratio This test is mostly for the purpose of determining the changing law of soil stress in front of a pile, and the similarity between the soil and pile elastic modulus must be established first. Dimensional analysis is used to determine the similarity ratios of major physical quantities, and the specific similarity constants are listed in Table 1. Model Similarity Ratio This test is mostly for the purpose of determining the changing law of soil stress in front of a pile, and the similarity between the soil and pile elastic modulus must be established first. Dimensional analysis is used to determine the similarity ratios of major physical quantities, and the specific similarity constants are listed in Table 1. Model Similarity Ratio This test is mostly for the purpose of determining the changing law of soil stress in front of a pile, and the similarity between the soil and pile elastic modulus must be established first. Dimensional analysis is used to determine the similarity ratios of major physical quantities, and the specific similarity constants are listed in Table 1. Table 1. Similarity constants of main physical quantity. Physical Quantities Similarity Constant The length of the model pile is 2.0 m, the section size is a × b = 14 cm × 10 cm (Figure 3), the lengths of the cantilever and fixed segments are 50 cm and 150 cm, respectively, and the pile spacing is L = 4b = 40 cm, and the layout scheme is illustrated in Figure 4. The length of the model pile is 2.0 m, the section size is a × b = 14 cm × 10 cm (Figure 3), the lengths of the cantilever and fixed segments are 50 cm and 150 cm, respectively, and the pile spacing is L = 4b = 40 cm, and the layout scheme is illustrated in Figure 4. The concrete used in the model piles has a strength grade of C15 (fcu,k = 15 MPa). As shown in Table 2, the mixture ratio was calculated by a large number of experiments, and the compressive strength is 16.59 MPa. The reinforcement ratio of a concrete structure is chosen in accordance with the principle of equal strength (Equation (1)), as illustrated in Figure 3. Reinforcement has a yield strength of 235 MPa and a maximum strength of 310 MPa. ps py,k ms my,k where Aps is the reinforcement area of the prototype structure. Apc is the concrete area of the prototype structure. fpy,k is the standard value of tensile strength of the reinforced bars of the prototype structure. fpcu,k is the standard value of compressive strength of concrete cube of the prototype structure. Ams is the reinforcement area of the model structure. Amc The length of the model pile is 2.0 m, the section size is a × b = 14 cm × 10 cm (Figure 3), the lengths of the cantilever and fixed segments are 50 cm and 150 cm, respectively, and the pile spacing is L = 4b = 40 cm, and the layout scheme is illustrated in Figure 4. The concrete used in the model piles has a strength grade of C15 (fcu,k = 15 MPa). As shown in Table 2, the mixture ratio was calculated by a large number of experiments, and the compressive strength is 16.59 MPa. 
The model soil was created by crushing and screening the original loess and placing it in layers, with the compression modulus serving as the primary control parameter. Compaction degree (compaction number) tests were used to evaluate the compression modulus of each layer during the filling process. Considering the real operating state and bearing mode of anti-slide piles, the model test does not account for a gap between the soil and the pile, and pile-soil contact is assumed to be satisfactory. Following loading, soil samples were collected at depths of 10, 40, 70, 100, 130, and 160 cm for additional geotechnical testing. After comparison and validation with the data collected during the filling process, the physical and mechanical indexes of the model soil were computed, as shown in Table 3, and the profile of the model soil layer is illustrated in Figure 2. Existing research on the soil arch effect indicates that the axis of the soil arch is the reasonable axis of a three-hinged arch, and that the direction of the major principal stress at any location on the arch axis is tangent to the axis at that point, so the horizontal stress components at each point are equal. As a result, the axis of the soil arch should correspond to a contour of the horizontal stress (σx) in front of the pile. Four layers of earth pressure cells are installed in front of the pile in accordance with the characteristics of the passive soil arch, and each layer follows the same layout and naming conventions. The layout is designed on the following principle: using the symmetry of the model, σx and σy are measured at points within the inter-pile ranges on either side of the three middle model piles; the readings are then aggregated into a single range between piles, giving the stress values of each soil point in the two horizontal directions for examination. Taking the first layer as an example (Figure 4), the earth pressure cells are distributed differently on the two sides of the symmetric axis between the piles: Type A cells measure the earth pressure in the y direction, while Type B cells measure the earth pressure in the x direction.
The measuring lines in Figure 4 comprise the central axes of the piles, the central axes between adjacent piles, the quarter lines of the net pile spacing, and lines offset 2 cm inward from the pile edges; their labels are shown in Figure 4. Four layers of earth pressure cells are placed vertically (Figure 2) at buried depths of 25 cm, 35 cm, 55 cm, and 75 cm. The fourth layer of earth pressure cells has a greater buried depth and a smaller pile-soil relative displacement, allowing two rows of earth pressure cells to be omitted. The test used Bx-2 strain-type earth pressure cells with a range of 0.6 MPa, a precision of 1/1000, and a diameter of 2 cm. The method of burying the earth pressure cells was standardized: after filling the soil to 5 cm above the design height, a hole 6 cm deep is dug (allowing for the radius of the cell) and one earth pressure cell is buried; the excavated soil is then ground and backfilled in an equal quantity so that the density of the backfill is close to that of the surrounding soil. Layout of Strain Gauges Strain gauges were installed at a 20 cm interval on all five model piles. For each pile, 2 × 9 = 18 gauges are placed on the front (compression) side and on the back (tension) side, numbered S*F1~S*F18 in front of the pile and S*B1~S*B18 behind the pile (beginning from the pile top), where '*' denotes the pile number (as shown in Figure 5). Epoxy resin was employed as the protective coating of the strain gauges instead of dolomite, to minimize the effect on pile stiffness.
Loading and Data Collection Schemes After the model piles were buried, plastic film was placed on the soil surface to retain moisture, and the fill was placed three days later. A horizontal jack is installed on the cantilever section of each pile, 0.25 m from the pile top, for loading [24]. Force sensors were installed between the jacks and the piles to record the thrust throughout the loading process and to examine the development law of the forces on the various piles. Dial indicators were placed behind the piles at the soil surface, and the displacement δ of the pile at the soil surface was used as the loading control variable. The five horizontal jacks were adjusted simultaneously and equally during loading to achieve steady, synchronous loading; after each load increment, the jacks were held until the data became stable. A DH3816 data acquisition system was used to record simultaneously the horizontal thrust, the pile strain gauges, and the earth pressure in front of the piles. The specific loading steps are as follows: (1) a 0.1 mm preload was applied as the initial state of the test, 30 min after the readings of the data acquisition instrument had balanced; (2) each stage was loaded with 0.5 mm, and data were collected after 30 min of static holding, until δ = 10 mm; (3) each stage was loaded with 1 mm, and data were collected after 45 min, until δ = 40 mm; (4) each stage was loaded with 2 mm, and data were collected after holding for 80 min, until δ = 70 mm.
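The loading steps above can be restated compactly as data, for example to drive an acquisition script; the sketch below simply enumerates the target displacements and hold times from steps (1)-(4), assuming each stage continues in uniform increments up to its end value.

# Enumerate the loading schedule of steps (1)-(4): (target displacement in mm, hold time in min).
# The assumption that increments run uniformly from the previous stage's end value is ours.

def loading_schedule():
    stages = [
        (0.1, 10.0, 0.5, 30),   # from the 0.1 mm preload up to 10 mm in 0.5 mm steps, 30 min holds
        (10.0, 40.0, 1.0, 45),  # 10 -> 40 mm in 1 mm steps, 45 min holds
        (40.0, 70.0, 2.0, 80),  # 40 -> 70 mm in 2 mm steps, 80 min holds
    ]
    delta = 0.1                 # preload, held 30 min as the initial state
    yield delta, 30
    for start, end, step, hold in stages:
        while delta < end:
            delta = round(min(delta + step, end), 1)
            yield delta, hold

for d, hold in loading_schedule():
    print(f"delta = {d:5.1f} mm, hold {hold} min")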
Analysis of Test Results When processing and evaluating the horizontal thrust and bending strain data of the anti-slide piles, the data for piles 1 and 5 are averaged and referred to as the side-pile data, the data for piles 2 and 4 are averaged and referred to as the secondary side-pile data, and pile 3 is identified as the central pile. Horizontal Thrust The force sensors installed between the jacks and the piles are used to record the jack thrust at the various loading levels, and the thrust curve of each pile is plotted against the loading amount, as shown in Figure 6. As the loading amount increases, the response of each pile to the jack thrust can be divided into two stages: a bearing stage and a failure stage. During the bearing stage the thrust grows progressively with the loading amount while the growth rate gradually decreases. When the loading amount reaches 40 mm, the thrust becomes essentially constant and the side-pile thrust drops, indicating that the anti-slide pile system has entered the failure stage. In the bearing stage, the thrust values of the secondary side pile and the central pile are very similar; they can be considered identical, and both are smaller than that of the side pile. This indicates that the presence of the side piles effectively eliminates the boundary-effect problem in the five-pile model test system, and that it is reasonable to investigate the passive soil arch effect in front of the embedded pile using the soil between the central pile and the secondary side pile. Bending Moment of the Piles Equation (2) gives the bending strain of the pile, from which the bending moment of the pile can be determined: εM = (ε+ − ε−)/2, M = E W εM (2) where εM denotes the bending strain of the pile body, ε+ denotes the strain behind the pile, ε− denotes the strain in front of the pile, E is the elastic modulus of the concrete, and W denotes the flexural section modulus of the pile.
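Because Equation (2) itself was lost in extraction, the sketch below assumes its standard form consistent with the symbols just listed, εM = (ε+ − ε−)/2 and M = E·W·εM; the concrete modulus and the choice of bending depth in the example are assumptions, not measured values.

# Minimal sketch of the bending-moment evaluation from the strain-gauge pair.
# Assumed standard form of Equation (2): eps_M = (eps_plus - eps_minus)/2, M = E*W*eps_M.

def bending_moment(eps_plus, eps_minus, E, W):
    """eps_plus/eps_minus: strains behind / in front of the pile (dimensionless),
    E: concrete elastic modulus (Pa), W: flexural section modulus (m^3).
    Returns the bending moment in N*m."""
    eps_M = (eps_plus - eps_minus) / 2.0
    return E * W * eps_M

# Rectangular section, assuming the 14 cm side is the bending depth and the 10 cm side the width:
b, h = 0.10, 0.14
W = b * h**2 / 6.0            # about 3.27e-4 m^3
E = 22e9                      # assumed elastic modulus for C15 concrete, Pa
M = bending_moment(eps_plus=250e-6, eps_minus=-230e-6, E=E, W=W)
print(f"M = {M/1e3:.2f} kN*m")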
As seen in Figure 7, the distribution curves of the bending moment of the central pile and the secondary side pile under the various loading levels are plotted against the distance from the pile top. Because the bending-moment distribution law and values of the central and secondary side piles are very similar, the two can be considered to have the same stress and deformation behaviour. As seen in Figure 8a, the bending-moment distribution curves of the central pile were generated for the various loading amounts. The pile bending moment is zero above the loading point, whereas below the loading point it has a parabolic distribution, consistent with the findings of previous studies [22,25]. The greatest value occurs between 0.8 and 1.0 m from the pile top; when δ = 40 mm, the maximum value is 7.58 kN·m at 1.0 m from the pile top. As the loading increases, the bending-moment values at 0.8 m and 1.0 m increase rapidly. The curves of the bending-moment values at these two locations are plotted against the loading amount (Figure 8b), and it is evident that the growth rate of the bending moment increases dramatically at δ > 40 mm, indicating that the pile is damaged at this point and that the failure zone lies within 0.8~1.0 m of the pile top.
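As an aside, the position of the maximum bending moment between gauge locations can be estimated by fitting a parabola through the three samples around the discrete peak; the sketch below illustrates this with made-up moment values at the 0.2 m gauge spacing.

# Locate the peak of a discretely sampled bending-moment profile by a local parabolic fit.
# The profile values are placeholders, not measured data.

import numpy as np

def peak_location(z, M):
    """z: depths below the pile top (m), M: bending moments (kN*m). Returns (z_peak, M_peak)."""
    i = int(np.argmax(M))
    i = min(max(i, 1), len(M) - 2)              # keep a 3-point stencil inside the array
    a, b, c = np.polyfit(z[i-1:i+2], M[i-1:i+2], 2)
    z_peak = -b / (2 * a)
    return z_peak, np.polyval([a, b, c], z_peak)

z = np.arange(0.2, 2.01, 0.2)
M = np.array([0.0, 0.3, 1.9, 4.3, 7.4, 6.1, 3.0, 1.1, 0.2, 0.0])
print(peak_location(z, M))    # peak expected near 0.9~1.0 m for this placeholder profile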
Following the test, it was found that noticeable fissures had developed behind the pile, mainly near 0.9 m from the pile top, that is, at about one third of the buried depth of the anti-slide pile. Figure 9 is a schematic diagram of the soil in front of the pile. The primary region of investigation is the soil within the span between piles, which corresponds to the shaded area in the figure. The three measured sections are positioned as follows: section I-I lies on the midline between two adjacent piles (x = 0), section II-II lies on the quarter line between the two piles (x = L/4), and section III-III lies at the pile side (x = L/2 − 2 cm), where L = 4b. The soil stress is first analyzed in section I-I. According to the spline-curve fitting, the maximum value of σx occurs between 10 and 20 cm in front of the pile (between two measuring points); this maximum is referred to as σx,max0. Spline curves are then fitted to the σx values at sections II-II and III-III for the different loading levels and, as illustrated in Figure 10, the coordinates of the points where σx = σx,max0 are extracted for fitting the arch axis. As seen in Figure 10a, at a depth of 15 cm and for δ < 40 mm, the points corresponding to σx,max0 in the soil in front of the pile are matched well by a parabola.
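The arch-axis fitting described above can be expressed, as a sketch, as a least-squares fit of a symmetric parabola through the points where σx reaches σx,max0; the coordinates below are placeholders rather than measured values, and symmetry about the inter-pile midline (x = 0) is the only structural assumption.

# Fit a symmetric parabola y = y0 + k*x^2 to the sigma_x,max0 points of sections I-I, II-II, III-III.

import numpy as np

def fit_arch_axis(x, y):
    """Least-squares fit of y = y0 + k*x^2 (symmetry about the inter-pile midline x = 0)."""
    A = np.column_stack([np.ones_like(x), x**2])
    (y0, k), *_ = np.linalg.lstsq(A, y, rcond=None)
    return y0, k

# x: transverse offset from the midline between two piles (m); y: distance in front of the pile face (m).
x = np.array([-0.15, -0.10, 0.0, 0.10, 0.15])
y = np.array([0.05, 0.11, 0.16, 0.10, 0.06])
y0, k = fit_arch_axis(x, y)
print(f"vault at y0 = {y0:.3f} m, curvature k = {k:.2f} 1/m")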
This demonstrates that, when the pile spacing is four times the pile width, a passive soil arch exists in front of the embedded portion of the anti-slide pile and that its arch axis is a parabola. Except at δ = 4 mm, the apex of the soil arch axis moves forward gradually as the loading amount increases, and the arch position stabilizes for δ ≥ 10 mm, showing that the soil arch is fully formed and that its position and shape then vary little. When δ > 12 mm, however, the parabolic soil arch can no longer be fitted accurately from the model test data. At a depth of 35 cm (Figure 10b), clear parabolic arch axes are obtained for δ = 6, 7, and 8 mm, but complete arch axes cannot be obtained for δ > 8 mm, which contradicts previous research findings. At buried depths of 55 cm and 75 cm, no visible arch axis was formed. Analysis of Soil Stress in Front of Pile The failure to fit the entire arch axis from the measured data once the loading level exceeds a certain value is explained as follows: because σx,max0 occurs between two measuring points in section I-I, its precise value and location cannot be obtained, and this limitation cannot be overcome by data processing alone. It is therefore necessary to carry out a supplementary analysis and correction of the model test findings using an appropriate numerical simulation method. Finite Element Model To supplement the model test results and analyze systematically the dynamic evolution of the passive soil arch in front of the embedded section of the anti-slide pile, ABAQUS was used to create a three-dimensional numerical model. It should be mentioned that, based on previous research, a symmetric finite element model was developed to ensure the calculation accuracy and efficiency of the finite element mesh.
The planes of symmetry were taken along the midlines between adjacent anti-slide piles, and only the anti-slide pile and the soil between two such planes were modelled (as shown in Figure 11). Figure 11. The finite element model. The C3D8 element type was used for the finite element model. The soil was simulated with the Mohr-Coulomb (M-C) constitutive model, with the parameters summarized in Table 3. The anti-slide piles were simulated with a linear elastic constitutive model, the parameters of which are listed in Section 2.2.1. The bottom boundary of the model is fixed, the vertical boundaries normal to the y direction allow vertical displacement, and the two vertical boundaries normal to the x direction are symmetry planes. The interface between the pile and the soil is a surface-to-surface contact with a friction coefficient of 0.7. The loading procedure is identical to that used in the model test: a y-direction displacement is applied to the cantilever portion of the anti-slide pile, 25 cm from the pile top, with the specific loading steps detailed in Section 2.4. The mesh size of the anti-slide pile and the soil in the x and z directions is 2.5 cm, as seen in Figure 11. The soil close to the anti-slide pile is also divided with a 2.5 cm mesh in the y direction; beyond 1 m in front of and behind the piles, the mesh size is gradually increased to 10 cm. In this way the mesh refinement required in the primary research area is achieved while the total number of elements is kept small to maximize computational efficiency.
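A rough illustration of the graded mesh seeding just described (2.5 cm elements within about 1 m of the pile in the y direction, growing toward 10 cm farther out) is sketched below; the geometric growth ratio and the far-field extent are assumptions chosen only for the example.

# Generate a graded sequence of element sizes along y, from the pile face outward.

def graded_seeds(fine=0.025, coarse=0.10, fine_zone=1.0, far_extent=2.0, ratio=1.2):
    """Return element sizes (m): uniform 'fine' within fine_zone, then growing toward 'coarse'."""
    sizes, pos, size = [], 0.0, fine
    while pos < far_extent:
        if pos >= fine_zone:
            size = min(size * ratio, coarse)   # grow the element size beyond the fine zone
        sizes.append(size)
        pos += size
    return sizes

seeds = graded_seeds()
print(len(seeds), [round(s, 3) for s in seeds[-5:]])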
Accuracy of the Numerical Simulation Results To assess the reliability of the numerical simulation results, the measured model test data are compared with the numerical simulation results; the comparison considers the bending moment of the piles and the soil stress in section I-I. Bending Moment of Piles Figure 12 compares the measured model test data with the numerical simulation results for the different loading levels. When δ ≤ 40 mm, the measured data and the numerical results follow the same distribution law; the measured values are slightly smaller than the calculated ones, and the discrepancy is negligible, so on the whole the two sets of results agree well. However, when δ > 40 mm the measured data gradually exceed the numerical results at 1.0 m and 0.8 m, because the numerical simulation of the anti-slide piles uses linear elastic elements and does not account for the failure stage of the pile. The numerical simulation results presented in this article therefore focus primarily on the δ ≤ 40 mm stage. Soil Stress in Front of Pile The passive soil arch in front of a pile is a stress arch generated by adjacent piles, so a comparison of the soil stress in front of the pile is also required to demonstrate the correctness of the numerical simulation. Figure 13 compares the measured soil stress from the model test with the numerical simulation in section I-I; d denotes the buried depth in the figure. Both the model test and the numerical simulation show a consistent distribution law. When the loading amount is small, the two results agree well; as the loading amount grows, the relative displacement between the pile and the soil increases, and a larger gap appears between them. The reasons are as follows: as the relative pile-soil displacement increases, the soil in front of the pile experiences noticeable displacement, which changes the position and inclination of the sensors; in addition, when the accuracy of the sensors is considered, a significant gap between the measurements and the numerical simulation can arise. At the same time, the agreement between the two improves with increasing buried depth, because the relative pile-soil displacement at greater depth lags behind the loading. The preceding analysis shows that the numerical simulation results accurately reflect the variation law of the soil stress field in front of the model test piles, so the numerical-simulation-based supplementary analysis of the model test is reasonable and effective.
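The kind of agreement drawn from Figures 12 and 13 can also be summarised with a single number, for instance the mean absolute deviation between measured and simulated profiles normalised by the peak measured value; the sketch below uses placeholder arrays, not the test data.

# Simple agreement metric between a measured and a simulated profile.

import numpy as np

def relative_deviation(measured, simulated):
    """Mean absolute deviation normalised by the peak measured value, in percent."""
    measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.mean(np.abs(simulated - measured)) / np.max(np.abs(measured))

M_test = [0.0, 1.8, 4.2, 7.2, 6.0, 2.9]     # measured bending moments, kN*m (placeholders)
M_fem  = [0.0, 2.0, 4.5, 7.5, 6.3, 3.1]     # simulated bending moments, kN*m (placeholders)
print(f"deviation = {relative_deviation(M_test, M_fem):.1f} %")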
As illustrated in Figure 13, σx,max0 appears 0.1~0.2 m in front of the pile and advances with the loading amount, lying exactly between the model test measuring points on either side. As a result, when the loading amount is large, a large gap arises between the model test results and the numerical simulation results, and the phenomenon described in Section 3.3, in which the whole arch axis cannot be fitted from the measured data, appears. Axis of Passive Soil Arch in Front of Pile The axes of the passive soil arch in front of the piles were fitted for the various loading levels at depths of 15 cm, 35 cm, 55 cm, and 75 cm, as illustrated in Figure 14. From the variation trend of the arch axis the following can be deduced: (1) the passive soil arch in front of the piles is governed by the relative displacement between the piles and the soil. Under the same loading condition, the relative pile-soil displacement decreases with increasing buried depth; thus, the shallower the buried depth, the less loading is required to form the arch and the sooner the arch is destroyed.
Passive soil arches at depths of 15, 35, and 55 cm exhibit four distinct stages as the load increases: formation, development, completion, and destruction. Spatial Form of Passive Soil Arch To investigate how the spatial distribution of the passive soil arch in front of the piles varies with the loading amount, the y value of the soil arch vault was plotted against the buried depth, as shown in Figure 15. As the buried depth increases, the passive soil arch in front of the pile is successively generated and progressively destroyed. In addition, the following outcomes can be obtained: (1) under the same applied load, the spatial distribution of the passive soil arch essentially follows the rule that, along the buried depth, the arch gradually approaches the pile front. (2) When the buried depth is less than 15 cm, the loading range over which the passive soil arch exists from formation to failure is negligible, and its effect on the bearing capacity of the embedded section of the anti-slide pile is negligible; the passive soil arch is most prevalent in the 15~80 cm depth range. (3) As the load increases, the passive soil arch in front of the pile gradually develops downward along the buried depth; the three-dimensional surface grows gradually and the structure as a whole moves away from the pile, with the soil arching progressing from top to bottom until the anti-slide pile structure fails. To construct the three-dimensional surface of the passive soil arch, the loading amount at which the spatial surface of the passive soil arch has its maximum extent was chosen (δ = 30 mm), as illustrated in Figure 16.
As can be observed, the spatial surface of the passive soil arch is separated into three distinct zones according to the buried depth: (1) when the buried depth is greater than 45 cm, as the buried depth decreases the soil arch vault and the arch foot gradually move in the loading direction and the shape of the arch axis changes continuously; this is the development stage. (2) As the buried depth decreases further, the shape and spatial position of the soil arch remain essentially unchanged; this is the completed stage. (3) When the buried depth is less than 25 cm, the passive soil arch is also in the completed stage, but its shape differs owing to the change in buried depth and soil properties. The investigation of the three-dimensional surface of the passive soil arch shows that the relative displacement of pile and soil grows gradually as the buried depth decreases, so the changes in the passive soil arch can be regarded as being governed by the pile-soil displacement. This law is consistent with the conclusion of Section 4.2. It indicates that, within a pile spacing of four times the pile width, the complete process of the passive soil arch in front of the embedded anti-slide pile can be divided into four stages: formation, development, completion, and destruction. Conclusions The existence of a passive soil arch in front of anti-slide piles with a pile spacing of four times the pile width is demonstrated through a model test. The bending strain of the anti-slide pile body is investigated, as well as the shape and development law of the axis of the passive soil arch in front of the pile. At the same time, numerical simulation is used to supplement the model test, and the three-dimensional distribution law of the passive soil arch in front of the pile is investigated.
The following are the major conclusions: (1) By installing side piles, the boundary-effect problem can be greatly alleviated. It is proposed that side piles be placed beyond the main focus of the study in order to eliminate boundary effects in pile-row and pile-group model tests. The jack force on the anti-slide pile increases steadily as the load grows, and its growth rate gradually decreases. When the loading amount reaches δ ≥ 40 mm, the thrust becomes essentially constant as the loading amount increases, and the anti-slide pile system enters the failure stage. (2) After loading, the bending strain of the pile follows a parabolic distribution. When δ exceeds 40 mm, the strain increases rapidly between 0.8 m and 1.0 m from the pile top, indicating that bending failure occurs on the pile body at this position, with the failure point located between 0.8 and 1.0 m; follow-up observation confirms that the failure point is around 0.9 m from the pile top. At the same time, the pile strain values obtained from the model test and the numerical simulation agree well for δ ≤ 40 mm. (3) The points corresponding to σx,max0 at depths of 15 and 35 cm can be fitted with a parabola, demonstrating the existence of a passive soil arch in front of a pile with a spacing of four times the pile width. Moreover, when the loading amount is small, the measured values of the model test agree well with the numerical simulation results, demonstrating the reasonableness of the numerical model presented in this article. (4) Within a pile spacing of four times the pile width, the evolution of the soil arch can be divided into four stages: formation, development, completion, and destruction. With increasing loading, the passive soil arches in front of the piles are produced and then destroyed. As loading increases, the three-dimensional surface formed by the passive soil arch in front of the pile develops downward along the buried depth and moves as a whole in the loading direction; the soil arch is destroyed from top to bottom until the anti-slide pile system fails. Although these studies reveal important findings, there is also a limitation: this paper relies on a model test with a pile spacing of four times the pile width, and the working condition is relatively uniform. The passive soil arch effect in front of the pile should subsequently be studied under multiple working conditions to further investigate its influencing factors, and a method for calculating the bearing capacity of anti-slide piles that takes the passive soil arch effect into account should be proposed.
13,056.4
2022-07-21T00:00:00.000
[ "Engineering", "Environmental Science" ]
The Usage of Crumb Rubber Filtration and UV Radiation for Ballast Water Treatment This research aims to build a ship's ballast water treatment prototype used to inactivate microbial water pathogens in ballast water, producing unpolluted ballast water that meets the standard of the IMO Ballast Water Management Convention. The simple concept used in developing this prototype is to pass ballast water at capacities of 5 lpm, 10 lpm, and 20 lpm through an alternative crumb rubber filter and a UV reactor. In the filtration process using crumb rubber, the ballast water is filtered with a filtration precision of up to 50 microns, while in the UV reactor the ballast water is irradiated with UV-C at a maximum dose of 16.58 mW/cm2. Finally, the study shows the performance of the alternative crumb rubber filtration and UV-C irradiation against microbial water pathogens, and at what UV-C dose the ballast water treatment prototype can inactivate microbial water pathogens in compliance with the IMO Ballast Water Management Convention. I. INTRODUCTION Ballast water is very important for keeping the operation of a ship safe. Ballast water is used to control trim, draft, stability, and the tension on the ship's hull caused by adverse ocean conditions or by changes in cargo weight [1]. According to [2], besides its positive effects for the ship, ballast water can pose major threats to the environment, public health, and the economy. This problem is due to the spread of Invasive Alien Species (IAS) or Harmful Aquatic Organisms and Pathogens (HAOP) through the ballast water medium. To address the problem, in 2004 the IMO issued the Ballast Water Management Convention, which requires ships to use ballast water treatment systems so that the discharged water contains fewer than 10 viable organisms greater than or equal to 50 micrometers per 1 m3; for organisms between 10 and 50 micrometers, fewer than 10 viable organisms per milliliter; and, for the indicator microbes, concentrations below the set limits: Vibrio cholerae less than 1 cfu per 100 mL, Escherichia coli less than 250 cfu per 100 mL, and intestinal enterococci less than 100 cfu per 100 mL. To fulfill the requirements of the IMO BWM Convention, several ballast water treatment methods have been developed for ships, such as filtration and UV irradiation. Continuing this line of research, this study designs a ballast water treatment prototype that combines alternative filtration using crumb rubber with UV radiation to destroy microbial water pathogens. This research will show how crumb rubber performs when used as a ballast water filter and how the UV reactor should be designed. An earlier study (2008) observed the effectiveness of filtration produced by sand, crumb rubber, and a combination of sand and crumb rubber; the effectiveness was assessed in terms of the water turbidity level and the numbers of phytoplankton and zooplankton. Z. J. Ren et al. (2016) studied the potential of UV irradiation technology to kill microorganisms in ship ballast water using static and dynamic experiments: static experiments were used to determine the effect of UV dose on the inactivation of microorganisms, whereas dynamic experiments were used to study the effect of water capacity on the inactivation of microorganisms.
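For reference, the D-2 discharge limits quoted in the introduction can be written as a simple compliance check; the sketch below only restates those limits, and the key names and example counts are invented for illustration.

# D-2 discharge limits as quoted above; sample values are illustrative placeholders.

D2_LIMITS = {
    "organisms_ge_50um_per_m3":  10,     # viable organisms >= 50 micrometres, per m^3
    "organisms_10_50um_per_mL":  10,     # viable organisms 10-50 micrometres, per mL
    "vibrio_cholerae_cfu_100mL":  1,
    "e_coli_cfu_100mL":         250,
    "enterococci_cfu_100mL":    100,
}

def d2_compliant(sample: dict) -> bool:
    """sample maps the keys above to measured concentrations; all must be below the limit."""
    return all(sample[key] < limit for key, limit in D2_LIMITS.items())

print(d2_compliant({
    "organisms_ge_50um_per_m3": 3,
    "organisms_10_50um_per_mL": 2,
    "vibrio_cholerae_cfu_100mL": 0,
    "e_coli_cfu_100mL": 40,
    "enterococci_cfu_100mL": 12,
}))   # True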
The related work summarized above studied the performance of crumb rubber filtration and UV irradiation in ballast water treatment separately. Hence, this paper presents research on the performance of crumb rubber filtration and UV irradiation installed together in a ballast water treatment prototype for inactivating pathogenic water microorganisms. II. DEVELOPMENT OF THE BALLAST WATER TREATMENT PROTOTYPE 2.1 UV-C Radiation UV-C is the part of the ultraviolet spectrum that can be absorbed by ribonucleic acid (RNA), protein, and deoxyribonucleic acid (DNA). UV-C has a wavelength of 200 nm to 300 nm, and exposure at these wavelengths kills bacteria readily; therefore, UV-C light can be used in water treatment systems to inactivate microbial water pathogens (Liu, 2005). In this research, UV-C lamps with a power of 30 W and a length of about 57 cm are used; two such lamps are mounted in a UV reactor with a length of 53 cm and a diameter of 9.7 cm. Crumb Rubber Filtration Crumb rubber consists of chunks of rubber made from raw rubber that is pressed into sheets and then cut into small pieces; it can also be made from waste tires that are cut and milled to the desired size, cleaned, and stripped of any metal particles. In this research, the crumb rubber filter is designed with an internal diameter of 6.5 cm and a filter depth of 20 cm. The crumb rubber media used in this study is made from waste motorcycle tires cut into pieces of about 5 mm. A carbon filter with a screening precision of 5 microns is also used as filter media on the prototype, as a benchmark for the filtering performance of the crumb rubber filter. Prototype Working Scheme The ballast water treatment prototype in this study was built using the following working scheme. Tank number one serves as a reservoir of seawater in the existing condition. Water from tank number one is pumped into the filter at a predetermined capacity. The ballast water capacity in the prototype is regulated using a regulating valve located on the discharge side of the pump; while adjusting the capacity with the ball valve, a flow meter installed downstream of the valve is monitored to determine the water capacity in the system and to ensure that it matches the predetermined capacity variation. After leaving the flow meter, the ballast water flows into the filter, where sediment and all microbes larger than 50 microns are removed. The water then flows into the UV reactor to undergo the microbial inactivation process; the UV reactor contains two 30 W UV-C lamps, and the dose of UV irradiation in this reactor is varied by regulating the lamp voltage with an electrical regulator. Upon exiting the UV reactor, the ballast water is collected in the processed-water tank, from which water samples are taken to analyze the number of microbes contained in the ballast water using the Total Plate Count (TPC) method. Prototype Design Drawing The figure below shows the design drawing of the ballast water treatment prototype using a combination of crumb rubber filtration and UV radiation.
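Before describing the experiments, a rough sense of the exposure time implied by the reactor dimensions and flow capacities above can be obtained from the hydraulic residence time; the sketch below assumes plug flow and neglects the volume occupied by the lamps.

# Hydraulic residence time in the UV reactor (53 cm long, 9.7 cm inner diameter), plug-flow assumption.

import math

def residence_time_s(length_cm=53.0, diameter_cm=9.7, flow_lpm=10.0):
    volume_L = math.pi * (diameter_cm / 2) ** 2 * length_cm / 1000.0   # cm^3 -> litres
    return volume_L / flow_lpm * 60.0                                   # minutes -> seconds

for q in (5, 10, 20):
    print(f"{q:2d} lpm -> about {residence_time_s(flow_lpm=q):4.0f} s in the reactor")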
Experiment In the experiment, seawater is pumped into the ballast water prototype at the variable capacities shown in Table 1. In addition, the UV dose is varied to determine the relationship between the ballast water capacity and the UV dose required for the inactivation of pathogenic microbes in seawater (Table 1). After leaving the prototype, the treated water is tested for its microbial content using the TPC method. The TPC method is a microbial counting method that uses a gel growth medium: the treated seawater obtained from the prototype is diluted from 10^-4 to 10^-7 (Figure 9), and each sample is then incubated in the medium for 24 hours so that living microbes can grow. After 24 hours the colonies are counted and inserted into the formula bacteria/mL = number of colonies × 1/fp [3], where fp is the dilution factor, to give the number of microbes in colony-forming units (cfu). Microbial Content in Seawater Under Existing Conditions The microbial content of seawater in the existing condition was analyzed using the Total Plate Count (TPC) method, which observes bacterial growth on a nutrient agar (NA) medium. The water sample used in this analysis was taken from Kenjeran; after observation by the TPC method, the Kenjeran seawater sample contained 1.31 × 10^5 cfu.
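The plate-count arithmetic used above (bacteria/mL = colonies × 1/fp) is illustrated below for a single plate; the colony count and dilution factor are placeholders chosen only so that the result has the same order of magnitude as the untreated Kenjeran sample.

# Total Plate Count arithmetic: cfu/mL from a colony count and the dilution factor fp.

def cfu_per_mL(colonies: int, dilution_factor: float) -> float:
    """colonies: colonies counted on the plate; dilution_factor: e.g. 1e-5 for a 10^-5 dilution."""
    return colonies * (1.0 / dilution_factor)

# Example: 131 colonies on a 10^-3 plate -> 1.31 x 10^5 cfu/mL.
print(f"{cfu_per_mL(131, 1e-3):.2e} cfu/mL")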
Quantitative Analysis of Microbial Water Using the TPC Method with a Sterile Distilled Water Solvent In this analysis of the microbial content of seawater, a sterile distilled water (aquades) solution was used as the NA solvent medium, in accordance with the protocol for preparing bacteria-growing media in the TPC method. The numbers of microbes observed with this method are given in Table 2. In this quantitative microbial test, results were obtained for all samples of seawater treated with the prototype: no sample contained microbes except the crumb rubber filtration sample at a discharge of 10 lpm and 7.10 mW/cm2, which still contained 10 cfu. With this result, the inactivation of microorganisms in seawater by the ballast water treatment prototype has an efficiency of around 99%. The microbial observations in this quantitative test followed the testing protocol, but an error in the calculated number of microbes is suspected. The possible error arises from the use of a distilled water solution as the diluent in the medium: distilled water has a much lower salinity than seawater and is thought to be less suitable as a diluent for a marine microbial growth medium, so this condition is suspected to inhibit the growth of the microbes in the medium or to kill the bacteria. To examine the effect of salinity on bacterial growth, further testing was carried out using a turbidity method, in which the bacteria in the sample are assessed from the turbidity level read from a spectrophotometer in Optical Density (OD) units. Two blanks were used in this turbidity test, consisting of sterile distilled water and sterile seawater. The turbidity test on the water samples gives an OD value of 0.066 with the distilled water blank, whereas the seawater blank gives an OD value of 0.070, showing a difference between the OD values obtained with the two blanks. The difference in the values may be due to the influence of the salinity level: a salinity that does not match the conditions of the existing microbes in the distilled water blank allows the microbes to undergo lysis during the dilution process, which is why the measurement against that blank with the spectrophotometer produces a lower value than the sample. This shows that differences in the microbial environment, such as salinity, affect microbial growth and lead to erroneous or invalid microbial counts when a distilled water diluent is used. However, the difference in the OD values is very small, namely 0.004, which indicates that the effect of salinity is not very significant. In addition, a turbidity test was conducted on seawater samples that had been processed in the ballast water treatment prototype; the OD values obtained are given in Table 3. Table 3 shows that the filtration and UV irradiation treatment resulted in a decrease in OD value of 0.048. From this result it can be concluded that the salinity difference in the growth medium does affect microbial growth, but, based on the turbidity test, its contribution is relatively small, while the dominant factor in the inactivation of pathogenic seawater microbes in this research is the treatment produced by the ballast water treatment prototype. Quantitative Analysis of Microbial Water Using the TPC Method with a Sterile Seawater Solvent In this analysis of the microbial content of seawater, sterile seawater was used as the NA solvent medium, in order to provide the salinity level required by marine bacteria to grow normally in the growth medium. The results obtained from this observation are given in Table 4. The observation of the seawater samples using the TPC method with sterile seawater in the medium gave inconsistent, varying results; a similar situation occurred in the experiment using carbon filtration, as shown in Table 5. Table 5 shows that, in the seawater treatment experiment at a discharge of 20 lpm, increasing the UV irradiation dose did not increase the degree of microbial inactivation. In sample number one, with a UV dose of 7.10 mW/cm2, the number of living microbes was around 8.3 × 10^6 cfu, larger than the microbial count of the water sample in the existing condition, around 1.31 × 10^5 cfu. The second sample, with a UV dose of 14.21 mW/cm2, showed good inactivation, with no surviving microbes, but this was not supported by a consistent inactivation in sample number three: in the third sample, with a UV dose greater than that of the second sample, 70 mW/cm2, the number of living microbes rose to 7.5 × 10^6 cfu, which also exceeds the number of microbes in the existing condition.
The inconsistency of the seawater sample test results in this study is hypothesised to have several possible causes. The first hypothesis is that the inconsistency is caused by prototype performance factors, i.e. the prototype cannot work optimally in treating the seawater, so that many living microbes remain. The second hypothesis is that the inconsistency is caused by contamination of the medium diluent with marine microbes that had not died even though the diluent had been sterilised beforehand. The third hypothesis is that the inconsistency is caused by microbes that grow significantly once plated on the medium, because they encounter an ideal environment whose salinity, pH, water content, and other supporting factors favour microbial growth.

To test these three hypotheses, a more in-depth quantitative analysis was conducted using the turbidity method. The results of this turbidity test are shown in Table 6. Table 6 shows that UV irradiation treatment alone does not decrease the OD value, whereas with combined filtration and UV irradiation the value decreases by 0.060. From this result it can be concluded that the first hypothesis is not well supported, because the prototype can inactivate microbes well when UV irradiation is combined with filtration. The second hypothesis, in which the inconsistent inactivation is attributed to microbial contamination of the medium by seawater microbes, is assessed as having very little potential, because microbial endospores need more time to grow than the 12-hour TPC incubation period used here. The third hypothesis is considered the most likely of the three. According to Aguskrisno (2011), in an experiment with Escherichia coli in which factors such as the medium, moisture, pH, and temperature remain favourable, a single E. coli cell allowed to breed for 24 hours yields 2^72 = 2^2 × 2^70, or more than 4 × 10^21, cells. By analogy, it is probable that the medium mixed with sterile seawater became an ideal environment for the microbes to reproduce, so that during the 12-hour incubation period the microbes grew significantly.

IV. CONCLUSIONS

Based on the results of the seawater treatment experiments using the ballast water treatment prototype, it can be concluded that:
1. The ballast water treatment prototype can inactivate microbes in marine water samples by 99% when the prototype is run using carbon or crumb rubber filtration, with a water capacity of 20 lpm and a UV irradiation dose of 7.10 mW/cm².
2. The carbon filter performs better than the crumb rubber filter in filtering seawater. This is shown by the seawater samples treated at a discharge of 10 lpm with a UV dose of 7.10 mW/cm²: with carbon filtration the number of bacteria growing is around 0 CFU, while with crumb rubber filtration around 10 CFU grow.
3. Ballast water treatment using the filtration method alone is not able to reduce the water turbidity caused by microbes, which remains at an OD level of 0.070, while with combined filtration and UV irradiation the OD level decreases to 0.010.

Fig. 9. Procedure of quantification by the Total Plate Count method.
Table 1. Seawater treatment experiments.
Table 4. Microbial quantitative analysis using TPC and
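Returning to the growth estimate invoked for the third hypothesis above, the figure of more than 4 × 10^21 cells follows from an assumed E. coli generation time of about 20 minutes, a textbook value rather than something measured in this study. A minimal sketch:

```python
# Sketch (ours) of the exponential-growth estimate behind the third hypothesis.
# Assumption: an idealised E. coli doubling time of 20 minutes with no nutrient
# limitation (a textbook figure, not measured in this study).

doubling_time_min = 20
generations_24h = 24 * 60 // doubling_time_min   # 72 divisions in 24 hours
print(2 ** generations_24h)                      # ~4.7e21 cells, i.e. > 4 x 10^21

# Even within the 12-hour TPC incubation used here, one surviving cell could
# in principle give about 2^36 ~ 6.9e10 descendants.
generations_12h = 12 * 60 // doubling_time_min
print(f"{2 ** generations_12h:.2e}")
```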
3,766
2017-12-22T00:00:00.000
[ "Environmental Science", "Engineering" ]
Rescuing the nonjet (NJ) azimuth quadrupole from the flow narrative

According to the flow narrative commonly applied to high-energy nuclear collisions, a cylindrical-quadrupole component of 1D azimuth angular correlations is conventionally denoted by quantity $v_2$ and interpreted to represent elliptic flow. Jet angular correlations may also contribute to $v_2$ data as "nonflow" depending on the method used to calculate $v_2$, but 2D graphical methods are available to ensure accurate separation. The nonjet (NJ) quadrupole has various properties inconsistent with a flow interpretation, including the observation that NJ quadrupole centrality variation in A-A collisions has no relation to strongly-varying jet modification ("jet quenching") in those collisions commonly attributed to jet interaction with a flowing dense medium. In this presentation I describe isolation of quadrupole spectra from $p_t$-differential $v_2(p_t)$ data from RHIC and the LHC. I demonstrate that quadrupole spectra have characteristics very different from the single-particle spectra for most hadrons, that quadrupole spectra indicate a common boosted hadron source for a small minority of hadrons that "carry" the NJ quadrupole structure, that the narrow source-boost distribution is characteristic of an expanding thin cylindrical shell (strongly contradicting hydro descriptions), and that in the boost frame a single universal quadrupole spectrum (L\'evy distribution) on transverse mass $m_t$ accurately describes data for several hadron species scaled according to their statistical-model abundances. The quadrupole spectrum shape changes very little from RHIC to LHC energies. Taken in combination those characteristics strongly suggest a unique {\em nonflow} (and nonjet) QCD mechanism for the NJ quadrupole conventionally represented by $v_2$.

Introduction

The flow narrative, believed by some to be of central importance to high-energy nuclear collisions [1,2], is based primarily on v_2 data interpreted to represent elliptic flow - azimuthal modulation of radial flow of a locally-thermalized bulk medium in non-central A-A collisions. Earlier versions referred to more-central A-A collisions at higher collision energies. More recently, "collectivity" in small systems (p-p, p-A) has been claimed as well, based on certain LHC data [3,4]. However, evidence strongly contradicting the flow narrative has accumulated over the past ten years: Differential analysis of p_t spectra from 200 GeV Au-Au collisions reveals no evidence for radial flow - the blast-wave model said to measure radial flow responds instead to a predicted and observed strong jet contribution to spectra [5]. Measurements of p_t-integral nonjet (NJ) v_2, based on model fits to 2D angular correlations that exclude a jet contribution ("nonflow") [6], reveal v_2 systematics uncorrelated with "jet quenching" [7], contradicting claims for a dense bulk medium [8,9]. Equivalent systematic v_2 trends are observed for high-multiplicity A-A collisions and for p-p collisions down to negligible particle densities [10]. There is no evidence for a QCD phase transition or changing equation of state. In this talk I present quadrupole spectra inferred from v_2(p_t, b) data that further contradict the flow narrative. Quadrupole spectra from recent LHC Pb-Pb data are compared to those from previous analysis of 200 GeV Au-Au data [11].
Quadrupole spectra reveal a hadron source boost incompatible with Hubble expansion of a flowing bulk medium, and the spectrum shape is very different from the single-particle (SP) spectrum for most final-state hadrons.

Quadrupole Spectrum Definition

p_t-differential v_2 is defined as the ratio of the quadrupole (m = 2) Fourier amplitude of the event-wise azimuth-dependent SP spectrum to the azimuth-averaged SP spectrum. As defined, that ratio may include two jet contributions: (a) jet-related angular correlations in the numerator and (b) the SP spectrum hard component in the denominator. The v_2 definition assumes that almost all hadrons "carry" the quadrupole correlation component and are therefore described by the same SP spectrum, which should then cancel in the v_2 ratio. The NJ quadrupole Fourier amplitude, assuming a source-boost azimuth distribution Δy_t(φ_r, b) = Δy_t0(b) + Δy_t2(b) cos(2φ_r), can be expressed as in Eq. (2) of Ref. [11], where φ_r is φ relative to a reference angle, p′_t is p_t in the boost frame, transverse rapidity is y_t = ln[(m_t + p_t)/m_h], and ρ_2(y_t, b) is the spectrum for those hadrons carrying the quadrupole correlation component, which may or may not be equivalent to ρ̄_0(y_t, b).

Ideal Hydro

The following simple system illustrates some implications of Eq. (2). The source-boost distribution, which should be broad for Hubble expansion of a bulk medium, is assumed to be a single fixed value Δy_t0 for each collision system. The quadrupole spectrum ρ_2[y_t, b; Δy_t0(b)] is assumed to coincide with SP spectrum ρ̄_0[p_t, b; Δy_t0(b)], including centrality-dependent radial flow [12] measured by Δy_t0(b). In that case v_2(p_t) ≈ p′_t (Δy_t2/2T_2) and v_2(p_t)/p_t ∝ p′_t/p_t. Figure 1 (a) shows p_t in the boost frame vs lab p_t for three hadron species. Figure 1 (b) shows the ratio of boost-frame to lab p_t vs y_t (with proper hadron mass) as a universal curve common to all hadron species, with shape determined solely by the fixed source boost Δy_t0 = 0.6. Figure 1 (c) shows panel (a) adjusted to anticipate 200 GeV v_2(p_t) data, and panel (d) shows the equivalent for ratio v_2(p_t)/p_t. In each case viscous-hydro predictions for 200 GeV Au-Au collisions (dotted curves) [13], shown for comparison, exhibit striking differences from the "ideal-hydro" example. This exercise illustrates "mass ordering" at lower p_t in the conventional plotting format of panel (a). The quantitative source-boost distribution that hydro theory actually predicts (broad or narrow?) is more easily tested in the format of panel (d) than in (a) or (c).

Figure 1. "Ideal-hydro" kinematic trends assuming a single value of source boost Δy_t0 and a quadrupole spectrum equal to the SP spectrum [11]. Viscous-hydro theory trends [13] are included for comparison.

200 GeV Au-Au Quadrupole Spectra

Quadrupole spectra can be inferred from v_2(p_t, b) data in a few steps. Figure 2 (a) shows 0-80% central v_2(p_t) data for three hadron species in the conventional plotting format [14,15]. "Mass ordering" below 1.5 GeV/c is said to indicate a hydro mechanism. Curves representing "ideal hydro" cross the top edge. A viscous-hydro theory curve for protons is indicated by R [13]. Curves passing through the data at higher p_t are explained below. Figure 2 (b) shows the same data in the form v_2(p_t)/p_t(lab) vs y_t with proper mass for each hadron species. The ideal-hydro curves go to a constant value for higher y_t, and the data trends for the three hadron species share a common zero intercept that can be identified with fixed source boost Δy_t0 ≈ 0.6.
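To make the fixed-boost kinematics concrete, here is a minimal numerical sketch of our own, not taken from Ref. [11]. It uses the standard transverse-rapidity definition y_t = ln[(m_t + p_t)/m_h], applies the boost as a simple shift Δy_t0 = 0.6 on y_t, and prints the boost-frame to lab p_t ratios that illustrate the species-independent behaviour described for Figure 1 (b).

```python
import numpy as np

# Sketch (ours) of the fixed-source-boost kinematics used above:
# yt = ln[(mt + pt)/mh]; the boost is a shift yt -> yt - dyt0;
# boost-frame pt recovered as mh*sinh(yt - dyt0). Units: GeV.

MASSES = {"pion": 0.1396, "kaon": 0.4937, "proton": 0.9383}
DYT0 = 0.6  # fixed source boost inferred from the data

def yt(pt, mh):
    mt = np.sqrt(pt**2 + mh**2)
    return np.log((mt + pt) / mh)

def pt_boost(pt, mh, dyt0=DYT0):
    return mh * np.sinh(yt(pt, mh) - dyt0)

pt_lab = np.array([1.0, 2.0, 4.0, 6.0])
for name, mh in MASSES.items():
    print(name, np.round(pt_boost(pt_lab, mh) / pt_lab, 3))

# Plotted against yt (with the proper mass for each species) the ratio falls
# on one curve fixed only by the boost, approaching exp(-DYT0) ~ 0.55 at high yt.
```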
Figure 2 (c) confirms that the (dotted) viscous-hydro theory curve is strongly falsified by the Lambda (or proton) data. Figure 2 (d) shows the quadrupole Fourier coefficient as the product V_2(y_t) = ρ̄_0(y_t) v_2(y_t) obtained via SP spectra for three hadron species [11].

Figure 2. Data from Refs. [14,15] processed to obtain V_2(y_t) = ρ̄_0(y_t) v_2(y_t) ∝ p′_t(y_t) ρ_2(y_t; Δy_t0) quadrupole spectra as in Ref. [11].

To obtain final quadrupole spectra from the data in Fig. 2 (d) three more steps are required. The spectra are multiplied by the ratio p_t(lab)/p_t(boost) determined exactly by the inferred Δy_t0 = 0.6, shifted left by Δy_t0 from lab-frame y_t to boost-frame y_t, and finally transformed to boost-frame m_t with the appropriate Jacobian. The result is shown in Figure 3. The spectra, rescaled by their statistical-model abundances relative to pions [16], lie on a common locus (solid curve) with slope parameter T_2 = 92 MeV and Lévy exponent n_2 = 14, dramatically different from the hadron SP spectrum with T_0 = 145 MeV and n_0 = 12 [5]. The solid curve, back-transformed by reversing the sequence, gives the curves passing through the data in Fig. 2.

Figure 4 (a,b) shows v_2(p_t, b) data vs p_t for pions and protons from seven centralities of 2.76 TeV Pb-Pb collisions [18]. 200 GeV v_2(p_t, b) data are presented in Ref. [19] for comparison. Figure 4 (c) shows the proton data plotted as in Fig. 2 (b) but rescaled by p_t-integral values v_2(b) [20]. The dashed curve is the 200 GeV equivalent. The 2.76 TeV data reveal significant variation of source boost Δy_t0(b) with centrality. Figure 4 (d) shows the same data with all centralities boosted (shifted on y_t) to coincide with the 30-40% data. Within uncertainties all centralities follow the same locus, which coincides also with the 200 GeV dashed curve.

Figure 5 (a,b) shows SP spectra for 2.76 TeV collisions [21]: (a) The Pb-Pb pion spectrum normalized by N_part/2 must be divided by a factor 1.65 to coincide with the p-p spectrum at lower p_t, in accord with 200 GeV data [5]. (b) The Pb-Pb proton spectrum normalized by N_part/2 (solid dots) is divided by the same factor but falls substantially below (by a factor 2) the p-p spectrum at lower p_t, suggesting significant uncorrected inefficiency in that interval. Figure 5 (c) shows the equivalent of Fig. 2 (d) for four hadron species. The dashed curves through those data are the dashed curves for 200 GeV v_2(p_t, b)/p_t(lab) multiplied by the Pb-Pb SP spectra rescaled by 1/1.65. Figure 5 (d) shows those data multiplied by the factor p_t(lab)/p_t(boost) and transformed to the boost frame (shifted left by Δy_t0 = 0.6). The 200 GeV quadrupole spectrum data (inverted solid triangles and thin solid curve) are shown multiplied by the factor 2.5 expected from the observed energy scaling of v_2 and ρ̄_0. Slope parameters T_2 are not significantly different, but the Lévy exponent decreases significantly at the higher energy, consistent with the trend for the SP spectrum soft component Ŝ_0(m_t) [21,22]. Just as at RHIC energies, there is a great difference between quadrupole spectra and SP spectra. The SP spectrum soft component Ŝ_0(m_t) for 2.76 TeV p-p collisions plotted on m_t is shown as the dashed curve. The spectrum for protons (and Lambdas) falls substantially below the bold solid curve for m_t < 0.7 GeV/c², consistent with the spectrum result in Fig. 5 (b). The dotted curves for kaons and protons are the 200 GeV dashed curves in Fig. 5 (c) derived from the solid curve in Fig. 3, processed in this case with the corresponding 2.76 TeV SP spectra.
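The difference between the quadrupole and SP spectrum shapes quoted above (T_2 = 92 MeV, n_2 = 14 versus T_0 = 145 MeV, n_0 = 12) can be visualised with a short sketch. The functional form below is a common Lévy/Tsallis parametrisation assumed here for illustration; the precise normalisation and variable conventions of Ref. [11] may differ.

```python
import numpy as np

# Hedged sketch: a Levy/Tsallis-type spectrum shape on transverse mass mt,
# f(mt) ~ [1 + (mt - mh)/(n*T)]**(-n). Parameter values are taken from the
# text; the exact conventions of Ref. [11] may differ.

def levy(mt, mh, T, n):
    return (1.0 + (mt - mh) / (n * T)) ** (-n)

mh_pi = 0.1396                          # pion mass, GeV/c^2
mt = np.linspace(0.2, 3.0, 8)           # GeV/c^2

quad = levy(mt, mh_pi, T=0.092, n=14)   # quadrupole spectrum: T2 = 92 MeV, n2 = 14
sp = levy(mt, mh_pi, T=0.145, n=12)     # SP spectrum:         T0 = 145 MeV, n0 = 12

# The smaller slope parameter makes the quadrupole spectrum fall much faster,
# the "dramatic" difference from the single-particle spectrum noted above.
for m, ratio in zip(mt, quad / sp):
    print(f"mt = {m:.2f} GeV/c^2   quadrupole/SP = {ratio:.3e}")
```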
Summary

Quadrupole spectra for identified hadrons, which may involve only a small fraction of final-state particles that actually "carry" the quadrupole component, can be extracted from p_t-differential v_2(p_t, b) data by a simple sequence of transformations, given the availability of matching single-particle hadron spectra. The sequence leads to determination of a hadron source-boost distribution consistent with a single value Δy_t0 (for a given collision system), which is inconsistent with the broad distribution expected for Hubble expansion of a flowing bulk medium. Quadrupole spectra at 2.76 TeV are remarkably similar to those at 200 GeV and very different from single-particle hadron spectra, contradicting a basic assumption of the flow narrative that almost all hadrons must participate in such flows. These new results, combined with related trends observed over the past ten years, strongly suggest that the nonjet azimuth quadrupole does not represent a hydrodynamic flow. The NJ quadrupole may instead be the manifestation of a QCD mechanism similar to QED antenna radiation.

(Figure caption) The solid curve is a Lévy distribution with T_2 = 94 MeV (dash-dotted curve) and exponent n_2 = 12, very different from the p-p SP spectrum soft component Ŝ_0(m_t) (dashed curve) with values T_0 = 145 MeV and n_0 = 8.8 [5]. The 200 GeV quadrupole spectrum, multiplied by the anticipated energy-scaling factor 2.5, is plotted as the thin solid curve and inverted solid triangles.
2,807
2016-09-25T00:00:00.000
[ "Physics" ]
Double trouble? Towards an epistemology of co-infection

Tuberculosis and HIV co-infection came to figure as one of the major global health problems at the beginning of the twenty-first century, with multiple attempts to tackle this intricate issue on epidemiological, clinical, and public health levels. In this article, we propose thinking beyond the practical problems caused by co-infections in order to explore medicine's epistemological attachment to the idea of single diseases, using TB/HIV as an analytical lever. We retrace how TB/HIV co-infection has been problematised in public health discourses since the 1990s, particularly in WHO reports and international public health journals, and show that it has been mainly discussed as a complex biosocial phenomenon in need of more resources. The epistemological interrogation of the concept of co-infection itself - as an entangled object of two or more diseases with different histories and social, political, and scientific identities - is largely missing. To elaborate on this gap, we look at the translational processes between the two diseases and their communities, and suggest concrete historical and ethnographic entry points for future research on this global health phenomenon.

Introduction

#deadlyduo

During the twentieth international AIDS conference, held in Melbourne in 2014, two colourful creatures jumped around the convention and exhibition centre: a rather long, green stick with a funny face, licking its lips in pleasant anticipation, and a plump, purple ball, raising its eyebrows in pitiable despair. Under the hashtag #deadlyduo, the already significant global attention to the conference was supplemented by tweeted pictures of the human-sized plush figures - the green stick embodying TB and the purple ball incarnating HIV. The duo advertised a 'first-time-ever' event at an international AIDS conference: a 'TB/HIV networking zone' with a plenteous programme of speakers, studies, and events, all dedicated to the ever more pressing phenomenon of co-infection. The comical representation of the two pathogens exemplifies the common strategy of ridiculing the causal agent of a disease in order to empower those who oppose it. This spectacle of two personified pathogens worked, also, as recognition of the distinctiveness of each disease. While TB/HIV has been much campaigned upon, and while different protagonists and institutions have worked hard to introduce this #deadlyduo into global health's repertoire of action, TB and HIV never ceased to be two separate entities. Much work was needed to bring them together into one frame of reference, so that they could circulate on Twitter and elsewhere together. Already in the late 1980s, papers were being published that not only argued for the inclusion of TB on the list of AIDS index diseases but that also pointed to the changing biology of TB in cases of HIV, and thus to the drastic increase of complexity when dealing with patients who manifest both diseases (Nambuya et al. 1988). Since the emergence of a significant number of TB cases in people with HIV during the last two decades, the infection of one person with TB and HIV simultaneously has been called a broad variety of names. The earliest official WHO papers reported a 'deadly partnership' (WHO Global Tuberculosis Programme and UNAIDS 1996), while in more recent publications, TB/HIV co-infection has been referred to as a 'perfect storm' (Yoon 2007) and a 'deadly liaison' (Kaufmann and Walker 2009).
A special issue of the Journal for Infectious Diseases framed it as 'synergistic pandemics' (Mayer and Dukes Hamilton 2010), while others conceptualised it as 'double suffering' (Vidal and Kuaban 2011) or 'double stigma' (Daftary 2012). In 2007, 1.37 million people infected with HIV were estimated to be co-infected with TB, and one in four deaths from TB was related to HIV (Getahun et al. 2010). These numbers illustrate the severity of TB/HIV co-infection as a public health threat; our interest is in how it has become increasingly 'branded' (Ogden, Walt, and Lush 2003) as a unique problem in need of independent political and financial means at the beginning of the new millennium. The combined acronym of TB/HIV has thus turned into a distinguishable and common 'brand' in the field of global health, much like related treatment approaches including DOTS, HAART, PreP, or TasP, and the attempt to combine them (Farmer et al. 2001). 1 The addition of syndemics Yet does one plus one really equal two? Are TB and HIV taken together 'just' a deadly duo? The coappearance of both diseases in patients indeed leads to an added level of complexity, oftentimes framed as a 'syndemic' (Singer 1996), yielding new questions, problems, and challenges for the clinic and public health. The objective of a syndemics approach is to acknowledge co-occurring epidemics as fundamentally entangled and structured by similar epidemiological conditions of poverty, inequality, and discrimination, all adding up to states of bad health (Singer et al. 2006). Syndemics research thus focuses on those 'communities experiencing co-occurring epidemics that additively increase negative health consequences' (Singer 2009, 12). In other words, co-infections like TB/HIV but also epidemics like drug addiction and hepatitis 'add up', becoming even more severe, multiplying their disastrous health consequences. Within a concept of syndemics, co-infections are thus framed as multiplied deadly afflictions, which constitute first and foremost a deadly phenomenon, as well as an intriguing practical problem in the clinic and in public health, determined by structural factors. In the syndemics literature, it is argued that these structural factors not only add up to bad health, but also multiply the vectors of overall disease burden (Singer 2009, 21). The 'inability to take into account biological, social, and political issues of coinfection' (Taylor and Harper 2014, 199) is held responsible for the ravages of syndemics, and it is in this field that public health policies and treatment and prevention programmes need to be improved. In the end, the concept of syndemics shows how single-disease approaches that do not take into account structural inequalities constantly fail. Its proponents usually make a prescriptive argument that such inequalities should be addressed in conjunction with each other, taking into account the multiplied complexity for treatment and prevention in dire socioeconomic conditions when more than one epidemic occurs at a time. Proposing syndemics as a way to conceive of the practical and structural problems coinfections create does not, however, seem entirely satisfactory to us, as it does not fundamentally question the single-disease framework and its associated ways of research. We therefore suggest taking TB/HIV not only as a practical problem but also as a heuristic lens, as an analytic lever, so to speak. 
Instead of only creating problems, TB/HIV can also open up new ways to rewrite and rethink the histories and presents of TB and HIV, as well as the general phenomenon of co-infection: entanglement. We argue that as a new, combined entity, TB/HIV permits not only a fresh look at each disease's field of knowledge and practice, but also epistemological questions on how to know and see infectious diseases. We suggest, with this perspective, that one plus one does not equal two: co-infections are more than deadly duos, they are more than complex syndemics. They are intriguing 'epistemological obstacles' (Bachelard 2002) - not only for medical and public health practice, but also for the social sciences and humanities. We understand epistemological obstacles to be productive: they challenge established ways of seeing and dealing with disease, and allow us to investigate new avenues for different kinds of research on epidemics. Seeing diseases as co-infections allows us to think and analyse beyond the given narratives of specific diseases, and draws our attention to common problems, shared underlying conditions, and the ways in which strategies and concepts, which have been developed to tackle one disease, travel on to another. Conferences, archives, library sections, reading lists, series and collections, and book chapters and articles regularly follow disease 'biographies'. It thus takes a great deal of intellectual effort and creativity to bridge their histories, to relate them; to find concepts, practices, and ideas in the in-between of two diseases; and to follow how one disease informs the ways in which another disease is approached. Some work has already been done to show how epidemics simultaneously reveal and veil each other, most recently by Julie Livingston (2012), who shows the interrelations between AIDS and cancer in Botswana, and by Johanna Crane (2011), who links the conceptualisation of treatment resistance in the field of HIV to the problematisation of multiresistant tuberculosis. Even as we write this, other co-infections like AIDS and hepatitis C are becoming new grounds of research and funding (see, for example, Chabrol 2014c). As researchers in the field of anthropology and the history of infectious diseases, we wish to slightly step back from the dire reality of multiplied disease loads, and instead interrogate the historical conditions in which co-infections became problems of health politics and policies in the first place. By shedding light on the processes through which TB/HIV co-infection came to figure as a productive concept, we aim to foreground issues of clinical complexity, lack of funding, pharmaceutical development, and advocacy in the field of global public health. Our argument - that thinking with 'co-infection' challenges our idea of disease entities - is therefore informed by a historical anthropology of sorts, rather than ethnographic material collected through interactions with health professionals on the ground or global health actors and institutions. While this is a preliminary investigation of the global health and medicine policy documents that shaped the discourse on co-infection in the late 1990s and early 2000s, our larger epistemological argument suggests ways in which historical and ethnographic research on diseases of co-infection might be conceptualised in the future.
TB/HIV as an epistemological obstacle

TB/HIV is a paradigmatic example of the insufficiency and inadequacy - and yet solidity - of singular clinical, epidemiological, and other classification systems, which are made to separate and to distinguish phenomena. These systems are designed to order complexities, rather than perceive them. We take TB/HIV as a synecdoche, a case that exemplifies the fundamental questions that arise when dealing with the parallel and entangled occurrence of multiple diseases at the same time, where one disease veils and simultaneously reveals the other (Livingston 2012). As recent studies on hepatitis C and HIV (Greub et al. 2000; Chabrol 2014c) show, pathologies manifest differently once their entanglement ceases to be masked by the divisions - diagnostic procedures, economic interests, research opportunities, and political urgency - of the medical and scientific field. As such, co-infections pose challenges to treatment guidelines, clinical protocols, randomised trials, epidemiological models, and practices of care - as well as the epistemological premises of writing and thinking about diseases in the social sciences and humanities. That is why we understand TB/HIV as an epistemological obstacle in the productive sense. As an entangled phenomenon, TB/HIV produces continuous and shifting states of complexity, which are due to the distinct histories and presents of TB and HIV, but which can never fully be referred to by them nor be explained by them alone. TB/HIV thus constitutes a new entity for public health while at the same time being entrapped in two already-existing disease histories, which have to be rewritten for the future. In TB/HIV, we thus see co-infection as a process that both dissolves and stabilises two of the major infectious diseases in global health. To better grasp this parallel process we turn to Ludwik Fleck's (1981) seminal work on the genesis of a scientific fact. Fleck (1981, 109) used the term 'translation' to describe the process of two thought collectives talking to each other: 'Collectives, if real communication exists between them, will exhibit shared traits independent of the uniqueness of any particular collective'. Building on Fleck's description of how the concept of syphilis passed from one thought community to another, we see translation in the context of TB/HIV as the following: the entanglement of TB and HIV can be understood as the passage of one thought community and disease concept through the other. We take from Fleck the observation that if an entangled object made of different collectives appears, there ought to be a common ground in both diseases that permits the entangling of the object in the first place. Thus, co-infection allows us to interrogate the pasts and presents of diseases for their common ground and shared traits. Such features might contest the uniqueness and specificity of both TB and HIV/AIDS; they might also reveal that treatment and prevention of TB and HIV continue to be largely structured by a modern understanding of infectious diseases as caused by discrete microbiological agents, best solved through pharmaceutical treatment, and always entangled in the social, political, and cultural webs that make up societies in history. If translation is successful, and communication takes place, then transformation is inevitable.
Fleck (1981, 111) continues: 'Communication never occurs without a transformation, and indeed always involves a stylised remodeling, which intracollectively achieves corroboration and which intercollectively yields fundamental alteration'. The crucial point here for the case of TB/HIV co-infection is: when the thought collectives of TB and HIV communicate, they are always already transforming their ideas and concepts of the diseases at stake. This means that the concepts a) gain strength and significance within the existing collectives, and simultaneously b) become altered in between the thought collectives. For Fleck, this doubled process of alteration and corroboration is the basis for an epistemological approach to the genesis and development of any knowledge. Our brief evaluation of both diseases in their own spheres, and of efforts to address their short entangled history, reveals both the sturdiness of certain aspects of each disease as well as the fluidity and contingency of the very same cultural, social, and biological elements involved in their making. While the lens of co-infection might partly dissolve the existence of single-disease concepts in medicine and public health, their practical use endures, solidly anchored. Given the endurance of single-disease concepts, we begin in the next section by engaging with the histories and presents of TB and HIV. Following that discussion, we trace the features of each disease that get invoked in relation to the phenomenon of TB/HIV coinfection, as found in public health publications and WHO reports from the early 1990s through the present. We then use this analysis as a tool to open up new research questions and to propose strategic entry points for future ethnographic and historical research in this emergent field of histories and anthropologies of co-infection. Single diseases In his classic work on the history of nosography -the systematic description of diseases -Knut Faber (1930, 7) states that the clinician 'cannot live, cannot speak or act without the concept of morbid categories'. The idea of morbid categories, or single-disease concepts, is of course older and can easily be traced back to Hippocrates and beyond -if it is understood as a set of abstract signs that describe disease, used to organise diagnostics, treatment, and surveillance. The single-disease concept became fundamental to medicine (and the history of medicine and anthropology) at the beginning of the nineteenth century, with the birth of modern medicine and its empirical lab procedures and scientific principles of proof. Elaborating on this claim, Charles Rosenberg (2002, 237) argues that a 'modern history of diagnosis is inextricably related to disease specificity, to the notion that diseases can and should be thought of as entities existing outside the unique manifestations of illness in particular men and women. During the past century especially, diagnosis, prognosis, and treatment have been linked ever more tightly to specific, agreed-upon disease categories, in both concept and everyday practice'. The phenomena of co-infection would thus seem difficult to accommodate into modern medicine's etiological theories and its way of seeing disease as singular, or perhaps additive at best. But neither do single diseases, however, easily come into clinical existence as such. 
It is precisely in the histories of TB and HIV, and their respective stabilisation as single diseases with a characteristic scientific, clinical, social, and cultural profile, that one can begin to understand the emergence of TB/HIV as an additive conceptual entity, one that engenders practical problems, and the reasons for the difficulties in treating or researching both diseases as a lived entanglement.

TB

In Europe and North America in the nineteenth and early twentieth centuries, TB was the 'number one cause of death' (Packard 1989, 1). Known as the 'white plague', it triggered bacteriological and medical research of an unprecedented kind (Gradmann 2005), which resulted in the creation of a strong tuberculosis research, practice, and policy community at the intersection of science, medicine, and society. It is thus not accidental that, in medical history accounts, tuberculosis figures as the paradigmatic disease of early biomedicine of the 1940s and 1950s, when the relations between the laboratory, the clinic, and the pharmaceutical industry were reconfigured (Quirke and Gaudillière 2008). Since then, the TB community has become international in scope, consisting of national and international medical associations like the International Union of Tuberculosis and Lung Disease, bacteriological reference laboratories, vertical disease-control programmes (see Harper 2006), and medical institutions like dispensaries and treatment centres across the world. At the European level, nationalised public health strategies include screening (see Armstrong 2012; Welshman and Bashford 2006), contact tracing (see Kehr 2012a), and isolation (see Strange and Bashford 2003). Worldwide, the WHO's Directly Observed Treatment Short-Course strategy (DOTS) dominates TB control, especially in the Global South. TB has always disproportionately affected poor, disadvantaged, and dominated populations. It thus exemplifies the complex relationship between social inequalities, biological processes, biomedical research, and the unequal development of disease in different groups of people, making it a truly biosocial phenomenon. Anthropologist Erin Koch (2013a, 309) has recently argued that TB can be seen as a 'threshold where social and biological aspects of disease are negotiated'. TB is therefore an interesting object for the social history of medicine (Amrith 2004; Barnes 1995; Bryder 1988; Condrau and Worboys 2010; Packard 1989), critical medical anthropology (Draus 2004; Farmer 2000; Kehr 2012b; Keshavjee 2014; Koch 2013b), and social epidemiology (Gandy and Zumla 2003), fields of research that we also see as part of the TB community. TB is, in other words, a vantage point from which both the history and the present of complex interrelations between disease, medicine, and society can be examined. Yet compared to the large amount of scholarship generated in the field of HIV/AIDS, in the social as well as medical sciences, the body of work taking TB as a primary object of research is almost ridiculously small, and so is the TB community. Why is this so? One reason is that TB began to disappear as a major public health problem in Europe and North America in the 1960s, which led to considerable neglect of this disease during the 1970s and 1980s in the international arena and in research (Ogden et al. 2003, 180). TB, at least in the North, had become 'manageable' in the 1960s with the advent of antibiotic combination therapy.
Long sanatorium stays and yearlong treatments were transformed into short-term relations between patients and health professionals, mediated through the mostly technical administration of drugs. Additionally, economic and social developments like universal access to health care, social insurance, improved living conditions, and a decrease in poverty in the postwar years proliferated in the North. The de facto availability of treatment coupled with these welfare advances thereby effectively contributed to declining TB disease rates in Europe and North America. TB thus became less and less visible in Northern societies, as prevention campaigns and mass-screening measures, such as mobile X-ray vans, gradually ceased to operate. The epidemiological decrease in disease rates in the North was paralleled by a strong belief in ever-advancing modernisation and development in the 1960s on a global scale, which helped make TB invisible as a public health problem. In sum, biomedical science gradually stopped basing its future on old diseases like tuberculosis, turning instead to new, scientifically more interesting, and more profitable challenges (Kehr 2012b). In the South, though, among the newly independent nations, tuberculosis did not disappear as a major public health problem. When the incidence of multiresistant tuberculosis began to peak among poor people in New York and London, and when immigrants from the South began to be seen as a new threat to the North in a 'regime' of global health that Andrew Lakoff (2010) has recently called 'global health security', the disease began to receive renewed interest from the public health scene -from funders, to scientists, to humanitarian organisations, to disease control programmes. Though described as the 'return' (Gandy and Zumla 2003) of tuberculosis, this was in fact rather a renewed visibility of TB on a global scale, a second modernity of a disease long ignored. Just a little more than a decade ago, TB as a site of research and action began to be invested in again, with the creation of such powerful organisations as the Global Fund and the TB Alliance. Not incidentally, renewed interest in TB has emerged alongside the massive advent of HIV/AIDS in an ever-more interconnected world and in the nascent field of global health, even if the power balance between TB and HIV/AIDS remains tilted. In 2012, the Treatment Action Group observed a total spending of $US627.4 million on TB research and development (Frick and Jiménez-Levi 2013, 1) while spending for HIV research and development totalled $US2.6 billion (Smelyanskaya and Treatment Action Group 2013, 1). One could provocatively hypothesise that HIV created a new window of opportunity -not only for deadly co-infections to prosper, but also for the TB community to reactivate and reconstitute itself. What role HIV/AIDS played in the reactivation of TB research remains an open question. 
What we do know is that in the mid-1990s and early 2000s, new research and funding for this long-neglected disease was revived on a global scale: the DOTS strategy gave new visibility to TB in the 1990s in the midst of the emerging AIDS crisis; interest in the development of new TB drugs grew in the 2000s, not only due to bacterial resistance to existing drugs but also due to a need for better compatibility with antiretroviral combination therapy; novel institutions like the Global Fund for AIDS, Malaria and TB were created; and - last but not least - the TB/HIV strategic framework, elaborated by a working group hosted by the Stop TB department of the WHO, came into existence (WHO Stop TB Initiative 2002).

HIV/AIDS

In stark contrast to the long history of TB with its ups and downs over the past one hundred years, AIDS is a rather young disease. AIDS is not a disease in itself but is understood to be a syndrome of immune deficiency, which disposes a person to contract or develop a number of known diseases. In many ways AIDS can be understood as the paradigmatic disease of co-infection, born out of an assemblage of many known diseases that appear in unusual circumstances and strange habitats. AIDS in itself is always already manifested through the emergence and visibility of other diseases like Kaposi's sarcoma (KS) or pneumocystis pneumonia (PCP) (Preda 2005). By working through the unusual displacement of KS and PCP on bodies of young, mostly homosexual, men in the late 1970s, a new clinical picture emerged in which KS and PCP, in connection with a series of other infections, became resignified as symptoms of a new syndrome (Harden and Fauci 2012). The history of AIDS remains inextricably bound to the early years of the disease, in which the homosexual male body served as a vessel assembling the many unusual and not understandable signs of a new epidemic, establishing a strange relationship between the disease and some 'aspects of a homosexual lifestyle' (CDC 1981). The rather technical process of re-arranging and re-establishing abstract entities of a disease was accompanied by a series of accusations in which homosexual lifestyle became a crucial part of the endeavour to classify AIDS, which had been previously and informally called 'Wrath of God Syndrome' (WOGS), and was also briefly classified as 'Gay-related Immune Deficiency Syndrome' (GRIDS) (Treichler 1988, 52). Paula Treichler famously coined the term 'epidemic of signification' for the endless chain of meanings that got attached to the new and, in the beginning, inexplicable disease - obsessively cycling around the trope of the homosexual man (Bersani 1988; Crimp 1988; Watney 1987; Yingling 1997). By 1983 the US-based Centers for Disease Control had classified the new syndrome as an infectious disease with an unknown transmissible agent. They described the probable modes of transmission and characterised the syndrome through four prevalent risk groups: homosexuals, heroin users, haemophiliacs, and Haitians, the infamous '4-H' (Brandt and Jones 2000). A list of infectious diseases that likely occur in cases of AIDS was defined and predominantly used for diagnostics and screening, as blood testing only became available in 1985 (Farthing 1988). The identification of the virus is in itself a story of scientific obstacles and transnational politics (Epstein 1996). At one point in 1985, no fewer than six candidates had been identified as the virus responsible for AIDS.
Immense political pressure and mostly pragmatic reasons led to the publication of an article in Science, where the various models and candidates were merged into the well-known acronym 'HIV' (Coffin et al. 1986). But the classification of the disease was also achieved through other practices, including the geographical mappings of its origins (Crane 2011; Gallo 1987; Fassin 2007; Pepin 2011; Shannon and Pyle 1989), public health interventions (Bordowitz 2010; Crimp and Rolston 1990; Cooter and Stein 2007), and, especially, social activism. The unprecedented history of ACT UP, and many more community-based practices of protest and resistance to governmental neglect and public hysteria, shifted conceptions of global health, the relationships between doctors and patients, and the relationships between the state and recipients of health services (Aggleton, Davies, and Graham 1997; Crimp 2003; Patton 2002). With the acceptance of the viral agent HIV, AIDS became a stabilised and defined disease entity. The identification of the virus is often presented as a key moment in the history of AIDS, which led to the historically unprecedented development of scientific research and its immense funding (Oppenheimer 1988; Fee and Fox 1992). But identifying the virus also permitted the carving out of homosexuality as the initially identified causal factor for the disease. In this way, HIV served as yet another vessel to remove both public and scientific attention from social arguments, placing them instead inside the laboratory and its microbiological possibilities of intervention (Engelmann 2012). With the establishment of antiretroviral therapies (ARVs) in the mid-1990s, the image of AIDS was transformed, and its characteristic habitat was shifted from the urban centres of Northern Europe and the United States to the rural landscape of sub-Saharan Africa. Again, the very identity and structure of AIDS, or its nature, one might say, was transformed and reinvented. 'African AIDS' became a disease of the poor, predominantly heterosexual and mostly ignored throughout the rest of the world (Packard and Epstein 1991). Framed as 'Pattern 2' (Patton 2002), and shrouded under a global anaesthesia (Fassin 2007), the pandemic thrived in some countries. It was only when ARVs became available - though not equally accessible - in the early years of the twenty-first century, that the issue of distribution and health equity once again dominated the epidemic. The Treatment Action Group (TAG) and other activist organisations fought the cynical system of pharmaceutical patents, achieving the removal of trade regulations for generic ARVs in most of the highly affected countries. Today, the early 1980s can be understood as an archived history of AIDS. Through numerous practices like safe sex, educational campaigns, and the distribution of condoms; through blood testing and treatment plans; but also through visual representations, the messy and seemingly boundary-less phenomenon of a threatening pandemic was transformed into a rather fixed entity of knowledge, attached to the clear and almost incontestable aetiology of HIV. In short: the history of AIDS demonstrates how biomedicine, public health, and biosocial communities worked very hard - often with each other despite their many differences and open conflicts - to achieve the specificity of AIDS, often but not exclusively bound to the infectious agent, HIV.
Bringing TB and HIV into conversation We have shown that since the 1950s TB has been a curable disease, engendering short-term relations between sometimes highly infectious patients and health professionals through the mostly technical administration of drugs, while HIV remains incurable. Yet HIV has been largely normalised into a chronic disease through social and political change, the pharmaceutical intervention of ARVs, and long-term care relationships that are regularly accompanied by the formation of self-help groups and political activism. While tuberculosis is just recovering from its long neglect as a disease 'without a future' (Kehr 2012b), ever since its emergence as a serious global health threat in the 1990s, HIV has been a popular focal point for global health actions and funding worldwide. Death rates in the AIDS epidemic significantly declined after the distribution of ARVs became an essential cornerstone of global health endeavours. In contrast, TB has again become deadly through multiresistant and ultraresistant bacteria, and the TB community continues to struggle for funding and recognition. Disease surveillance, treatment programmes, prevention activities, and funding streams have largely operated separately for HIV and TB, thus reflecting distinct, if not incommensurable, disease identities, histories, and research communities. The geographies of disease are not quite the same, even if both diseases followed a similar path of 'tropicalization' (Rees 2014, 240). Nor do the cultural histories neatly map onto each other. While TB is still framed in terms of old age, low tech, and little potential for innovation, HIV/AIDS has long attracted state-of-the-art research, rapid change, and significant activism. Given these different scientific, historical, and cultural trajectories, how then can TB/HIV be jointly addressed by global public health efforts? When policies, guidelines, and recommendations are established to address co-infections in a collaborative manner, what are the problems that emerge? To analyse this double process of historical distinction and contemporary entanglement, Fleck's (1981) work on thought styles, thought communities, and translation is again useful. While Fleck followed how syphilis was made into a disease entity, tracing different thought styles that bridged clinical practice and the bacteriological laboratory, we seek to understand the entanglement of two diseases. As we noted earlier, Fleck used the term 'translation' to describe and understand the events that unfold when two thought collectives collaborate and communicate with each other. Applying this concept to the case of TB/HIV, two disease collectives brought together through the practical entanglement of two diseases in patient bodies, one can very well see the doubled process of corroboration and alteration described by Fleck in the case of syphilis. TB/HIV, as we will show below, is as much a new disease entity -merging and emerging out of the field of already-known entities -as it is a process in which both diseases are stabilised. Twenty years of TB/HIV co-infection Since the 1990s, there have been numerous efforts to bring together the 'disease cultures' and treatment approaches of TB and HIV, and attempts to raise awareness of co-infections on the epidemiological, clinical, and political levels. A few years before the WHO established an active protocol on TB/HIV co-infection, the CDC (1991) published the first official report on the phenomenon. 
Based on studies in the USA, for example on male inmates in state penitentiaries (Salive, Vlahov, and Brewer 1990), or on the TB prevalence in certain districts of New York City (Fairchild and Oppenheimer 1998), HIV was identified as a crucial factor for an elevated risk of TB infection. These studies showed that HIV was driving the increase in TB infections, an unprecedented finding in a setting where TB was long seen as overcome. In parallel to the geographical trajectory of both epidemics, the centre of gravity for co-infections shifted from the United States and Europe to the territories of the so-called developing world, engaging old and new actors of global health. As a result, by the early 1990s, the WHO and the World Bank had already developed strategies to 'revitalize the global efforts against tuberculosis' (Broekmans 1991). In 2004, the WHO issued its first comprehensive Interim Policy on Collaborative TB/HIV Activities to 'assist policy-makers to understand what should be done to decrease the joint burden of tuberculosis and HIV', responding to a 'demand from countries for immediate guidance on which collaborative TB/HIV activities to implement'. The WHO's TB/HIV policy is thus a practical response to the emergence of TB/HIV co-infection in the nascent field of global public health (WHO and Department of HIV/AIDS 2004, 1). But how was TB/HIV conceived of and responded to in the very first years? Assembling a combined effort: WHO reports on TB/HIV co-infection The earliest documents on TB/HIV circulated in the WHO were two articles summarizing the clinical features, diagnosis, and treatment (Raviglione, Narain, and Kochi 1992), and the epidemiology and strategies of prevention (Narain et al. 1992). Both papers were written from the perspective of the WHO Tuberculosis Programme, and both aim to survey the challenges posed by co-infection for global health professionals working on TB in the Global South. A technical guide, published in 1993 (PAHO 1993) and an early, unpublished document from the WHO follow the same trajectory: they focus on the 'implications for TB control'. The latter is marked as a report based on a loose collaboration of the WHO TB Programme and the Global Programme on AIDS, who developed the paper 'to summarize the current state of knowledge about how best to deal with TB in circumstances where HIV is prevalent or emerging' (WHO Tuberculosis Programme 1994, 1). The early years of the co-epidemic of TB/HIV were thus formed and structured by protagonists from the field of TB, rather than HIV. This could be attributed to the 'attraction' (Chabrol 2014a) that the rising field of HIV prevention and research triggered, a field far more lucrative than any other global infectious disease programme. Yet the increasing mobilisation of the TB community on issues of co-infection might also be read as a strategy in which the emergence of a new entity -TB/HIV -was used to re-establish the global focus on TB, at that time a neglected and underfunded disease. The WHO report of 1994 paints a drastic picture of both epidemics, estimating that the 1990s would see ninety million new cases of TB, with about thirty million deaths, and thirty to forty million new cases of HIV, with about ten million deaths. Co-infections were anticipated to increase within the decade from around 300,000 in 1990 to around 1.4 million by 2000. And, indeed, co-infections peaked in 2004 at 1.39 million and roughly 550,000 deaths (Getahun et al. 2010). 
In sum, sombre scenarios of a growing public health threat with high rates of mortality were literally figured up through epidemiological visions of the deadly nature of co-infection, contributing to a sense of immediate urgency. As Craig Calhoun (2004) has shown, the conjuring of such states of emergency and urgency never stand alone but are always followed by calls for intervention, which was also the case in the field of TB/HIV. The WHO report concludes by pointing to the urgent need for increased funding, staff, and resources in the already existing structures of TB prevention and treatment. Co-infection was said to be effectively containable by making TB visible again and by tackling its underfunded status. As such, the 'neglect and the allocation of resources to other health needs' should be addressed (WHO Tuberculosis Programme 1994, 10). Even more so, the co-occurrence of TB and HIV should be used to identify those places and institutions where TB guidelines were not or only partially followed: 'Any TB program's weaknesses are exposed where HIV is prevalent and are indicated by increases in TB cases and mortality' (WHO Tuberculosis Programme 1994, 23). In sum, since its beginnings, TB/HIV was not only seen as a novel practical problem, but also as an occasion to reflect on the cultural status, treatment approaches, institutional structures, and funding mechanisms of both diseases, and especially those of TB. Another consistent feature throughout comparable documents is the urgent call to apply standardised diagnostic procedures, which are said to be especially lacking in the domain of TB: 'Providing trials of anti-TB drugs to patients to see if their health improves has sometimes been attempted to obviate the need for diagnosis. TB treatment is sometimes started solely on the basis of clinical symptoms. … [I]ndeed, in most places TB aspects of health services had been so neglected that these crucial elements were weak or non-existent prior to HIV's entry into the picture' (WHO Tuberculosis Programme 1994, 7). The appearance of HIV within TB's control and treatment structures therefore re-establishes routines, 'rejuvenating' protocols and rationalities originally invented to tackle TB. TB/HIV not only complicates the treatment of each disease, but also works as a matrix through which practical problems on the ground come to the fore. The entering of HIV into TB's field of practice and problematisation works as a diagnostics of insufficiency, showing its failures, inconsistencies, and incompleteness, yet also confirming TB as a distinct disease entity with its own structures of control. Finally such early reports set out an 'agenda for collaboration' between TB programmes and AIDS programmes. A well-functioning national TB programme, built along the lines of the WHO guidelines, will be able to collaborate on all necessary levels with AIDS programmes, so goes the argument. A TB programme that cannot accomplish the basic task of achieving a comparably high cure rate, though, will not. These early reports on TB/HIV co-infection are written against the frightening backdrop of spiking AIDS rates worldwide, at a time when HAART was not established yet, but DOTS was increasingly being adopted. The proceedings of a workshop, held in May 1995, gives a detailed account of the problems at hand. The workshop was organised by the WHO's Global TB Programme and was intended to result in a new research strategy on TB/HIV co-infection. 
The main goals were to improve TB control in areas of growing AIDS and HIV prevalence; this was to be achieved by the shared involvement of groups, communities, experts, and researchers from the long-standing TB Programme and the newly constituted UNAIDS Programme (WHO Global Tuberculosis Programme 1995). At the time, TB was understood as one of the most common opportunistic infections during the development of AIDS, and thus seen as the leading killer of patients with AIDS. This particular situation of urgency challenged health professionals to develop new settings for care, complementary to the clinic, for example, in private homes. In the workshop's report, co-infection is presented as a unique chance to improve the distribution of care costs, a growing burden due to the dramatic increase in AIDS cases, to fields outside of infectious disease wards, providing an opportunity to investigate the feasibility and effectiveness of such strategies in the case of TB (WHO Global Tuberculosis Programme 1995). In the absence of a pharmaceutical solution, the control of TB in HIV-infected patients was thus seen as having great potential. Once TB was acknowledged as a major cause of death for AIDS patients, its treatment was also seen as a possible avenue for action. While the entering of AIDS into the field of TB conjured a diagnostics of insufficiency and the possibility of increased funding, the entering of TB into the field of HIV/AIDS led to a demand for collaboration and 'mutualisation', which held TB treatment communities responsible for successes and failures. Almost ten years after the early reports, and despite the introduction of ARVs into the domain of HIV treatment and prevention, many of the problems already documented in the 1990s seemed to persist. The strategic framework on TB/HIV co-infection, the WHO's Stop TB Initiative (2002), which served as the basis for the WHO's Interim Policy on TB/HIV, again criticised the one-sided focus on highly specialised and often elitist HIV clinics as the only possible place to fight both epidemics. The document suggested pursuing a more general health-care approach, in which the entanglement of both diseases and their two-way ramifications could be better acknowledged, instead of further specializing in the treatment and control of single diseases: 'Tackling HIV should include tackling tuberculosis as a major killer of [people living with HIV]; tackling tuberculosis should include tackling HIV as the most potent force driving the tuberculosis epidemic' (WHO Stop TB Initiative 2002, 17). Another WHO workshop report on TB/HIV ('Two Diseases -One Patient'), this time held in Addis Ababa in September 2004, which was convened by the Stop TB Partnership, describes the many gaps between the competing and often conflicting cultures, histories, and infrastructures of TB and HIV treatment communities: 'The different histories and cultures of the TB and HIV communities raise many challenges in achieving an effective and productive partnership' (WHO 2004, 2-3). While HAART had become a major 'game changer' in the AIDS crisis, not much else had changed in the field of TB. The practical difficulties of cooperation are paramount in the reports. Bringing the TB and HIV communities -with their respective loci of care, treatment approaches, and funding schemes -together was indeed seen as a major precondition for collaboration. 
Thinking, preventing, diagnosing, and treating HIV/AIDS, in other words, must take into account the possibility of TB co-infection, and vice versa. The strategic frameworks suggest that the coupling of TB and HIV treatment and prevention activities would not only increase funds to deliver sufficient treatment for TB, but also that the new framing of HIV through TB co-infection would help raise awareness about inequality issues among health professionals working in well-equipped HIV institutions. It is precisely in this space between specialised, rather well-off HIV clinics and notoriously underfunded general health delivery systems or disease wards (Chabrol 2014b; Livingston 2012) that TB/HIV co-infection becomes a key issue at the beginning of the twenty-first century. Another recent report by the international partnership ACTION (2009, 7) underlines this connection: 'it has become crystal clear that effective HIV/AIDS programs must address TB as the disease most likely to kill people living with HIV. Despite a wealth of evidence and clear guidance, however, a concerted, integrated response to the coepidemic has yet to coalesce: in 2007, WHO estimates that worldwide only 2 percent of people with HIV were screened for TB'. Looking chronologically at these developments, we can see a turning point in how TB/HIV co-infection is problematised in the early 2000s, notably when the issue of cooperation becomes more and more focused on questions of treatment, especially in the Global South. The slogan 'Living with HIV, dying of TB' marks the cruel reality of suffering from potentially deadly TB disease while controlling one's HIV infection through ARVs. Often framed as a chronic shortage of resources for the treatment and identification of TB, the phenomenon of co-infection thus helped to reveal the practical importance of the different cultures of care, regimens of intervention, and politics of treatment in which both diseases were constituted and are contained. A critical analysis of the increasing convergence of health programmes and community efforts in the domain of TB/HIV since 2004 had to acknowledge again and again the vast imbalance between a powerful global scene of AIDS research activities and activism and the 'weak advocacy and anaemic research funding' for TB (Harrington 2010). This situation continues to affect the ways each disease is approached, and, as a consequence, the impossibilities of cooperation and convergence. In 2009, reports on TB/HIV co-infection became increasingly alarming. They started to openly criticise the big donors of global health like the Bill & Melinda Gates Foundation and the Global Fund for having long ignored the burden of TB/HIV co-infection. Donors were specifically accused of having failed to implement almost all of the WHO policies developed since 2004, and of having continuously insisted on the reproduction of existing funding schemes along the lines of single-disease concepts (ACTION 2009, 4). This short survey of WHO policy reports and working group documents shows that the entanglement of TB/HIV remains a hotly debated issue with no easy solutions in terms of treatment and prevention, despite the many 'shoulds' and 'woulds'. Despite a twenty-year effort to raise awareness of co-infection, and despite the overwhelming evidence that TB is, in principle, a treatable and curable disease even when a co-infection with HIV occurs, TB remains the number one killer of people living with AIDS and HIV (Getahun et al. 2010).
Yet it is also clear that despite the many policies and advocacy efforts, TB and HIV are still largely conceived of and managed as two distinct disease entities, associated with distinct treatment trajectories, different care practices, and unique politics of public health. Our article does not aim to resolve this ongoing problem of public health systems around the world when co-infections occur. Nor does it point towards possible answers to the complicated questions of collaboration between two historically and biologically different diseases and their treatment and research communities. Instead, we aim to open up a field of inquiry and pose new questions in relation to TB/HIV co-infection, to go beyond a concept of co-infection as the complex and problematic sum of two diseases. We also aim to go beyond diagnostics of insufficiency, which result in largely prescriptive policies of disease control, often written in the conditional tense. Rather, by demonstrating the persistence of single-disease concepts alongside the emergence of TB/HIV co-infection, we want to ask how TB/HIV as a merged entity alters and stabilises each disease at the same time, and then open up some propositions for future research. Beyond addition, towards alteration: TB/HIV as heuristic lens The reports and publications above share two very general ambitions. First, all of the authors and institutions involved wish to create awareness of a new entity, one that is relevant to public health policy making, scientific research, and medical practice. Second, this endeavour is accompanied by various efforts to bring two very different disease communities in touch with each other, and to establish both an understanding as well as a strong sense of mutual dependency. These parallel processes are paradoxical: the creation and stabilisation of a new entity is used to challenge the notion of single diseases and their communities as much as it is used to highlight the features, benefits, and structural problems of each disease community. While TB/HIV is crafted to become a focal point of politics, funding, and medical intervention, neither TB nor HIV is dissolved as an independent entity. On the contrary, the published reports often point to the mutual benefits of collaboration for the treatment and prevention of each disease in their own respective fields. Adding together two diseases does not simply lead to a new amalgamated version, in which the old diseases dissolve and a new entity appears. Co-infection as a phenomenon challenges the way we think about single diseases as distinguishable entities, as its appearance also stabilises each disease as a distinct and specific entity, reinscribing differences rather than collapsing them. On the one hand, TB/HIV and its associated practical problems figure as a constant reminder to health professionals of the historically dense local specificities of and national differences between treatment and prevention programmes, and also the neglected issues of inequality and poverty worldwide. TB/HIV as a new entity thus conjures once again, with force, the figure of the complex patient and her lived experience, a figure that always already evades abstract politics, solutions, and concepts, dreamed up in big institutions, scientific laboratories, and global economies. 
Co-infection thus grounds public health professionals in their efforts of introducing programmatic changes or implementing guidelines and procedures, because it works as a constant reminder that implementation is rarely a problem of implementation alone, but more typically one of adaptation, translation, and reconfiguration. It is no accident that Farmer and colleagues (2013) open the introduction to their recently published edited volume Reimagining Global Health with a detailed description of a young man, living in a sub-Saharan village, suffering badly from both HIV and TB. The figure of the suffering patient vividly illustrates that the global health project continues to fail to address the intricate complexities of treatment and prevention as they take place in real-life circumstances, shaped by conditions of poverty, inequality, and colonial history. On the other hand, the very failure to address TB/HIV properly as a new, amalgamated phenomenon, as was argued in ACTION's 2009 critique of major global donors, shows the persistence of TB and HIV as established categories of single diseases with their own trajectories, communities, and assigned professionals and programmes. Invoking the need for communication and coordination both references the difference between TB and HIV and paradoxically stabilises each, insomuch as each is re-essentialised. So while we have pointed out numerous ways in which AIDS and TB came to be stabilised, and to a certain extent normalised, entities in the realm of global public health, we argue that co-infection might also be understood as an additional factor, one that simultaneously challenges the single-disease concept as much as it solidifies its very nature. Revisited against the background of medical history, the thoughts of Georges Canguilhem (1978) might prove helpful to better understand what is at stake here. The specificity of the single disease proves desirable again and again in the realm of medicine and public health because it allows clinical as well as societal discourse to enfold an abstract but graspable object of thinking that is clearly distinguishable from other normal aspects of life. What Canguilhem and others have called the 'ontology of a disease' allows us to understand it as an entity that has acquired a qualitative difference from what is healthy, normal, and sustainable. Losing this ontological quality leads to complexity, allows for speculation, and distorts categories as well as framings and names. TB/HIV co-infection could be understood as doing both: establishing the ontology of both diseases, while continuously pointing to the contingency of their making. We have modestly started to address TB/HIV as a branded and historically localisable concept. Yet much more epistemological work is needed to evaluate the ramifications, shortcomings, and chances of co-infection as a concept. This work is necessary, we believe, not only to be able to think differently about this pressing public health problem, but also to broaden current historical, sociological, and anthropological analysis in the field of global health, which still largely follows distinct disease or treatment entities as well as concepts of co-infection that do not challenge them epistemologically. TB/HIV presents itself as both a singular and an exemplary problem, and as such is an important point of entry to such new forms of analysis.
First, TB/HIV allows for a concrete approach to phenomena of co-infection while touching on much larger epistemological issues of biosocial entanglements. Single-disease concepts as objects of knowledge are the modus operandi in all kinds of disciplines, ranging from clinical medicine to public health to anthropology to history. What happens to this 'gold standard' (Timmermans and Berg 2010), when diseases mingle and create both epidemiological as well as epistemological interferences, a process that can be traced through the example of TB/HIV in practice as well as in theory? Second, TB and HIV are both archetypical diseases of what came to be known as 'global health'. It is an increasingly vast field of actors, interventions, and knowledge, where a balance between trends of universalisation and localisation needs to be mirrored in analysis, where locally and historically contingent and contextualised practices turn into universal approaches and brands, and where knowledge travels and is adapted worldwide. TB and HIV are diseases with globally standardised treatment and prevention schemes, which are yet always very much dependent on the local social, political, and economic context of their implementation. With the example of TB/HIV, one can reverse the analysis of 'implementation problems' from the ground up -to study practical problems not as implementation problems but as pragmatic problems of clinical medicine and public health struggling to localise and adapt global categories. The practical struggle to treat co-infections brings locality to the fore, and thereby allows articulation of the perpetual collision of local circumstances and global standards. What if the problem is not implementation but the way co-infections in particular and infectious diseases in general are conceived of in the first place? How can the de facto treatment for TB/HIV and other co-infections in clinical and public health settings help us to differently conceive of the concept of TB/HIV in particular and of co-infection in general? Third, TB and HIV/AIDS have both been subject to a vast amount of scientific research and literature, medical as well as historical, anthropological, and sociological. As 'menaces of mankind', they have altered and structured the fabric of societies, fomented cultural imaginations, and laid the ground for biological citizenships, and for political and therapeutic subjectivities. The question remains: how does the brand 'TB/HIV' alter perspectives on the many facets of both diseases in the field of medical humanities, and how can these alterations be captured as a way to conceptualise an epistemology of co-infection? If we return to the 2014 World AIDS conference, where co-infection was brought up as the coexistence of two pathogens that need to be understood in the complexity created by their coappearance, it is remarkable to see in the outlines of the conference programme how deeply the communities of TB and HIV have collaborated and corroborated already. Treatment as Prevention (TasP) has become one of the fundamental paradigms to reinvent prevention in the field of HIV, an approach that has been the basis of TB control since the 1970s in the Global North, namely as treatment of latent TB infections. 
Pre-Exposure Prophylaxis (PrEP) has drastically changed the overall focus on condoms as the only tool for preventive social behaviour, an approach that is paralleled in the field of TB through a renewed focus on pharmaceutical development and indeed a second pharmaceuticalisation of the disease in the wake of multiresistant bacteria (Kehr and Condrau forthcoming). Now, TasP and DOTS in homes and houses rather than clinics appear to be slowly shifting the paradigms of HIV prevention and treatment, and the traces of TB treatment routines, especially DOTS, seem to work as a role model (Farmer et al. 2001; Holt et al. 2012). Also, the persistence of TB as the number one killer of people living with HIV has led to a slow increase in research and development activity over the last decade. New pharmaceutical substances like PaMZ (PA-824-moxifloxacin-pyrazinamide) claim to further control and regulate the occurrence of TB and to drastically shorten the treatment timeframes, spurring the TB Alliance, a nonprofit organisation advocating for the development of new anti-TB drugs, to frame it as a 'Brave New World for TB'. Here, the high-tech biomedical intervention paradigm that HIV is based on has started to replace the regimes of slow treatment and direct surveillance in the realm of TB. This shift might in part be attributed to the emergence of multiresistant and ultraresistant tuberculosis, but could also be thought of as being influenced by the entrance of TB into the realm of HIV/AIDS and vice versa, and the problematisation of both infections as diseases of global health with its focus on pharmaceutical solutions. In sum, TB/HIV co-infection is an opportunity to further investigate why single-disease concepts have become such a crucial way of writing the history and present of medicine, of organising conferences and structuring publications, when in fact diseases and their communities are always messy on the ground and inescapably engaged with each other. Breaking out of a single-disease framework - at least in the social sciences and humanities - might thus entail an expansion of analytic perspective as well as propel new fields of research.

Conclusion

Following entanglements rather than separations, thinking about commonalities rather than differences, and tracking actors rather than their rhetoric would be among the first steps to directly engage with co-infections and their heuristic ramifications on the ground. On a practical level, three lines of research would help to tackle the epistemological obstacles of and research opportunities for TB/HIV as well as other co-infections: 1. Historicise TB/HIV as a branded concept in the internationalised field of public health, in the North and in the South, in order to get a better understanding of this 'new entity' in biomedicine as well as the corresponding actors, institutions, and research. Processes of localisation and universalisation, standardisation and specification, and corroboration and alteration should be taken into account. In parallel, the history of each disease, TB and HIV, should be reopened for investigation, in order to understand how the advent of HIV/AIDS influenced TB treatment and the TB community, and vice versa, to trace the genealogies of HIV/AIDS treatment and prevention through the lens of TB. 2.
Investigate the 'histoire croisée' (intersection history, Werner and Zimmermann 2006) of treatment and prevention approaches that go beyond a single-disease framework, in order to understand the communication, traffic, translation, and corroboration of thought collectives and thought styles in a reflexive manner. An initial entry point for new empirical research would be to investigate the histoire croisée of DOTS and HAART in the 1990s and 2000s, which would also serve as a fresh contribution to the recent history of the field of global health, its scientific logics, expert communities, and political economies. The same could be done with PrEP and treatment for latent TB - to interrogate not only the circulation of knowledge and practices between disease communities and treatment approaches, but also the differing logics of public health in the Global North and the Global South, as well as the political, economic, and scientific stakes involved. 3. Develop more ethnographies of joint TB/HIV treatment and prevention initiatives - in offices and in clinical wards, in prevention centres and activist cafés, in laboratories and in homes - to trace how TB/HIV as a practical problem is conceived, managed, and treated by policy makers, doctors, and nurses and, last but not least, encountered, fought, and endured by millions of patients across the world today. Following the practices, trajectories, and epistemological stakes of co-infections might then allow for the reproblematisation of some very common features relevant to the clinic, to public health, and to the field of medicine in general. Doing so will draw the gaze to those processes in which diseases are constituted, between the realm of societal assumptions, clinical manifestations, public health policy, research funding, complex and unusual symptoms, and abstract yet enduring tables of disease classifications. As a lived and treated co-infection, TB/HIV adds complexity to clinical, epidemiological, and political ways of handling the health risks for which both diseases are jointly responsible. As such, it is exemplary for everyday problems in the clinic, where standardised treatment guidelines following the logic of single diseases encounter multi-morbidity and complex syndromes, where 'doctoring' (Mol 2008), 'improvisation' (Livingston 2012), and adaptation are the rule rather than the exception of everyday practice. In this way, co-infections are more than an additive deadly duo: they are epistemological obstacles and analytic levers at the same time, with the potential to substantially enrich social science scholarship in the realm of global health.
13,696.2
2015-04-14T00:00:00.000
[ "Medicine", "Philosophy" ]
Nonuniform Heat Transfer Model and Performance of Molten Salt Cavity Receiver : The temperature distribution and thermal efficiency of a molten salt cavity receiver are investigated by a nonuniform heat transfer model based on thermal resistance analysis. For the cavity receiver MSEE at Sandia National Laboratories, the thermal efficiency in the experiment is about 87.5%, and the calculated value of 86.93–87.79% by the present nonuniform model fits very well with the experimental result. Different from the uniform heat transfer model, the receiver surface temperature in the nonuniform heat transfer model is remarkably higher than the backwall temperature. The incident radiation flux plays the primary role in the thermal performance of the cavity receiver, and the thermal efficiency approaches its maximum at the optimal incident radiation flux. In order to increase the thermal efficiency, various methods are proposed and studied, including heat convection enhancement by an increase of flow velocity or a decrease of the tube diameter and number of tubes in the panel, and heat loss reduction by a decrease of the view factor, surface emissivity and insulation conductivity. According to the calculation results of different modes of the nonuniform heat transfer model, the thermal efficiency of the cavity receiver is reduced by the nonuniform heat transfer caused by variable fluid temperature or variable circumferential temperature, so the thermal efficiency calculated with variable fluid temperature and variable circumferential temperature is lower by 0.86% than that calculated with average fluid temperature and bilateral uniform circumferential temperature.

Introduction

Solar thermal power [1] is a very promising technology for clean and renewable energy. The heat receiver [2] is key equipment for energy conversion from solar radiation to thermal energy, and it directly affects the operating temperature and thermodynamic efficiency of solar thermal power. Since a heat receiver with higher incident radiation flux can have higher efficiency and a smaller receiver area, the allowable incident heat flux has been increased during the development of solar thermal power. In the 1980s, the allowable incident radiation flux of a water/steam receiver was nearly 300 kW m−2 in Solar One [3] and CESA-1 [4]. In the 1990s, the incident radiation flux of a molten salt receiver increased to about 850 kW m−2 in Solar Two [5]. In this century, the incident radiation flux for an air receiver can be more than 1 MW m−2 [6]. Though receivers with high incident radiation flux have been widely investigated, the optimal incident radiation flux still needs further research. The heat losses of a heat receiver caused by convection and radiation have been widely studied in the available literature. Clausing [7] studied convective heat loss from a cavity solar central receiver. Reddy and Kumar [8] studied combined laminar natural convection and surface radiation heat transfer in a modified cavity receiver of a solar parabolic dish. Prakash et al. [9] reported natural convection and radiation heat losses in a solar receiver. Cui et al. [10] studied the combined heat loss of a dish receiver with a quartz glass cover. Wang et al. [11] considered the optical loss of the receiver with a parabolic solar concentrator. Msaddak et al. [12] analyzed combined natural convection and radiation heat losses in an open rectangular solar cavity receiver by the Lattice Boltzmann method.
The molten salt receiver is currently one of the most promising receivers, and the convective heat transfer of molten salt inside the absorber tube directly affects the receiver efficiency. Hoffman and Lones [13] measured the heat transfer of mixed molten salts NaNO2-KNO3-NaNO3 in a circular tube. Silverman et al. [14] obtained the forced convective heat transfer performance of molten-fluoride salts LiF-BeF2-ThF4-UF4 and NaBF4-NaF. Lu et al. [15] experimentally investigated the transition and turbulent convective heat transfer of molten salt in a spirally grooved tube. Lu et al. [16] further reported the enhanced heat transfer performance of a molten salt receiver with a spirally grooved tube. Liu et al. [17] compared the heat-transfer characteristics of solar salt, Hitec and liquid sodium in a solar receiver tube with nonuniform heat flux. Heat receivers with optimal conditions have been designed and investigated by many researchers. Neber and Lee [18] designed a high-temperature cavity receiver for residential-scale concentrated solar power. Steinfeld and Schubnell [19] studied the optimum aperture size and operating temperature of a solar cavity receiver by a semi-empirical method. Montes et al. [20] proposed an optimal fluid flow layout to improve the heat transfer in the active absorber surface of solar central cavity receivers. Roux et al. [21] used the optimized cavity receiver for the direct solar thermal Brayton cycle. Albarbar and Abdullah [22] proposed an optimal design for a 20 MWe solar power plant external receiver in northeast Saudi Arabia. In recent years, many novel structures of the solar receiver have been proposed and investigated. A solid particle receiver [23] is an effective receiver because of its high heat capacity and heat transfer coefficient. Nie et al. [24] investigated the properties of solid particles as a heat transfer fluid in a gravity-driven moving bed solar receiver. Sarafraz et al. [25,26] proposed a microchannel solar thermal receiver, and then studied its thermal and hydraulic performance. Sedighi et al. [27] developed a novel high-temperature, pressurized, indirectly-irradiated cavity receiver. Yu et al. [28] proposed a semi-cavity reactor heated by a solar dish system, and investigated its thermochemical storage performance. Corgnale et al. [29] modeled a direct solar receiver reactor for the decomposition of sulfuric acid in thermochemical hydrogen production cycles. Besides novel structures, novel heat transfer fluids have also been applied to solar receivers and solar thermal power. Duniam et al. [30] proposed the sCO2 Brayton cycle for concentrated solar power plants, and sCO2 can be an important heat transfer fluid for the receiver, heat exchanger and power cycle. Guo et al. [31] analyzed the thermodynamic performance of CO2-based binary mixtures within the molten salt solar power tower system. Goodarzi et al. [32] and Sarafraz and Safaei [33] used nanofluids in the solar cavity and evacuated tube solar collectors. Many researchers [17,34] have designed their cavity receivers by the thermal resistance model with uniform fluid and surface temperatures, but the uniform model ignores the large wall and fluid temperature differences in a practical receiver. On the other hand, direct simulation of such a cavity receiver requires too great a calculation cost because of the many receiver tubes and the complex structure of the receiver.
The main aim of this article is to propose a nonuniform heat transfer model of a cavity receiver by considering the circular tube structure, variable fluid temperature and variable circumferential temperature; this model can present nonuniform temperatures and thermal parameters, but needs little calculation cost. By using the nonuniform heat transfer model based on the thermal resistance model, the heat loss and thermal efficiency of the receiver will be further analyzed under different incident radiation flux caused by the receiver area and incident energy power, different flow velocity and view factor, etc. By maximizing the thermal efficiency, the optimal incident radiation flux is obtained. In addition, the temperature distribution and thermal efficiency of the molten salt cavity receiver calculated from the uniform heat transfer model and the nonuniform heat transfer model are further compared and analyzed.

Figure 1 presents the basic structure and the heat absorption model of the cavity receiver. Inside the receiver, the incident radiation flux is absorbed by many absorber tubes. The receiver is surrounded by many receiver panels, and an aperture is left for the incident radiation flux. Each receiver panel has several circular absorber tubes. Outside the receiver panel, the insulation layer is used to reduce heat loss. In the present article, the incident radiation flux on the receiver panel is assumed to be uniform. For the uniform heat transfer model, fluid temperature and wall temperature are assumed to be uniform. For the nonuniform heat transfer model, the receiver surface temperatures on the front and back sides are different, and variable fluid temperature along the flow direction and variable circumferential temperature can be further used.

Nonuniform Heat Transfer Model of Cavity Receiver

According to Figure 1, the thermal resistance model of the cavity receiver can be described as Figure 2. By using the energy balance law, the incident energy power Q_in is equal to the sum of the absorbed energy power Q_ab, the reflective heat loss through the cavity aperture Q_ref, the radiation heat loss through the cavity aperture Q_rad, the convective heat loss through the cavity aperture Q_c, and the conductive heat loss through the insulation layer Q_con, and this can be expressed as:

Q_in = Q_ab + Q_ref + Q_rad + Q_c + Q_con    (1)

where the incident energy power Q_in is mainly dependent upon the solar concentrator system, the solar direct irradiance and the receiver.
In the basic thermal resistance model of the cavity receiver (Figure 2), T_s is the surrounding temperature; T_w and T'_w are the receiver surface temperature and inner wall temperature on the front side; T_f is the fluid temperature; T_bw and T'_bw are the receiver outer wall temperature and inner wall temperature on the back side; and T_oi is the outer wall temperature of the insulation layer. The thermal efficiency of the cavity receiver can be calculated as:

η = Q_ab / Q_in    (2)

The incident energy power can be described as:

Q_in = S_r · I    (3)

where S_r is the receiver area and I is the average incident radiation flux. In this article, the receiver area S_r denotes the total inner surface area of the receiver panel instead of the surface area of the circular tube, as illustrated in Figure 1. The reflective heat loss through the cavity aperture can be calculated as [35]:

Q_ref = k F_r Q_in    (4)

where k is the reflectivity of the receiver surface, S_ap is the aperture area and F_r is the view factor from the receiver surface to the aperture.
In the cavity receiver, the aperture is a flat surface and the cavity together with the aperture surface is an enclosure, so the view factor from the aperture to the receiver surface is F_ap = 1 and F_r S_r = F_ap S_ap [36], which gives F_r = S_ap / S_r. The radiation heat loss through the cavity aperture, i.e., the radiation heat transfer between the aperture and the receiver surface, can be calculated as the net radiation exchange between the two surfaces [36] (Equation (5)), where T_w means the receiver surface temperature on the front side of the cavity and T_s is the surrounding temperature. Since the emissivity of the aperture is ε_ap = 1, Equation (5) can be rewritten as:

Q_rad = ε_e σ S_r F_r (T_w^4 − T_s^4)    (6)

where the effective emissivity is ε_e = ε / (ε + F_r − ε F_r). The convective heat loss through the cavity aperture mainly includes the natural convection caused by the fluid density difference and the forced convection caused by wind. For the cavity receiver, the convective heat losses caused by natural convection and forced convection are both important, and the mixed convective heat loss is obtained from them as in [34], where Q_nc and Q_fc denote the heat losses of natural convection and forced convection through the cavity aperture. The heat loss of natural convection can be correlated as in [37]; Equation (9) is applicable for 10^5 < Gr < 10^12. The heat loss of forced convection (wind) through the cavity aperture can be correlated as [37]:

h_fc = (λ_a / H_ap) Nu_fc = (λ_a / H_ap) · 0.0287 Re^0.8 Pr^(1/3)    (11)

where H_ap is the characteristic length of the receiver aperture, λ_a is the heat conductivity of air, and Nu_fc is the Nusselt number of forced convection through the cavity aperture; Equation (11) can be used for wind velocities u < 20 m/s [37]. In Equations (8) and (11), the characteristic temperature is (T_w + T_s)/2. The absorbed energy Q_ab transferred by the fluid inside the absorber tube is mainly determined by the absorbed heat flux q_abf on the front side of the absorber tube and the heat loss flux q_lob on the back side of the absorber tube, as illustrated in Figure 1. In a practical cavity receiver, the heat flux changes in the circumferential direction. Our previous research [38] investigated a receiver pipe with variable radiation flux in the circumferential direction, and found that the average wall temperature and thermal efficiency along the semi-circumference were almost equal to those parameters calculated by the average energy flux of the semi-circumference. As a result, the absorbed heat flux q_abf and the heat loss flux q_lob can be calculated by the average heat fluxes on the front side and back side of the absorber tube in the present article. Because the absorber tube is a circular tube, the area of the front or back side of the absorber tube wall is π/2 times the receiver area [39]. As a result, the absorbed energy can be calculated from these two fluxes. The absorbed heat flux on the front side of the absorber tube is determined accordingly, where T'_w is the inner wall temperature of the absorber tube on the front side of the receiver, and the heat transfer coefficient of the absorber tube wall follows from the tube geometry [36], where D and d are the outer and inner diameters of the absorber tube, and λ_p is the heat conductivity of the tube wall. For a fully developed turbulent flow in the absorber tube, the Dittus-Boelter correlation [39] can be used to calculate its heat convection, with

Nu_f = 0.023 Re^0.8 Pr^0.4    (17)

where Nu_f is the Nusselt number of the molten salt flow inside the absorber tube and T_f is the fluid temperature. Equation (17) is applicable for 10^4 < Re < 1.2 × 10^5, 0.7 < Pr < 120 and L/d > 60. In the present article, 10^4 < Re < 10^5, 4.4 < Pr < 12.6 and L/d > 100.
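To make the aperture-loss and in-tube convection terms concrete, the short Python sketch below evaluates the pieces that are fully specified above: the view factor F_r = S_ap/S_r, the effective emissivity ε_e, the radiation loss of Equation (6), the forced-convection coefficient of Equation (11), and the Dittus-Boelter Nusselt number of Equation (17). All numerical inputs (aperture area, wall temperature, Re, Pr) are illustrative assumptions rather than the paper's operating point.

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def view_factor(S_ap, S_r):
    """View factor from the receiver surface to the aperture, F_r = S_ap / S_r."""
    return S_ap / S_r

def effective_emissivity(eps, F_r):
    """Effective emissivity eps_e = eps / (eps + F_r - eps * F_r), as given in the text."""
    return eps / (eps + F_r - eps * F_r)

def radiation_loss(eps, F_r, S_r, T_w, T_s):
    """Radiation loss through the aperture, Equation (6):
    Q_rad = eps_e * sigma * S_r * F_r * (T_w^4 - T_s^4)."""
    return effective_emissivity(eps, F_r) * SIGMA * S_r * F_r * (T_w**4 - T_s**4)

def h_forced_aperture(lam_air, H_ap, Re, Pr):
    """Forced-convection coefficient through the aperture, Equation (11),
    valid for wind velocities u < 20 m/s."""
    return lam_air / H_ap * 0.0287 * Re**0.8 * Pr**(1.0 / 3.0)

def nusselt_dittus_boelter(Re, Pr):
    """Dittus-Boelter correlation for the molten salt being heated in the tube,
    Equation (17): Nu = 0.023 * Re^0.8 * Pr^0.4 (heating exponent 0.4 assumed)."""
    return 0.023 * Re**0.8 * Pr**0.4

if __name__ == "__main__":
    F_r = view_factor(S_ap=17.0, S_r=21.2)      # aperture area assumed for illustration
    Q_rad = radiation_loss(eps=0.80, F_r=F_r, S_r=21.2, T_w=773.0, T_s=293.0)
    Nu = nusselt_dittus_boelter(Re=5.0e4, Pr=8.0)
    print(f"F_r = {F_r:.3f}, Q_rad = {Q_rad/1e3:.1f} kW, Nu = {Nu:.0f}")
```

With these assumed inputs the radiation loss comes out at a few hundred kilowatts, the same order as the values reported later for the MSEE receiver, which is the only point of the example.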
Combining Equations (13) and (15), the heat transfer from the outer wall surface to the fluid on the front side can be calculated as Equation (18) [36]. Similar to Equation (18), the heat transfer from the outer wall surface to the fluid on the back side can be calculated as Equation (19), where T_bw means the wall temperature of the absorber tube on the back side. The conductive heat loss Q_con is determined by the insulation layer and the heat transfer at its outer surface, where T_oi and ε_oi are the outer wall temperature and the emissivity of the insulation layer, δ is the thickness of this insulation layer and h_i is the convective heat transfer coefficient outside the receiver insulation layer. The convective heat transfer coefficient outside the receiver insulation layer can be correlated as in [35] (Equations (21)-(23)), where H_r is the receiver characteristic length, Nu_ifc is the Nusselt number of forced convection outside the receiver, Re is the Reynolds number for wind, and the characteristic temperature is (T_oi + T_s)/2. Equations (21)-(23) are applicable for mixed convection with 0.01 < Gr/Re^2 < 10. The heat transfer of the receiver on the back side can be directly calculated by Equations (19)-(23), and the corresponding energy balance is Equation (24). The heat transfer of the receiver on the front side can be calculated from Equations (1) and (3)-(18), and the corresponding energy balance is Equation (25). The heat transfer of the receiver on the front and back sides is calculated from Equations (24) and (25), and then the thermal performance of the receiver can be analyzed. Winter et al. [35] proposed a uniform model with uniform surface temperature (T_bw = T_w), in which the heat transfer from the receiver surface to the molten salt is described by Equation (26). By using the nonuniform heat transfer model in this article, the heat transfer from the receiver surface to the molten salt can be described from Equations (2), (12) and (18) as Equation (27). Comparing Equations (26) and (27), the temperature difference inside the absorber tube calculated by the nonuniform heat transfer model considers the effects of conductive heat loss, thermal efficiency and the circular tube structure. In available design processes [17,34], the fluid temperature and surface temperature of the cavity receiver are normally assumed to be uniform. In a practical cavity receiver, the fluid temperature and surface temperature increase along the flow direction because of heat absorption, so the temperature variation should be further considered. Inside the absorber tube, the energy transport equation along the flow direction is written for each tube element, and the average thermal efficiency of the receiver is obtained by integrating along the tube, where L is the length of the absorber tube. Besides the variable fluid temperature along the flow direction, the wall temperature along the circumference on the front side of the receiver changes remarkably with different incident energy flux. In order to consider the heat transfer performance along the circumference, the local energy balance on the front side should be further considered. The local incident energy flux is obtained from the average incident flux and the local area ratio R_l of the receiver surface to the front tube wall at incident angle θ, with R_l = cos θ. Similar to the incident energy flux, the reflective heat loss and radiation heat loss mostly transfer between the receiver aperture and the receiver surface. From Equations (3)-(6), the local reflective heat loss flux and radiation heat loss flux on the front tube wall can be calculated, where T_w,l is the local wall temperature. The convective heat loss is mainly dependent upon the heat transfer coefficient and the surface area.
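A small worked check of the local quantities just introduced, using only the stated relations (the local flux is written as the product of the average flux and the local area ratio), shows that the cosine projection is consistent with the mean area ratio R = 2/π quoted below:

```latex
% Local incident flux on the front tube wall, from the local area ratio R_l = cos(theta):
q_{\mathrm{in},l}(\theta) \;=\; I \, R_l \;=\; I \cos\theta ,
\qquad \theta \in \left[-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right].
% Averaging R_l over the front half-circumference recovers the overall area ratio R:
\bar{R} \;=\; \frac{1}{\pi}\int_{-\pi/2}^{\pi/2} \cos\theta \,\mathrm{d}\theta
\;=\; \frac{2}{\pi} \;=\; R .
```

In other words, the mean local incident flux on the front tube wall is I·(2/π), so the total power collected by the front tube wall, whose area is π/2 times the receiver area, equals S_r·I, as required by Equation (3).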
From Equations (8)-(11), the local heat fluxes of natural convection and forced convection can be calculated, where R is the area ratio of the whole receiver surface to the front tube wall, and R = 2/π. Similar to Equation (18), the local heat transfer from the outer wall surface to the fluid on the front side can be calculated accordingly. From Equations (30)-(34), the local energy balance equation along the circumference can be described as:

q_in,l = q_ref,l + q_rad,l + q_nc,l + q_fc,l + q_abf,l

The local thermal efficiency is defined as the ratio of the local absorbed heat flux to the local incident flux, η_l = q_abf,l / q_in,l.

Calculation Conditions

In order to investigate the heat transfer performance of the cavity receiver in detail, the structure and operating parameters refer to the cavity receiver MSEE in Sandia National Laboratories [40]. The receiver area S_r is 21.2 m2 and the height of the receiver is 6 m, while the absorbed energy power is Q_ab = 5 MW. The outer and inner diameters of the absorber tube are D = 0.019 m and d = 0.0157 m, and the conductivity of the stainless steel is λ_t = 19.7 W m−1 K−1. A selective coating is used in the cavity receiver, and its basic radiation parameters are as follows: k = 0.04, ε = 0.80. Solar salt (60 wt % NaNO3-40 wt % KNO3) is used as the working fluid, with temperature-dependent properties taken from [41]. The thickness of the insulation layer is δ = 0.07 m, and its conductivity is λ_i = 0.5 W m−1 K−1. In the present calculation, the flow velocity of the molten salt is u_f = 2 m s−1, and the wind velocity is u_win = 5 m s−1. The inlet and outlet temperatures [40,41] are 290 °C and 565 °C, respectively, while the surrounding temperature T_s is 20 °C.

Basic Heat Transfer Performance and Validation

In this section, the heat transfer performance of the cavity receiver is first calculated by the nonuniform heat transfer model with the arithmetic average fluid temperature based on the inlet and outlet values (427.5 °C) and bilateral uniform circumferential temperature on the front side and back side (Mode I). Table 1 presents the thermal performance of the receiver MSEE calculated by the uniform model and the nonuniform heat transfer model. For the cavity receiver MSEE, the thermal efficiency of the receiver calculated by this nonuniform model is 87.79%, while the whole heat loss is 695.9 kW. According to the experimental results of Bergan [41], the thermal efficiency of MSEE was between 85% and 90%, with an average of 87.5%, so the calculated value of 87.79% fits well with the experimental results, and the nonuniform model is reliable. Compared with the results calculated by the uniform model with Equation (26), the wall temperature of the receiver surface is remarkably lower, because the effect of the circular tube structure π/2 is considered in Equation (27). By using the nonuniform model, the heat loss, especially the radiation heat loss, is lower due to the lower wall temperature, and the thermal efficiency of the receiver is higher by 0.95%. By using the nonuniform heat transfer model with bilateral uniform circumferential temperature, the wall temperature of the receiver on the back side, T_bw, is lower than the receiver surface temperature T_w by 52.02 °C. Figure 3 presents the heat transfer performance of the cavity receiver with different receiver areas. As the receiver area ratio S_r/S_r0 decreases from 1 to 0.1, the incident radiation flux remarkably increases from 0.269 MW m−2 to 2.563 MW m−2, and the surface temperature sharply rises.
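Before examining the parametric trends in Figures 3-6, the Mode I baseline quoted above can be cross-checked from Equations (1)-(3) alone. The few lines below use only values stated in the text and simply verify that the reported efficiency, heat loss and average incident flux are mutually consistent.

```python
# Consistency check of the Mode I baseline: Q_in = Q_ab + losses, eta = Q_ab/Q_in,
# and Q_in = S_r * I (Equations (1)-(3)); all inputs are values quoted in the text.
Q_ab = 5.0e6          # absorbed energy power, W
eta = 0.8779          # reported Mode I thermal efficiency
S_r = 21.2            # receiver area, m^2

Q_in = Q_ab / eta     # incident energy power
Q_loss = Q_in - Q_ab  # total heat loss
I_avg = Q_in / S_r    # average incident radiation flux

print(f"Q_in  ~ {Q_in/1e6:.3f} MW    (text: about 5.7 MW, 5.696 MW for Mode I)")
print(f"loss  ~ {Q_loss/1e3:.0f} kW     (text: 695.9 kW)")
print(f"I     ~ {I_avg/1e6:.3f} MW/m^2 (text: 0.269 MW/m^2)")
```

The three printed values land on roughly 5.695 MW, 695 kW and 0.269 MW/m^2, matching the figures reported for Mode I to within rounding.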
When the receiver area is reduced, the heat loss first decreases as the radiating area drops, and then increases as the surface temperature rises, so the thermal efficiency has a maximum at an optimal receiver area. At S_r/S_r0 = 0.167, the heat loss approaches its minimum of 404.2 kW, and the maximum thermal efficiency can be as high as 92.54%.

Figure 4 further presents the heat loss of the cavity receiver with different receiver areas. As the receiver area S_r/S_r0 decreases, the radiation heat loss first decreases and then increases, and the reflective heat loss changes slowly, while the other heat losses remarkably drop. As a result, the heat loss proportions of natural convection, wind and conduction gradually decrease with decreasing receiver area, while the reflective heat loss proportion first increases and then decreases. On the other hand, the radiation heat loss proportion first decreases with the decreasing radiating area, and then increases at very high surface temperature. Generally, the conductive heat loss proportion is as little as 0.5%-1.7%, the reflective heat loss has a high proportion at small receiver area, and the radiation heat loss has the maximum proportion at large receiver area.

In order to obtain the absorbed energy power over a large range, the incident energy power should be changed. Figure 5 presents the heat transfer performance of the cavity receiver with different incident energy power. When the absorbed energy power increases from 5.0 MW to 50.0 MW as the incident power increases from 5.7 MW to 54.33 MW, the incident radiation flux remarkably rises from 0.269 MW m−2 to 2.56 MW m−2, and the surface temperature also sharply rises. When the incident energy power is increased, the heat loss remarkably rises, while the thermal efficiency first increases and then decreases with the rising surface temperature. At Q_in = 32.42 MW, the maximum thermal efficiency reaches 92.53%.

The variations of the receiver area or the incident energy power both mean a change of the incident radiation flux. Figure 6 presents the thermal performance of the receiver under different incident radiation flux from Figures 3 and 5. In general, the thermal performance of the cavity receiver under different incident radiation flux obtained by changing the receiver area or the incident energy power is very similar. As the incident radiation flux rises, the thermal efficiency first increases and then decreases, as illustrated in Figure 6a. By maximizing the thermal efficiency, the optimal incident radiation flux of 1.5 MW m−2 can be obtained. On the other hand, the inner wall temperature increases almost linearly with the incident radiation flux. In conclusion, the incident radiation flux is critically important for the thermal efficiency and wall temperature, and it is mainly dependent upon the receiver area and incident energy power as expressed in Equation (3). Under the present calculation conditions for the MSEE receiver, the inner wall temperature of the receiver at the optimal incident radiation flux of 1.5 MW m−2 is 590 °C, which is higher than the operating temperature range of the molten salt. So, the molten salt cavity receiver with a proper incident radiation flux should have high thermal efficiency, but the inner wall temperature adjacent to the molten salt should be below the maximum operating temperature of this molten salt. Since the maximum operating temperature of solar salt is 565 °C, the optimal radiation flux for the solar salt cavity receiver is about 1.26 MW m−2.
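The existence of an optimal incident radiation flux can be reproduced with a deliberately simplified closure: fixed-temperature losses are diluted as the flux rises, while the surface temperature, and hence the fourth-power radiation loss, grows with the flux. The sketch below is a toy model, not the paper's nonuniform model; the wall-to-fluid conductance U, the view factor and the aperture convection coefficient are assumed values chosen only to illustrate the trade-off, so the numerical optimum it reports is not the published 1.5 MW m−2.

```python
SIGMA = 5.670e-8  # W m^-2 K^-4

# Assumed closure parameters (illustrative only)
k_ref = 0.04   # surface reflectivity (from the text)
eps = 0.80     # surface emissivity (from the text)
F_r = 0.8      # view factor to the aperture (assumed)
U = 4000.0     # effective wall-to-fluid conductance, W m^-2 K^-1 (assumed)
h_c = 10.0     # aperture convection coefficient, W m^-2 K^-1 (assumed)
T_f = 700.0    # mean fluid temperature, K (about 427 degC)
T_s = 293.0    # surroundings, K

eps_e = eps / (eps + F_r - eps * F_r)

def efficiency(I):
    """Toy thermal efficiency at incident flux I (W/m^2): eta = 1 - q_loss / I."""
    T_w = T_f + I / U                               # crude front-surface temperature
    q_ref = k_ref * I                               # reflective loss per unit receiver area
    q_rad = eps_e * SIGMA * F_r * (T_w**4 - T_s**4) # radiation loss through the aperture
    q_conv = h_c * F_r * (T_w - T_s)                # convective loss through the aperture
    return 1.0 - (q_ref + q_rad + q_conv) / I

fluxes = [f * 1e5 for f in range(2, 31)]            # 0.2 to 3.0 MW/m^2
best = max(fluxes, key=efficiency)
print(f"toy optimum near {best/1e6:.1f} MW/m^2, eta = {efficiency(best):.3f}")
```

The qualitative shape, efficiency rising, flattening and then falling as the flux increases, is the same behaviour reported in Figure 6a.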
Figure 7 presents the wall temperature difference and thermal efficiency of the receiver with different flow velocity, tube diameter and number of tubes in the receiver panel. By using the nonuniform heat transfer model, the receiver surface temperature T_w is obviously higher than the wall temperature of the absorber tube on the back side, T_bw. As the flow velocity increases from 0.5 m s−1 to 5.0 m s−1, the wall temperature difference T_w − T_bw remarkably drops from 128.45 °C to 32.03 °C, while the thermal efficiency rises from 85.18% to 88.38%. As the tube diameter or the number of tubes in the receiver panel is reduced, the flow velocity increases, and the thermal efficiency rises due to the reduced wall temperature.
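The strong effect of flow velocity seen in Figure 7 follows directly from the Dittus-Boelter scaling used for the in-tube convection. As a rough estimate, with constant properties and the other thermal resistances ignored:

```latex
h_f \;\propto\; Re^{0.8} \;\propto\; u^{0.8}
\quad\Longrightarrow\quad
\frac{h_f(5~\mathrm{m\,s^{-1}})}{h_f(0.5~\mathrm{m\,s^{-1}})} \;=\; 10^{0.8} \;\approx\; 6.3 .
```

At a fixed absorbed flux, the film temperature drop q/h_f therefore shrinks by roughly the same factor, which points in the same direction as, though not exactly at the same magnitude of, the reported decrease of T_w − T_bw from 128.45 °C to 32.03 °C; the remaining difference is plausibly the unchanged conduction resistance of the tube wall.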
Generally, the heat transfer inside the absorber tube can be enhanced by increasing the flow velocity or by decreasing the tube diameter and the number of tubes in the panel; the convection and radiation heat losses are then reduced as the receiver surface temperature decreases, so the thermal efficiency can be enhanced.

Figure 8 presents the heat loss and thermal efficiency of the cavity receiver with different emissivity, insulation conductivity and wind velocity. If the emissivity of the selective coating is decreased from 1 to 0.1, the radiation heat loss remarkably decreases from 343.11 kW to 37.3 kW, while the other heat losses caused by convection and conduction change very little, so the thermal efficiency increases from 86.78% to 91.83%. As the conductivity of the insulation layer decreases from 1 W m−1 K−1 to 0.05 W m−1 K−1, the conductive heat loss decreases from 80.21 kW to 6.08 kW, while the other heat losses change very little, so the thermal efficiency increases from 86.67% to 87.88%. As the wind velocity decreases from 15 m s−1 to 1 m s−1, the heat loss caused by wind decreases from 214.99 kW to 24.65 kW, and the thermal efficiency increases from 85.81% to 88.84%.
In general, decreasing the emissivity, the insulation conductivity and the wind velocity respectively reduces the radiative, conductive and wind-driven heat losses, while the other losses change very little, so the thermal efficiency of the receiver gradually increases.

Figure 9 further presents the heat loss and energy proportion distributions under different view factors. As the view factor, and hence the aperture area, increases, the reflective, radiative and convective heat losses all rise, because a larger aperture loses more heat through convection and radiation; the conductive heat loss changes very little since the receiver area is constant. When the view factor rises from 0.1 to 1, the reflective heat loss increases from 20.77 kW to 229.88 kW, the radiative heat loss increases from 36.74 kW to 301.38 kW, and the convective heat loss increases from 123.08 kW to 204.03 kW. As a result, the energy proportions of radiation and reflection rise markedly, by factors of 6.40 and 8.98, while the energy proportion of conduction varies only within 0.21-0.23%, and the thermal efficiency decreases from 96.29% to 87.01%.

According to the calculation results above, the incident radiation flux, set by the receiver area and the incident energy power, plays the primary role in the heat transfer of the cavity receiver. The thermal efficiency approaches its maximum at the optimal incident radiation flux, but the inner wall temperature at high incident radiation flux will probably exceed the operating temperature of the molten salt, so a properly chosen incident radiation flux is essential for a high-efficiency receiver. Increasing the flow velocity, or decreasing the tube diameter and the number of tubes in the panel, raises the thermal efficiency of the cavity receiver and lowers the wall temperature difference, but the pumping power consumption also increases markedly. Decreasing the surface emissivity benefits the receiver efficiency considerably, while the insulation layer affects it only slightly.
In addition, decreasing the view factor or the aperture can increase the thermal efficiency, but too small an aperture will reduce the incident energy power.

Heat Transfer Performance with Variable Fluid Temperature

In the previous section, the cavity receiver was designed using the average bulk temperature of the inlet and outlet values (427.5 °C) and the absorbed energy power (5 MW), and the calculation results of the MSEE receiver by Mode I are described in Table 1. In a practical cavity receiver, the fluid temperature gradually increases from the inlet temperature to the outlet temperature, and the thermal efficiency changes accordingly. The heat transfer performance along the absorber tube (T f = 290-565 °C) is therefore further calculated with the incident energy power taken from Mode I (Q in = Q in,Mode I = 5.696 MW); this calculation mode is defined as Mode I 1 . Figure 10 presents the fluid temperature and thermal efficiency along the receiver calculated by Modes I and I 1 . Along the flow direction, the fluid temperature increases almost linearly, while the thermal efficiency gradually decreases. According to the calculation results in Table 2 and Figure 10, the average fluid temperature along the receiver by Mode I 1 is a little higher than that in Mode I, while the average thermal efficiency along the absorber tube by Mode I 1 is lower than that by Mode I because of the higher heat loss (a simple segment-by-segment sketch of this kind of calculation is given below).
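The following sketch mimics the structure of a "variable fluid temperature" calculation such as Mode I 1: the flow path is split into segments, each segment absorbs a share of the incident power according to a local efficiency, and the salt temperature is marched from inlet to outlet. The local efficiency curve and the heat-capacity flow rate below are simple assumptions anchored to the inlet/outlet efficiencies quoted in the text, not the paper's actual heat-loss correlations.

```python
# Minimal Mode I1-style march (assumed loss model, illustrative only).
Q_IN = 5.696e6            # W, incident power quoted for Mode I1
N_SEG = 100
T_IN, T_OUT = 290.0, 565.0
MDOT_CP = Q_IN * 0.87 / (T_OUT - T_IN)   # W/K, assumed so the salt roughly reaches 565 C

def local_efficiency(t_fluid):
    """Assumed linear decrease of local efficiency with salt temperature, anchored
    near the quoted inlet/outlet values (about 91% at 290 C and 82% at 565 C)."""
    return 0.9134 - (0.9134 - 0.8225) * (t_fluid - T_IN) / (T_OUT - T_IN)

t, absorbed = T_IN, 0.0
for _ in range(N_SEG):
    q_seg = Q_IN / N_SEG * local_efficiency(t)   # power absorbed in this segment
    t += q_seg / MDOT_CP                          # salt temperature rise
    absorbed += q_seg

print(f"outlet temperature  {t:6.1f} C")
print(f"average efficiency  {absorbed / Q_IN:6.4f}")
```

The point of the exercise is the one made in the text: because the local efficiency falls as the salt heats up, the path-averaged efficiency of such a march is slightly below the value obtained with a single average fluid temperature.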
Because the average thermal efficiency along the receiver calculated by Mode I 1 is 0.41% lower than that by Mode I, the absorbed energy power of 4.977 MW falls short of the required 5.000 MW. In order to obtain the required 5 MW, the cavity receiver is recalculated with an incident energy power of 5.720 MW; this calculation mode is defined as Mode II, as illustrated in Table 2. Figure 11 presents the fluid temperature and thermal efficiency along the receiver calculated by Mode II. Along the flow direction, the fluid temperature and the surface temperature increase almost linearly, while the thermal efficiency gradually decreases. At the inlet the thermal efficiency is as high as 91.34% with a fluid temperature of 290 °C, while it decreases to 82.25% at the outlet. In general, the heat transfer performance calculated by Mode II in Figure 11 is similar to that by Mode I 1 in Figure 10, and the detailed results are presented in Table 2. When the variable fluid temperature along the receiver tube is considered in Mode II, the average fluid temperature of 429 °C is higher than the arithmetic mean of the inlet and outlet temperatures, 427.5 °C, because the temperature distribution along the flow direction is a convex function. As a result, the average thermal efficiency of 87.41% in Mode II is 0.38% lower than the thermal efficiency calculated with the average of the inlet and outlet temperatures.

Heat Transfer Performance with Variable Circumferential Temperature

Because the incident energy flux changes along the front semi-circumference of the receiver tube according to Equation (28), the circumferential heat transfer performance is also uneven. In this section, the heat transfer along the circumference is first calculated with the incident energy power from Mode I (Q in = Q in,Mode I = 5.696 MW) and the arithmetic average fluid temperature (T f = 427.5 °C); this calculation mode is defined as Mode I 2 . From Table 3, the absorbed energy power of 4.973 MW calculated by Mode I 2 is less than the required 5.000 MW. In order to obtain the required 5 MW, the cavity receiver is recalculated with an incident energy power of 5.728 MW; this calculation mode is defined as Mode III. When the variable circumferential temperature is considered, the average thermal efficiency along the circumference calculated by Mode III is 87.29%, lower than that calculated by Mode I. Figure 12a presents the incident energy flux and absorbed energy flux along the front semi-circumference calculated by Mode III. As the incident angle θ increases from 0° to 90°, the absorbed energy flux decreases together with the incident energy flux, and their difference, the heat loss, also drops significantly. At zero incident angle, the incident energy flux and absorbed energy flux reach their maxima of 0.270 MWm−2 and 0.239 MWm−2. Figure 12b further presents the local wall temperature and thermal efficiency along the circumference calculated by Mode III. The wall temperature first decreases gradually near the perpendicularly incident region and then drops linearly, with a maximum temperature difference along the circumference of 83.62 °C. As the incident angle increases, the thermal efficiency first reaches its maximum of 88.3% at the perpendicularly incident point, then gradually decreases and finally drops sharply near the parallel-incidence region. A small sketch of this cosine-type circumferential loading follows below.
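This small sketch only evaluates the cosine-type circumferential loading Q in,l = I cos θ used in the circumferential modes; the peak intensity is the value quoted above, and the rest is elementary geometry rather than anything taken from the paper's model.

```python
# Cosine-type circumferential loading on the front semi-circumference.
import numpy as np

I_PEAK = 0.270                                   # MW/m^2, peak incident flux at theta = 0
theta = np.linspace(0.0, np.pi / 2, 1000)        # 0..90 degrees over the irradiated half
q_local = I_PEAK * np.cos(theta)

print(f"peak flux        {q_local[0]:.3f} MW/m^2")
print(f"front-half mean  {q_local.mean():.3f} MW/m^2")   # ~ (2/pi) of the peak
```

The averaged front-side flux is only about two thirds of the peak, which is why the local wall temperature and local efficiency vary so strongly with the incident angle in Figures 12 and 13.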
Receiver with Variable Circumferential Temperature and Fluid Temperature

According to the previous investigation, nonuniform heat transfer caused by a variable fluid temperature or a variable circumferential temperature decreases the thermal efficiency of the cavity receiver through higher heat loss. In this section, the heat transfer of the receiver is calculated with both variable circumferential temperature and variable fluid temperature; this calculation mode is defined as Mode IV, the nonuniform heat transfer model with variable fluid temperature and variable circumferential temperature (T f = 290-565 °C, Q in,l = I cos θ, Q ab = 5.000 MW). Table 4 presents the heat transfer performance of the cavity receiver calculated by the different modes of the nonuniform heat transfer model. When both variable circumferential temperature and variable fluid temperature are considered, the thermal efficiency of the cavity receiver is lower than that calculated by Mode II (variable fluid temperature) or Mode III (variable circumferential temperature), while that calculated by Mode I (uniform circumferential and fluid temperature) is the highest. The thermal efficiency of the cavity receiver calculated by Mode IV is 86.93%, which is also close to the experimental result.

Figure 13a presents the local wall temperature along the circumference at different fluid temperatures calculated by Mode IV. The wall temperature distribution is very similar at the different fluid temperatures, gradually decreasing as the incident angle rises. Along the flow direction, both the fluid temperature and the wall temperature increase, and the maximum wall temperature at the receiver outlet (T f = 565 °C) reaches 637.5 °C at the perpendicularly incident point. Figure 13b further presents the local thermal efficiency along the circumference at different fluid temperatures calculated by Mode IV. The local thermal efficiency distribution at different fluid temperatures is also similar: as the incident angle increases, the thermal efficiency first reaches its maximum at the perpendicularly incident point, then gradually decreases and finally drops sharply near the parallel-incidence region. Along the flow direction, the thermal efficiency decreases as the fluid temperature increases. At the receiver inlet (T f = 290 °C), the thermal efficiency reaches 91.6% at the perpendicularly incident point, while at the receiver outlet (T f = 565 °C) it decreases to 82.9% there, and the average thermal efficiency along the circumference is 81.78% because of the high heat loss at high temperature.
Conclusions and Discussions

In this paper, a nonuniform heat transfer model of the cavity receiver is established by considering the circular tube structure, variable fluid temperature and variable circumferential temperature, and the thermal performance of the molten salt cavity receiver is analyzed with it. The thermal efficiency of the MSEE cavity receiver calculated by the nonuniform model is 86.93-87.79%, which agrees very well with the experimental result. Because of the nonuniform heat transfer, the wall and fluid temperatures differ considerably, and the thermal efficiency calculated with variable fluid temperature or variable circumferential temperature is lower. The thermal efficiency calculated with both variable fluid temperature and variable circumferential temperature is 0.86% lower than that calculated with the average fluid temperature and a bilaterally uniform circumferential temperature.

The incident radiation flux, set by the receiver area and the incident energy power, plays the primary role in the heat transfer of the cavity receiver. As the incident radiation flux rises, the thermal efficiency first increases and then decreases, and the cavity receiver reaches its maximum thermal efficiency at an incident radiation flux of 1.5 MWm−2. For the solar salt cavity receiver, the optimal radiation flux is 1.26 MWm−2 because of the operating temperature limit of solar salt. The thermal efficiency of the receiver can be increased by enhancing the convective heat transfer through a higher flow velocity or a smaller tube diameter and fewer tubes in the panel. Decreasing the view factor increases the thermal efficiency by reducing the convective and radiative heat losses, and decreasing the surface emissivity and the insulation conductivity reduces the radiative and conductive heat losses, respectively.

This article establishes a novel nonuniform heat transfer model for the cavity receiver, from which the thermal performance and the optimal or enhanced parameters can be analyzed. The results can be applied to the structural design and parameter selection of molten salt cavity receivers in solar thermal power plants. The present calculation is limited by the assumption of a uniform incident radiation flux; the model and the analysis can be further improved by considering a nonuniform radiation flux, more complex cavity structures and other heat transfer fluids.
12,622.4
2020-02-24T00:00:00.000
[ "Physics", "Engineering" ]
Leptogenesis and gravity: baryon asymmetry without decays A popular class of theories attributes the matter-antimatter asymmetry of the Universe to CP-violating decays of super-heavy BSM particles in the Early Universe. Recently, we discovered a new source of leptogenesis in these models, namely that the same Yukawa phases which provide the CP violation for decays, combined with curved-spacetime loop effects, lead to an entirely new gravitational mechanism for generating an asymmetry, driven by the expansion of the Universe and independent of the departure of the heavy particles from equilibrium. In this Letter, we build on previous work by analysing the full Boltzmann equation, exploring the full parameter space of the theory and studying the time-evolution of the asymmetry. Remarkably, we find regions of parameter space where decays play no part at all, and where the baryon asymmetry of the Universe is determined solely by gravitational effects.

Introduction

In a series of recent papers [1,2] we described a new phenomenon whereby gravity drives the Universe towards a matter-antimatter asymmetry. Our main realisation was that matter and antimatter propagate differently in the presence of gravity when CP symmetry is violated. Specifically, we proved [1,2] that in translation invariant environments, CPT symmetry necessarily forces matter and antimatter to propagate identically. Conversely, when this symmetry is broken by the background geometry, e.g., an expanding Universe, and when there is a source of CP violation, matter/antimatter propagators become distinct. This causes a spectral splitting for matter/antimatter and an energy cost difference which drives the system towards an asymmetric state, facilitated by particle number-violating reactions. As in our previous papers, we shall illustrate this effect within the context of leptogenesis [3], though as will become apparent, it applies equally well in any theory with a source of CP violation and B or L violation. In this case, the Lagrangian is given by the see-saw Lagrangian, where ℓ i are the left-handed lepton doublets, φ is the charge-conjugate Higgs doublet, and N i are sterile neutrinos, written here in the Majorana basis so that N c = N. As described above, at two loops (figure 1) in a time-dependent gravitational background, lepton and antilepton self-energies are distinct, Σ ℓ (x, x′) ≠ Σ ℓ̄ (x, x′). Minimal coupling ensures that at tree-level, the strong equivalence principle holds and leptons are insensitive to curvature, but when loop effects are taken into account, two things happen. Firstly, the propagators become sensitive to CP violation contained in the Yukawa couplings, a symmetry which obviously must be broken for distinct propagation. Moreover, as described in [4,5] the screening cloud surrounding the propagating leptons causes them to acquire an effective "size" and experience gravitational tidal forces, violating the strong equivalence principle and causing the leptons to couple directly to curvature. When the sterile neutrinos are integrated out from the diagrams in figure 1, the resulting effective action contains the following CP- and strong equivalence principle-violating operator for each lepton generation: where R is the Ricci scalar and I i j = I(M i , M j ) is a loop factor depending on the sterile masses M i and M j in the corresponding diagram and which was computed in full detail in [2]. As described in refs.
[2,6], this modifies the dispersion relations of leptons and antileptons. This energy splitting together with ∆L = 2 and ∆L = 1 processes drives the system towards a non-zero B-L asymmetry, independently of the departure of sterile neutrinos from equilibrium. For cosmological spacetimes, isotropy and homogeneity mean that spatial derivatives of R vanish and eq. (3) leads to an equilibrium B-L to photon ratio of the form where K i j = (h † h) i j . In this sense, we have a mechanism satisfying all three Sakharov conditions [7], the first two of which (particle number and CP violation) are inherited from the usual see-saw mechanism. The third - usually stated as a departure from equilibrium - is provided by the time-dependence of the background itself, whose dynamical nature is probed by the lepton screening cloud. In a radiation dominated Universe as considered in this Letter, Ṙ takes the form where σ = π 2 /30g * and g * ≃ 106.75 counts the number of relativistic degrees of freedom in the plasma. Classically, the equation of state parameter w is equal to 1/3 for radiation, and so the expression (5) vanishes. However, trace-anomalies in the gauge sector give (1−3w) ≃ 10 −1 [8], allowing for Ṙ ≠ 0. Combining eqs. (4) and (5) we arrive at the result (6). A full description of the general theory of this gravitational leptogenesis mechanism and the calculation of the equilibrium asymmetry N eq B−L was given in [2]. In that work, we also made a preliminary estimate of the gravitationally induced baryon asymmetry η B based on the assumption that the lepton number violating interactions, which maintain the asymmetry at its equilibrium value, freeze out for temperatures T D for which z D = M 1 /T D ∼ 1. In order to achieve the observed value for η B , we were then led to consider very high sterile neutrino masses and decoupling temperatures at the limits of existing physical bounds. However, as we demonstrate here, a complete dynamical analysis using the full ∆L = 2 reaction cross-section shows that decoupling in fact occurs for significantly smaller values of z D . Inspection of (6) then makes it clear that the observed asymmetry is achieved for lower, conventional values of M 1 ∼ 10 10 − 10 11 GeV with correspondingly lower decoupling temperatures. Since our interest in ref. [2] was in the gravitational leptogenesis mechanism itself, we did not discuss the original mechanism whereby the out-of-equilibrium asymmetric decay rates Γ(N → ℓ̄φ) ≠ Γ(N → ℓφ) of sterile neutrinos in the region z ∼ 1 contribute directly to the B-L asymmetry. Here, we consider the coupled Boltzmann equations involving both mechanisms and discuss in some detail the parameter space of the high-energy Yukawa phases in which one or other mechanism dominates in determining the final cosmological baryon asymmetry.

The Boltzmann Equation

We now study the Boltzmann equation to take into account the effect both of sterile neutrino decays and gravitational effects. We shall work in the hierarchical limit where M 1 ≪ M 2 ≪ M 3 , so that the dynamics is dominated by the lightest sterile neutrino N 1 , in which case the relevant Boltzmann equations are the standard ones (see, e.g., [9]), where each of the number densities is normalised by the photon density and where z = M 1 /T . This is the standard set of coupled Boltzmann equations encountered in lepto/baryogenesis (see e.g., [9,10,11]) except that now, due to the gravitational interactions, we have N eq B−L ≠ 0 in the RHS of (8) in the washout term (a schematic numerical sketch of this system of equations is given below).
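The sketch below reproduces only the structure of equations (7)-(8): a decay term relaxing N 1 towards its equilibrium abundance, a washout term relaxing N B−L towards a nonzero, gravitationally induced equilibrium asymmetry that falls like 1/z 5, and a decay source proportional to ε 1. The rate functions D(z) and W(z), the normalisations and every numerical value below are simplified placeholders rather than the full kernels of the paper.

```python
# Schematic Boltzmann system (simplified rates; illustrative only).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn            # modified Bessel functions K_n

K_WASH = 1.0       # washout strength parameter
EPS1 = 1e-8        # CP asymmetry in N_1 decays (illustrative)
ALPHA = 10.0       # assumed strength of the Delta L = 2 washout ~ alpha / z^2
A_GRAV = 1e-7      # assumed amplitude of the gravitational equilibrium asymmetry

def n_n1_eq(z):                          # equilibrium N_1 abundance, -> 1 at small z
    return 0.5 * z**2 * kn(2, z)

def decay(z):                            # D(z): thermally averaged decay factor
    return K_WASH * z * kn(1, z) / kn(2, z)

def washout(z):                          # W(z): inverse decays + assumed 1/z^2 piece
    return 0.5 * decay(z) * n_n1_eq(z) + ALPHA / z**2

def n_bl_eq(z):                          # gravitational equilibrium asymmetry ~ 1/z^5
    return A_GRAV / z**5

def rhs(z, y):
    n1, nbl = y
    dn1 = -decay(z) * (n1 - n_n1_eq(z))
    dnbl = -EPS1 * decay(z) * (n1 - n_n1_eq(z)) - washout(z) * (nbl - n_bl_eq(z))
    return [dn1, dnbl]

z0, z1 = 0.1, 50.0
sol = solve_ivp(rhs, (z0, z1), [n_n1_eq(z0), n_bl_eq(z0)],
                method="LSODA", rtol=1e-8, atol=1e-14)
print("relic |N_B-L| ~", abs(sol.y[1, -1]))
```

Even in this toy form the qualitative behaviour described later emerges: the asymmetry tracks the 1/z 5 equilibrium value while the washout is fast, then freezes out once the reaction rate can no longer follow it.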
Conventionally one has N eq B−L = 0 and so any lepton asymmetry generated whilst the sterile neutrinos are in equilibrium is washed out. However, if one takes into account gravitational effects, a lepton asymmetry can be maintained even when N N 1 = N eq N 1 . The CP asymmetry in the decays and inverse decays of sterile neutrinos is characterised by ε 1 , given in terms of M i and h i j by the standard expression [3,10], where for a large hierarchy, x ≫ 1, it takes its familiar asymptotic form. We shall return to the form of ε 1 in subsequent sections. The various reaction rates can be parametrised in terms of the standard quantity K = m̃ 1 /m * [9,12,13], where m̃ 1 characterises the strength of the Yukawa interactions and v = 174 GeV is the electroweak scale. The quantity D then corresponds to the N 1 → ℓφ tree-level thermal decay width. W is the "washout term", so-called because when gravitational effects are neglected, N eq B−L = 0 and any lepton asymmetry established before the decays of sterile neutrinos is destroyed. The washout term consists of two parts. The first is given by the tree-level inverse decay rate [9]. The second part corresponds to ∆L = 2 binary scatterings ℓφ ↔ ℓφ in the s- and u-channel, and ℓℓ ↔ φφ and ℓ̄ ℓ̄ ↔ φ̄ φ̄ in the t-channel. The reaction rates for these processes are given by the quantity W = Γ/zH, written in terms of the u-averaged amplitude for the process in question. The amplitudes for s, u and t processes are denoted by the subscripts + and t respectively. Introducing the appropriate variables, the functions F and G are given in [9,12]. The delta function subtraction in the first line for F + represents the real intermediate state subtraction from the s-channel. This is to avoid the well-known double counting problem [9,11,12] where one over-counts the number of N 1 ↔ ℓφ processes by including them in the s-channel N 1 exchange. Only with this subtraction does the Boltzmann equation take the correct form, whereby no asymmetry can be generated when N N 1 = N eq N 1 . Of course, the whole point of our new mechanism is that N eq B−L ≠ 0 and so it is possible to generate an asymmetry when the sterile neutrinos are in equilibrium, but in the limit where N eq B−L → 0 we should still recover the traditional form of the Boltzmann equation. Our next task is to parametrise the amplitude (19) in terms of neutrino parameters. Firstly we note the relation where m 2 = m 2 1 + m 2 2 + m 2 3 is the sum of the neutrino mass-squares. After a little algebra we can also write the amplitude accordingly. We make the standard choice in the literature [9] and set Re(h 2 31 ) = Re(h 2 21 ) = 0, or equivalently, x 2 = x 3 = 0. Equation (36) then implies x 1 = m 1 /m 1 and the RHS of (24) simplifies to m 2 1 −m 2 1 /v 4 . Admittedly, this choice is somewhat arbitrary and its main aim is really to reduce the number of free variables, allowing for a simpler parametrisation of the theory. We shall work in this regime for the remainder of this Letter. Putting this together, the amplitudes allow us to write eq. (17), after a little manipulation, in compact form. For fixed SM neutrino masses, the amplitude becomes a function of essentially two variables 2 , M 1 and K, which ultimately depend on the details of the high-energy theory. A short calculation also shows that the delta function term in F + gives a contribution −W ID to W ∆L=2 . [Footnote 2: note that c can be written as c = m * M 1 K/(8πv 2 ).]
Parametrising the CP violation

The fundamental source of CP violation is of course the Yukawa phases contained in h i j , or more specifically, the quantities Im K 2 i j which control the strength of CP violation both in the lepton propagator and N eq B−L and also in the decays of sterile neutrinos via ε 1 . One might ask to what extent the CP violation in these two sectors is linked, and also how much each is constrained by low-energy neutrino physics. For hierarchical sterile neutrinos, M 1 ≪ M 2 ≪ M 3 , we find an expression for ε 1 which, after a little algebra, can be re-written in terms of light neutrino parameters as in [13]. We can parametrise the CP violation in this quantity by using the parameters z i , where Σ i |z i | = 1 and h̃ is the mass-eigenstate Yukawa coupling given by h̃ = Uh, where U is the PMNS matrix. The see-saw formula h̃ 2 i j v 2 /M j = m i implies that Ω is orthogonal and therefore satisfies (Ω T Ω) 11 = 1. Hence the strength of CP violation in N 1 decays can be neatly parametrised in terms of the quantities y i . One might now ask whether the size of ε 1 , or more specifically the quantities y i , uniquely constrains the CP violation appearing in N eq B−L . The answer to this question is no, as we now explain. Firstly, one should note that "CP violation" only really makes sense in the context of a particular process, since a given scattering amplitude or decay channel is determined not only by the Yukawa phases in h i j , but also by the combinations of masses M i involved in the relevant diagrams. In this sense, there will be certain regions of parameter space for which CP violation in one process is strong and simultaneously weak in another. For instance, ε 1 depends only on the Yukawa couplings via the quantity Σ j Im(K 2 i j )/M j , but this is invariant under a rescaling transformation where M * is an arbitrary energy scale. This leaves ε 1 fixed, but changes Im K 2 i j and therefore the size of CP violation in (38), in which I [i j] depends on a completely different combination of masses from those appearing in ε 1 . Instead, for M j ≫ M i we find that I [i j] has the asymptotic behaviour given in [2]. We therefore see that constraining the size of ε 1 still leaves the three quantities Im K 2 13 , Im K 2 23 and Im K 2 12 undetermined, so that the size of N eq B−L is not fully constrained in terms of y i of eq. (33). In this sense, the gravitational effect is sensitive to different details of the high-energy see-saw physics compared to the usual delayed decay picture and is less constrained by SM neutrinos. Therefore, the only reasonable constraint which can be placed on the couplings Im K 2 i j appearing in N eq B−L is that they should be perturbatively small, in the sense that K 2 i j /4π, which plays the role of a fine-structure constant, must be less than 1.

[Figure 3 caption: In the full solution (pink), we see that at early times, there is a gravitationally induced asymmetry, but the ε 1 D(N 1 − N eq 1 ) term dominates in the Boltzmann equation as we approach z = 1 and the asymmetry is determined solely by CP violating decays, with no memory of the gravitational effects at early times. The purple dotted curve, which includes only gravitational effects and neglects decays by setting ε 1 = 0, shows that decays have no effect until z ∼ 1.]

Evolution of the lepton asymmetry

We now describe the solution of the Boltzmann equations (7) and (8), highlighting the different leptogenesis scenarios that occur depending on the value of the CP violating parameter ε 1 which governs the sterile neutrino decays.
These scenarios are illustrated in figures 3, 4 and 5. In all cases, even if we start from a vanishing initial net lepton number at high temperatures, the system very rapidly attains its gravitationally-induced equilibrium asymmetry N eq B−L (z) ≠ 0. The asymmetry then tracks this equilibrium value as the Universe cools. As the corresponding rate for the lepton number-violating interactions falls (see figures 2 and 8), the system can no longer follow the extremely rapid 1/z 5 decrease in N eq B−L and the asymmetry freezes out. The region of z at which this decoupling takes place depends on the sterile neutrino mass M 1 and K, which control the washout coefficient W. In the scenarios illustrated here, decoupling takes place for small values of z, significantly below the scale z ∼ 1 − 10 at which the effects of the N 1 resonance in W and the N 1 decays are felt.

[Figure 4 caption: The other parameters are the same as figure 3, but we now take ε 1 = 10 −8 . For this value of ε 1 , the full solution is solely dominated by gravitational effects (pink curve), i.e. the decays have no effect on the relic asymmetry. This can be clearly seen by comparison with the dotted purple curve which neglects decays entirely by setting ε 1 = 0, and shows that the full solution is essentially independent of decays. From the black dashed curve, we see that taking into account decays alone does not give an accurate representation of the true solution.]

In the first scenario (figure 3), we consider maximal ε 1 ≃ 10 −6 (setting y 2 ≃ 0, y 3 ≃ 1 in (37)) as in the standard delayed-decay picture. Then, with the parameters shown, since the asymmetry generated by the out-of-equilibrium N 1 decays is larger than the gravitational effect and occurs later (for z ≳ 1), the gravitationally-induced asymmetry is washed out and the system then evolves according to the conventional decay scenario with no memory of the early-time gravitational effects. A scenario where ε 1 is smaller is shown in figure 4. In this case, although the sterile neutrino decays do generate an asymmetry as usual, this effect is smaller than the gravitationally-induced asymmetry after freeze-out. Remarkably, therefore, in this scenario the final asymmetry is completely determined by the gravitational mechanism, with the decays playing no significant role. This alters our understanding of the parameter space of leptogenesis, showing that regions which were previously believed to give an asymmetry in terms of decays are actually dominated by the gravitational mechanism. Since our main interest here is in illustrating the mechanism of gravitational leptogenesis, we now study in detail the extremal case where the CP-violating decay parameter |ε 1 | ≃ 0 is minimal. In this case, only binary scatterings contribute and the Boltzmann equation for N B−L simplifies radically. As we now see, this scenario is readily realised by choosing opposite signs for the Yukawa phases in (31), (37). This places a constraint on the high energy physics, or equivalently, via eq. (37), on the y i . Even with this restriction, there still remains much freedom in the choice of CP violation in the quantities Im[K 2 i j ] contained in (6) - for instance, eq. (43) places no constraints on the phases of K 2 23 . For simplicity, we shall set Im[K 2 23 ] = 0 and from eqs. (40) and (43) we obtain the corresponding expression. Notice that the size of the CP asymmetry is enhanced by the hierarchy between M 3 and M 1 . In what follows, we shall treat Im[K 2 13 ] as a free parameter controlling the strength of CP violation.
Putting this together, the corresponding solution of the Boltzmann equation (42) in this scenario is shown in figure 5. In this case, following the freeze-out of the asymmetry from its equilibrium value, the only further new feature is the slight dip near z ∼ 1, discussed below, associated with the local maximum of the washout rate. The key observation, however, is that even in this model the gravitational leptogenesis mechanism on its own can produce the observed cosmological baryon asymmetry for an otherwise conventional choice of see-saw neutrino parameters. For example, in figure 5 the sterile neutrino masses were chosen to be M 1 = 10 10 GeV, M 3 = 10 16 GeV and K = 1, with Im(K 2 13 )/(4π) 2 = 10 −6 . The corresponding value for the final relic baryon asymmetry follows from the frozen N B−L , where f = 2387/86 is a photon production factor and C sph = 28/70 is the sphaleron efficiency factor [9,10]. Clearly, the observed asymmetry, η B ≃ 10 −10 , can be obtained for a significant range of the parameters M 1 , M 3 , Im(K 2 13 ) and K. For example, in figure 7, we illustrate the dependence of η B on Im(K 2 13 )/(4π) 2 and K for fixed M 1 , M 3 .

Analytic solution for z ≪ 1

To gain a little more insight into these numerical solutions, recall from sec. 2 that for small z we have W ∼ 1/z 2 so that the Boltzmann equation (42) takes a simple form, where α is a constant depending on K and M 1 which can be inferred from the small-z behaviour of W given in eq. (28). Rather surprisingly for a Boltzmann equation, eq. (49) has an analytic solution for zero initial lepton asymmetry (at z 0 ), providing a nice consistency check with our numerical solutions for z ≪ 1. This analytic solution is shown along with the full numerical solution of the Boltzmann equation (42) in figure 5. As noted above, the dip in the solution at late times is due to the departure of W from its 1/z 2 behaviour as it approaches a local maximum (see figure 8) shortly after z = 1, raising the reaction rate momentarily and bringing the solution back slightly closer to equilibrium. Before this resonance effect, which is difficult to estimate analytically, the asymmetry after initial decoupling from N eq B−L is given approximately by eqs. (51), (52). This gives a good approximation to the full numerical result and is a useful guide in scanning the parameter space of M 1 and K.

Decoupling

Finally, we wish to briefly emphasise a few subtleties concerning the nature of the decoupling temperature of the lepton number violating interactions. Traditionally one argues that the lepton asymmetry freezes out at z = z d when Γ(z d )/H(z d ) ≃ 1, and estimates the freeze-out asymmetry by N eq B−L (z d ). Of course, as is clear from figure 5, the decoupling of the asymmetry is not a sharp transition but takes place gradually over a range of values of z in the vicinity of z D . While this is not in itself a big difference in terms of z ∼ z D , because of the extremely steep 1/z 5 dependence characteristic of the gravitationally-induced N eq B−L it can translate into a large difference in the asymmetry itself: the naive estimate would simply evaluate N eq B−L at z D , but the actual value to which N B−L freezes between 0.01 ≤ z ≤ 1 is in fact N B−L ≃ 2 × 10 −8 , meaning that the true value is two orders of magnitude different from the naive approximation. In general, N B−L is over-abundant compared to N eq B−L unless Γ/H is quite a bit larger than 1, since a high reaction rate is required to keep up with the rapidly falling equilibrium value. The short calculation below converts such a frozen value into the corresponding baryon-to-photon ratio.
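A back-of-envelope conversion, using only numbers quoted above: the displayed formula for η B is not reproduced in this text, so the conversion η B = C sph · N B−L / f is assumed here as the standard form, with f and C sph taken from the values given earlier and N B−L taken as the frozen value quoted in the decoupling discussion.

```python
# Hedged conversion of a frozen B-L abundance into a baryon-to-photon ratio,
# assuming the standard form eta_B = C_sph * N_BL / f.
f_dilution = 2387 / 86      # photon production factor quoted in the text
c_sph = 28 / 70             # sphaleron efficiency factor quoted in the text
n_bl_frozen = 2e-8          # frozen N_B-L quoted in the decoupling discussion

eta_b = c_sph * n_bl_frozen / f_dilution
print(f"eta_B ~ {eta_b:.2e}")   # within an order of magnitude of the observed ~1e-10
```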
This means that the estimates used in, for example, [8] for the relic asymmetry in general gravitational leptogenesis models based on an effective interaction of the form L ∼ ∂ µ R j µ B−L /M 2 may be increased by a few orders of magnitude when the full Boltzmann analysis is used. Of course, since as we have seen the subsequent local maximum in Γ/H shown in figure 8 in the vicinity of z = 1 causes the asymmetry to drop again in a way which is difficult to estimate analytically, it is clear that the only way to determine the final asymmetry reliably is to solve the full Boltzmann equation numerically as in figure 7.

Conclusions

In this Letter, we have presented a detailed study of the dynamics of lepton number generation in the early Universe, taking into account both the conventional out-of-equilibrium decays of the sterile neutrinos in the see-saw model and our new mechanism of gravitational leptogenesis [1,2]. This has demonstrated clearly for the first time that this gravitational mechanism indeed provides a viable scenario to explain the observed baryon asymmetry η B ≃ 10 −10 . This study, which sheds new light on traditional perspectives in leptogenesis, involved a full numerical analysis of the coupled Boltzmann equations, modified to include the non-vanishing equilibrium asymmetry generated at two-loop order by the gravitational interactions. The parameter space of high-energy Yukawa phases was explored fully, showing that the CP violation in the gravitational and sterile neutrino decay sectors can be dialled independently. Whether the final asymmetry is determined by the gravitational or decay effects is then controlled by the size of the CP-violating decay parameter ε 1 . In particular, even in the limit of minimal ε 1 ≃ 0, we showed that the observed value of η B may be obtained for otherwise standard choices of neutrino parameters in the see-saw model. This establishes radiatively-induced gravitational leptogenesis as a viable mechanism for explaining the matter-antimatter asymmetry of the Universe.
5,663
2016-04-27T00:00:00.000
[ "Physics" ]
The basis property of generalized Jacobian elliptic functions The Jacobian elliptic functions are generalized to functions including the generalized trigonometric functions. The paper deals with the basis property of the sequence of generalized Jacobian elliptic functions in any Lebesgue space. In particular, it is shown that the sequence of the classical Jacobian elliptic functions is a basis in any Lebesgue space if the modulus $k$ satisfies $0 \le k \le 0.99$. Introduction The Jacobian elliptic function sn(x, k) and the complete elliptic integral of the first kind K(k) play important roles in expressing exact solutions of, for example, the pendulum equation u ′′ + λ sin u = 0, a typical bistable equation u ′′ + λu(1 − u 2 ) = 0, and so on. Now we will propose new generalization of sn(x, k) and K(k). For constants p, q ∈ (1, ∞) and k ∈ [0, 1), we define a generalized Jacobian elliptic function sn pq (x, k) : [0, K pq (k)] → [0, 1] with a modulus k as where p ′ = p/(p − 1) and We extend the domain of sn pq (x, k) to R so that we obtain a 4K pq (k)-period function like the sine function, and call the extended function sn pq (x, k) again. Then, sn 22 (x, k) = sn(x, k) and K 22 (k) = K(k) when p = q = 2; and sn pq (x, 0) = sin pq x and K pq (0) = π pq /2 when k = 0, where sin pq x is the generalized trigonometric function and π pq is the half period of sin pq x, which will be introduced in Section 3 below. Therefore, sn pq (x, k) is also generalization of both sn(x, k) and sin pq x. In the previous paper [22], the author proposed another generalization of sn(x, k) and K(k), and applied them to bifurcation problems for p-Laplacian. As we will see in Section 3, sn pq (x, k) and K pq (k) above are defined in a slightly different way from those in [22], but sn pq (x, k) also satisfies the following equation involving p-Laplacian nevertheless. While the generalization of K(k) of [22] converges to a finite value as k → 1 when p > 2, the K pq (k) diverges to ∞ as k → 1 for any p > 1. In this sense, sn pq (x, k) has closer properties to sn(x, k) than the function defined in [22]. In the present paper, we will show the basis property of functions f n (x, k) = sn pq (2nK pq (k)x, k), n = 1, 2, . . . , (1.1) which means that the family of these functions is a basis in Banach spaces. Here, a sequence {ϕ n } in a Banach space X is called a basis for X if for every u ∈ X there exists a unique sequence of scalars {α n } such that u = ∞ n=1 α n ϕ n in the strong sense. In general, when we try to find an approximation of a given function by a family of functions {ϕ n }, it is desirable that {ϕ n } is a basis which approximates to the function with convergence of higher order as possible. Concerning this, for example, we have known an interesting study [3] of Boulton and Lord. They study the best index q for which {sin q (nπ q x)} approximates well to the solution of p-Poisson problem, where sin q x = sin qq x and π q = π qq . The basis property is quite fundamental to such a stimulating problem. When p = q = 2, the sequence (1.1) is the family of Jacobian elliptic functions {sn(2nK(k)x, k)}. In this case, Craven [4] proves that if the modulus k satisfies 0 ≤ k ≤ 0.99, then the sequence is complete in L 2 (0, 1). Since the sequence is not orthogonal, we have no guarantee of its basis property. We will give another corollary of Theorem 1.1, whose conditions are verified easier than (1.2) and (1.3). Corollary 1.4. Let p, q ∈ (1, ∞) and r = max{p ′ , q}. 
If (1.4) holds, then {f n (x, k)} forms a Riesz basis of L 2 (0, 1) and a Schauder basis of L α (0, 1) for any α ∈ (1, ∞) when the modulus k satisfies the corresponding bound. The paper is organized as follows. In Section 2 we give a summary of general properties of bases in Banach spaces. In Section 3 we recall the generalized trigonometric functions and introduce new generalization of Jacobian elliptic functions. In Section 4 we observe properties of the generalized Jacobian elliptic function sn pq (x, k) and its quarter period K pq (k). To show that the sequence (1.1) is a basis in L α (0, 1) for any α ∈ (1, ∞), we depend on the strategy of Binding et al. [1] and Edmunds et al. [10]. Our main device is a linear mapping T of L α (0, 1), satisfying T e n = f n , where e n = sin(nπx), which decomposes into a linear combination of certain isometries. In Section 5 we show that T is a bounded operator for p ∈ (1, ∞). Section 6 is devoted to the proof of boundedness of the inverse for the ranges (1.2) and (1.3).

Properties of Bases

In this section we will give a summary of properties of bases in Banach spaces. For details, we can refer to Gohberg and Kreȋn [12], Higgins [13], and Singer [21]. A sequence {x n } in an infinite dimensional Banach space X is called a basis of X if for every x ∈ X there exists a unique sequence of scalars {α n } such that $x = \sum_{i=1}^{\infty} \alpha_i x_i$ (i.e., such that $\lim_{n\to\infty} \| x - \sum_{i=1}^{n} \alpha_i x_i \| = 0$). A basis {x n } of a topological linear space U is said to be a Schauder basis of U if all coefficient functionals f n , n = 1, 2, . . ., are continuous on U. (a) A Bessel basis: there exists a constant c > 0 such that the corresponding lower estimate holds for all finite sequences of scalars α 1 , . . . , α n . (b) A Hilbert basis: there exists a constant C > 0 such that the corresponding upper estimate holds for all finite sequences of scalars α 1 , . . . , α n . (c) A Riesz basis: it is both a Bessel basis and a Hilbert basis, i.e., there exist two constants c > 0 and C > 0 such that both estimates hold for all finite sequences of scalars α 1 , . . . , α n . Example 2.3. In the space X = L p (−π, π), p ∈ (1, ∞), the trigonometric sequence {x n } is a bounded Bessel basis if p ≥ 2 and a bounded Hilbert basis if 1 < p ≤ 2. In particular, it is a Riesz basis if p = 2. We call two sequences {φ n } and {ψ n } in a Banach space X equivalent if there exists a linear homeomorphism (i.e., bounded, linear and invertible operator) T on X such that ψ n = T (φ n ) for every n. Note that by 'invertible' we mean that T −1 exists and is bounded on all of X.

Generalized Functions

This section is devoted to the definitions of two kinds of generalized functions. For any constants p, q ∈ (1, ∞), we define π pq in terms of B and Γ, the Beta and Gamma functions, respectively. Then, for any x ∈ [0, π pq /2] we define sin pq x by the inverse of the corresponding integral. We extend the domain of sin pq x to [0, π pq ] by sin pq x = sin pq (π pq − x), and furthermore, to the whole of R by sin pq (x + π pq ) = − sin pq x, so that sin pq x has 2π pq -periodicity. We can see that π 22 = π and sin 22 x = sin x. Moreover, the function y = sin pq x satisfies the corresponding initial value problem. We agree that π p and sin p x denote π pp and sin pp x when p = q, respectively. In that case, we can also refer to [5,6,7,8,11,15]. Using sin pq x, for x ∈ [0, π pq /2] we also define cos pq x. Clearly, cos pq x is a decreasing function in x from [0, π pq /2] onto [0, 1]. We extend the domain of cos pq x to [−π pq /2, π pq /2] by cos pq x = cos pq (−x), and furthermore, to the whole of R in the same way as sin pq x. Then, cos pq x has 2π pq -periodicity. We can see that cos 22 x = cos x.
An analogue of tan x is obtained by the usual quotient, defined for those values of x at which cos pq x ≠ 0. This means that tan pq x is defined for all x ∈ R except for the points (k + 1/2)π pq (k ∈ Z). We denote cos p x and tan p x as for the case of sin p x. The functions sin p x and cos p x are useful for the Prüfer transformation of half-linear differential equations. For this, see [6,7,11,19]. It is useful to collect formulae for the case p = r ′ and q = r for some r ∈ (1, ∞). We can find many other properties of these functions in [10,14]. Remark 3.1. There are some definitions of cos pq x different from (3.1). For example, Drábek and Manásevich [9] define cos pq x so that (3.5) gives cos p pq x + sin q pq x = 1, which is slightly different from (3.2). The fact that sin pq x satisfies (3.5) is essential, independently of the definition of cos pq x. Proof. Putting 1 − t r = s in the definition of π r ′ r , it suffices to show that tB(t, t) is decreasing on (0, 1). Clearly, the right-hand side is decreasing in t (note that log v < 0), so that tB(t, t) is also decreasing on (0, 1). Furthermore, since lim t→1 tB(t, t) = B(1, 1) = 1 and lim t→0 log tB(t, t) = log 2, we obtain the limiting values. Remark 3.3. We can find another proof of Proposition 3.2 in [10, Lemma 2.4], in which they use the fact that the area of the r-circle |x| r + |y| r = 1 is π r ′ r (see also [16]). We also state the case p = r ′ and q = r for some r ∈ (1, ∞). As mentioned in the Introduction, the author [22] has introduced other generalized Jacobian elliptic functions, which also include both the Jacobian elliptic functions and the generalized trigonometric functions. However, we should note that the definitions above of K pq (k) and sn pq (x, k) are slightly different from those of [22], in which the common exponent of 1 − k q t q in (3.7) and (3.8) is not 1/p ′ but 1/p. On account of this exponent, K pq (k) has asymptotic behavior near k = 1 similar to that of K(k); indeed, lim k→1 K pq (k) = ∞ for any p, q ∈ (1, ∞). To observe the convergence properties of generalized Jacobian elliptic functions as k → 1, we will prepare generalized hyperbolic functions, for which similar definitions can be found in [15]. For x ∈ [0, ∞), we define sinh pq x and extend its domain to R by sinh pq x = − sinh pq (−x). Using sinh pq x, for x ∈ [0, ∞), we define cosh pq x and extend its domain to R by cosh pq x = cosh pq (−x). The function tanh pq x is defined as the quotient. We agree that sinh p x, cosh p x and tanh p x denote sinh pp x, cosh pp x and tanh pp x when p = q, respectively. Putting p = q and t p = s p /(1 − s p ) in (3.9), it is easy to prove the following properties for any p, q ∈ (1, ∞) and all x ∈ R.

4 Properties of sn pq (x, k) and K pq (k)

In this section we observe some properties of the generalized Jacobian elliptic function sn pq (x, k) and its quarter period K pq (k). The function y = sn pq (x, k) satisfies sn pq (0, k) = 0, sn pq (K pq (k), k) = 1, 0 < sn pq (x, k) < 1 for x ∈ (0, K pq (k)), and y ∈ C 1 [0, K pq (k)]. When p > 2, we see that y ′′ ∈ L 1 (0, K pq (k)). To obtain the estimate of K r ′ r (k) in Lemma 4.7 below, we state Tchebycheff's integral inequality from [18,20]. If one of the functions f or g is nonincreasing and the other nondecreasing, then the inequality in (4.2) is reversed. Concerning the following Lemmas 4.2-4.6, we can refer to [2,10,14] for the corresponding results on sin pq x and π pq , which are the case k = 0 of sn pq (x, k) and K pq (k). A short numerical illustration of sin pq x, K pq (k) and sn pq (x, k) is given below.
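Because the displayed definitions (3.7)-(3.8) are not reproduced in this text, the sketch below assumes a plausible convention (exponent 1/p ′ on both 1 − t q and 1 − k q t q, with p ′ = p/(p − 1)) and simply inverts the defining integral numerically. Treat it as an illustration of the construction only; the self-checks use p = q = 2, where any reasonable convention reduces to the classical sine and elliptic functions.

```python
# Numerical sketch of sin_pq, K_pq(k) and sn_pq(x, k) by inverting an assumed
# defining integral (illustration only, not the paper's exact formulas).
from scipy.integrate import quad
from scipy.optimize import brentq

def arcsn_pq(x, p, q, k=0.0):
    """Assumed inverse function: integral of (1-t^q)^(-1/p') (1-k^q t^q)^(-1/p') dt."""
    pp = p / (p - 1.0)
    f = lambda t: (1.0 - t**q) ** (-1.0 / pp) * (1.0 - (k**q) * t**q) ** (-1.0 / pp)
    val, _ = quad(f, 0.0, x)
    return val

def K_pq(p, q, k):
    return arcsn_pq(1.0, p, q, k)                 # quarter period

def sn_pq(x, p, q, k):
    """Invert arcsn on the first quarter period by root finding."""
    return brentq(lambda s: arcsn_pq(s, p, q, k) - x, 0.0, 1.0)

p, q = 2.0, 2.0
print("pi_22/2 =", K_pq(p, q, 0.0), " (should be ~pi/2)")
print("K(0.5)  =", K_pq(p, q, 0.5), " (classical complete elliptic integral)")
print("sn(K/2) =", sn_pq(0.5 * K_pq(p, q, 0.5), p, q, 0.5))
```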
Lemma 4.7 extends an estimate of K(k) by Qi and Huang [20, Eq. (10)] to one of K r ′ r (k) for any r ∈ (1, ∞).

Properties of the Function sn pq (x, k)

Since f n (x, k) = f 1 (nx, k), it suffices to observe properties of f 1 (x, k) = sn pq (2K pq (k)x, k) in order to study those of (1.1). Proof. First we will show that sn pq (2K pq (k)x, k) is decreasing in p ∈ (1, ∞). Let 1 < p < r < ∞. Since g(t) is increasing in t ∈ (0, 1) when p < r, it is easy to see that G(t) < 0, i.e., f ′ (t) < 0 for each t ∈ (0, 1]. Therefore we conclude that sn pq (2K pq (k)x, k) is decreasing in p > 1. The assertions for q and k are proved in a similar way; it is enough to replace f (t) by the corresponding quotient for 1 < q < r and by sn −1 pq (t, l)/sn −1 pq (t, k) for 0 ≤ k < l < 1, respectively, and to replace g(t) accordingly. Proof. The assertion on p immediately follows from the definition; the remaining parts also follow from the form of K pq (k). Lemma 4.5. Let p, q ∈ (1, ∞) and k ∈ [0, 1). Proof. Putting 1 − t q = x p ′ in (3.7), the integral on the right-hand side is equal to K q ′ p ′ (k q p ′ ). Proof. The case p ′ ≤ q follows from Lemma 4.2 alone. The case p ′ > q is proved similarly after using Lemma 4.5. Moreover, putting sin r ′ r θ = t, and similarly putting cos r ′ r θ = t and t r = 1−k r k r s r 1−s r , we have established the first and second inequalities of (4.3). Finally, from the equality above, we obtain the third inequality of (4.3). The graphs of the terms of (4.3) for r = 2 are shown in Figure 1.

[Figure 1 caption: The graphs of the terms of (4.3) for r = 2. The black line and the gray lines indicate K r ′ r (k) and the others, respectively.]

The Operator T

Let α ∈ (1, ∞) be an arbitrary number. In this section, we will relate the functions f n to the sine series e n (x) = sin(nπx) ∈ L α (0, 1), n = 1, 2, . . . , which form a basis of L α (0, 1). Proof. By Example 2.3, any function in L α (−1, 1) has a unique sine-cosine series representation. For any f ∈ L α (0, 1), we can thus represent its odd extension to L α (−1, 1) uniquely in a sine series, so the e n form a basis of L α (0, 1). Since {e n } and {f n } are equivalent, according to Proposition 2.4 the same is true for the f n . It follows from Proposition 2.1 that they form a Schauder basis of L α (0, 1). The argument for a Riesz basis when α = 2 is similar and follows from Proposition 2.5. In the remainder of this section we define T as a linear combination of certain isometries of L α (0, 1). Then we show that T is a bounded operator satisfying T e n = f n , n = 1, 2, . . ., for all p, q ∈ (1, ∞). The functions f n have Fourier sine series expansions. An argument involving symmetry with respect to the midpoint x = 1/2 easily shows that f 1 (l) = 0 whenever l is even. On account of this property, we can express f n (l) in terms of f 1 (l). In what follows we will often denote f 1 (m) by τ m . We first find a bound on |τ m | which will be crucial in the definition of T below. Since τ m = 0 if m is even, we may assume that m is odd. Integration by parts ensures the bound, where the integrals exist because f ′′ 1 ∈ L 1 (0, 1); in fact (4.1) shows this. In order to construct the linear operator T , we next define isometries M m of the Banach space L α (0, 1) by M m g(x) := g * (mx), m = 1, 2, . . ., where g * is the successive antiperiodic extension of g over R + , with g * = g on [0, 1]. Notice that M m e n = e mn . Proof of Corollary 1.3. Let 1 < p ′ ≤ q < ∞.
Then r = q, and it suffices to show that (1.2) is satisfied. Since we have the inequality tB(t, t) ≤ 2 from the proof of Proposition 3.2, we obtain the required estimate, so that (1.2) holds. Proof of Corollary 1.4. Suppose that q and r satisfy (1.4). Since tB(t, t) ≤ 2, as in the proof of Corollary 1.3, we obtain the required estimate.
4,390.4
2013-10-02T00:00:00.000
[ "Mathematics" ]
Dimensional Accuracy and Surface Roughness Analysis for AlSi10Mg Produced by Selective Laser Melting (SLM) Selective Laser Melting (SLM) is an Additive Manufacturing (AM) technique that builds a 3D part layer by layer by melting the top surface layer of a powder bed with a high-intensity laser according to sliced 3D CAD data. AlSi10Mg alloy is a traditional cast alloy that is broadly used in the die-casting process and in the automotive industry owing to its good mechanical properties. This paper investigates the requirements of SLM for rapid tooling applications. The feasibility study is carried out by examining the surface roughness and dimensional accuracy of a benchmark part produced by the SLM process with constant parameters. The benchmark produced by SLM shows the potential of SLM in manufacturing applications, particularly in moulds.

Introduction

Selective Laser Melting (SLM) is a layer-wise material addition technique that allows generating complex 3D parts by selectively consolidating successive layers of powder material on top of each other, using thermal energy supplied by a focused, computer-controlled laser beam [1][2][3][4]. The competitive advantages of SLM are geometrical freedom, mass customisation and material flexibility [5]. Aluminium-silicon alloys are characterised by sound castability, good weldability and excellent corrosion resistance. Owing to their attractive combination of mechanical properties, high heat conductivity and low weight, the Al-Si alloys have found a large number of applications in the automotive, aerospace and domestic industries [2]. SLM allows parts to be built additively to near net shape rather than by removing waste material. Traditional manufacturing techniques have a relatively high set-up cost (e.g. for creating a mould), while SLM has a high cost per part, mostly because it is time-intensive, and it is advisable when only very few parts are to be produced. Much of the output of SLM technologies is lightweight parts for aerospace applications. This technology is able to manufacture complex shapes where traditional manufacturing constraints, such as tooling and physical access to surfaces for machining, restrict the design of components [6]. Additive Manufacturing (AM) also results in reduced emissions and very low wastage, because the unused powder can be recycled within the process itself. Large amounts of energy and resources are consumed to produce tools like dies and moulds [7]. Besides, AM techniques provide almost unrivalled design freedom without the need for part-specific tooling [8]. To employ SLM as a manufacturing technique, the laser-melted parts have to comply with strict material requirements regarding mechanical and chemical properties, and the process must guarantee high accuracy and appropriate surface roughness [9]. In this study, a benchmark product was produced by SLM with constant parameters. The process requirements for rapid tooling applications were characterised, and the feasibility study was carried out by examining the surface roughness and dimensional accuracy. The benchmark produced by SLM shows the potential of SLM in manufacturing applications, particularly in moulds.

Methodology

AlSi10Mg is a typical casting alloy with good casting properties; it is typically used for cast parts with thin walls and complex geometry, and it provides good strength, hardness and dynamic properties and is therefore also used for parts subject to high loads [10].
SLM is an AM process that uses a laser beam to selectively melt and fuse successive layers of powder into solid metal parts [11]. SLM equipment comprises a laser system, laser beam focusing optics, a powder feeding system (loader and roller or coater) and a control centre, as shown in Figure 1. SLM is a cyclical process consisting of three steps that are repeated until the end of the build. First, a recoater applies an even coating of metal powder in layer thicknesses of 20 μm to 100 μm. Then, exposure to the laser beam solidifies the powder: absorption of the laser radiation heats the metal powder above the melting temperature of the metal, which bonds the exposed areas of the current layer to the solidified areas of the layer beneath through metallurgical melting. The AlSi10Mg powder was sieved to 63 microns, with a loose density of 7.68 g/cm3 and the chemical composition shown in Table 1, and was supplied by LPW Technology Limited, UK. For this study, the default SLM parameters shown in Table 2 were used to produce a benchmark sample. Since good surface roughness can be an important requirement in rapid tooling, a roughness analysis was performed. The surface roughness depends on many factors: material, powder particle size, layer thickness, laser and scan parameters, scan strategy and surface post-treatment [12]. The roughness of the top and of two side surfaces (Figure 4) was measured along different directions, as shown in Figure 2(a). Figure 2(b) shows a benchmark model consisting of different features such as stairs, cylinders, circumferential surfaces and rectangular cuts, developed to test the influence of sloping angle, dimensional accuracy, measurement accuracy and the difference between top and bottom surfaces (Figure 4). The sloping angles of the benchmark's top planes range from 0º to 90º and those of the bottom planes from 30º to 90º, while the sloping angle of the horizontal holes changes continuously. All surface roughness tests were carried out with a roughness measuring instrument, and mean Ra and Rz values were calculated with a cut-off length of 2.5 mm, according to the DIN 4768 standard [12]. All geometrical features of the benchmark were measured three times with tactile probes on a numerically controlled 3D coordinate measuring machine (CMM), as shown in Figure 3. Results and Discussion The breakthrough of SLM as a Rapid Manufacturing technique will depend on reliability, performance and economic aspects such as production time and cost [13]. These factors cannot be characterised in general, but some were investigated in this paper for the mould application. The results of the roughness analysis on the benchmark model shown in Figure 2(b) are presented in Table 4. The building orientations of the side and top surfaces differ: the top surface is built in the x-y plane while the side surfaces are built in the x-z plane (Figure 4). This difference could be expected to produce different surface roughness values; however, neither the top nor the side surfaces showed significant differences with respect to measurement direction. In spite of a lower powder particle size, the AlSi10Mg samples showed higher roughness in terms of Rz (Table 4); this is attributed to the physical properties of the material, the melt pools being more stable for silicon.
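As a companion to the roughness evaluation described above, the sketch below shows one way mean Ra and Rz values could be computed from a measured profile; the profile data, the five-segment split and the function name are illustrative, and a full DIN 4768 evaluation would also apply the standardised filtering that the roughness tester performs internally.

```r
# Minimal sketch (illustrative, not the instrument's own algorithm): Ra and a DIN-style Rz
# from an already filtered and levelled roughness profile, in micrometres.
ra_rz <- function(z, n_segments = 5) {
  z <- z - mean(z)                                      # reference the mean line
  ra <- mean(abs(z))                                    # Ra: mean absolute deviation
  seg <- split(z, cut(seq_along(z), n_segments, labels = FALSE))
  rz <- mean(sapply(seg, function(s) max(s) - min(s)))  # Rz: mean peak-to-valley over segments
  c(Ra = ra, Rz = rz)
}

set.seed(1)
profile_um <- 5 * sin(seq(0, 20 * pi, length.out = 2000)) + rnorm(2000, sd = 1)
print(round(ra_rz(profile_um), 2))
```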
To obtain representative readings of average surface roughness, measurements were taken at the angles, curves and other features on the top and bottom of the benchmark (Figure 4), and on the straight faces of its sides (Figure 4), as shown in Table 4. The surface roughness depends on many factors such as material, powder particle size, layer thickness, laser and scan parameters, scan strategy and surface post-treatment [14]. The surface roughness of a sloping plane depends on the sloping angle because of the stair-step effect caused by layer-wise production. In addition, the roughness of top surfaces differs strongly from that of bottom surfaces. The roughness values in this work therefore vary with the angles of the benchmark features, as a consequence of the layer thickness and the particle size of the AlSi10Mg powder. The dimensional accuracy analysis of the SLM benchmark model is shown in Table 5 to Table 8. Figures 5(a) to 5(d) show the different benchmark features: the 90º edge (stair) (Figure 5a), cylinders (Figure 5b), the rectangular slot (Figure 5c) and the variable rectangular slots (Figure 5d). Mean values and maximum deviations between measured and designed dimensions are stated absolutely (mm) and relative to the nominal dimension (percentage). All small features of the benchmark shown in Figures 5(a) to (d) were built successfully. The highest deviation percentage, 11.66%, is found for the cylinders of Figure 5(b); the other values lie between 0.10% and 3.30%. Negative deviation values indicate that the actual sample expanded beyond the designed dimension, while positive values indicate that the part shrank. The dimensional accuracy found in this study is acceptable for mould fabrication, and this trend is useful for mould designers when predicting the final dimensions of a mould. Conclusion AlSi10Mg has been characterised for rapid manufacturing by the SLM process. After analysis of the roughness and dimensional accuracy, it can be concluded that the roughness values on the bottom of the benchmark are higher than those on the top and sides because, at the start of the SLM process, thermal conductivity influences the first layers of the features of the benchmark. For mould manufacturing from AlSi10Mg by SLM, there are both positive and negative deviations between the measured and designed values of the benchmark; nevertheless, the range of the deviations is acceptable. Lastly, SLM can produce even complex geometries, which shows that SLM enables efficient production of moulds and complex parts with good surface finish, dimensional accuracy and economic potential. Fig. 2. (a) Indication of surface roughness measurements on blocks; (b) benchmark model with different features and sloping angles for top and bottom planes. The benchmark model shown in Figure 2(b) was characterised to investigate the process accuracy (in the x, y and z directions) and the measurement accuracy for cylinders, stairs, rectangular cuts and other angled features. A 2 mm thin wall is included to indicate warpage due to thermal stresses. The features listed in Table 3 were developed to evaluate the achievable precision and resolution of the process. Fig. 5 (a) to (d). Benchmark of different parts for dimensional analysis. Table 2. SLM parameters for producing the benchmark for dimensional accuracy and surface roughness. Table 3.
Developed features on the benchmark models. Table 4. Average surface roughness for the top, bottom and side of the benchmark. Table 5. Dimension measurements of stairs as shown in Figure 5(a). Table 6. Dimension measurements of cylinders as shown in Figure 5(b). Table 7. Dimension measurements of the rectangular slot at different relative positions as shown in Figure 5(c). Table 8. Dimension measurements of variable rectangular slots as shown in Figure 5(d).
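The deviation bookkeeping behind the dimensional-accuracy tables above can be sketched as follows; the nominal and measured values are invented for illustration, and the sign convention follows the text (a negative deviation means the built feature expanded beyond the design, a positive one means it shrank).

```r
# Minimal sketch of the deviation calculation reported in Tables 5-8 (toy values only).
nominal_mm  <- c(10.00, 5.00, 20.00)       # hypothetical designed dimensions
measured_mm <- c(10.12, 4.95, 19.80)       # hypothetical means of three CMM readings

dev_mm  <- nominal_mm - measured_mm        # absolute deviation (mm); negative = expansion
dev_pct <- 100 * dev_mm / nominal_mm       # deviation relative to the nominal dimension (%)

print(data.frame(nominal_mm, measured_mm, dev_mm, dev_pct))
```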
2,371.6
2016-01-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Models of knot and stem development in black spruce trees indicate a shift in allocation priority to branches when growth is limited The branch autonomy principle, which states that the growth of individual branches can be predicted from their morphology and position in the forest canopy irrespective of the characteristics of the tree, has been used to simplify models of branch growth in trees. However, observed changes in allocation priority within trees towards branches growing in light-favoured conditions, referred to as ‘Milton’s Law of resource availability and allocation,’ have raised questions about the applicability of the branch autonomy principle. We present models linking knot ontogeny to the secondary growth of the main stem in black spruce (Picea mariana (Mill.) B.S.P.), which were used to assess the patterns of assimilate allocation over time, both within and between trees. Data describing the annual radial growth of 445 stem rings and the three-dimensional shape of 5,377 knots were extracted from optical scans and X-ray computed tomography images taken along the stems of 10 trees. Total knot to stem area increment ratios (KSR) were calculated for each year of growth, and statistical models were developed to describe the annual development of knot diameter and curvature as a function of stem radial increment, total tree height, stem diameter, and the position of knots along an annual growth unit. KSR varied as a function of tree age and of the height to diameter ratio of the stem, a variable indicative of the competitive status of the tree. Simulations of the development of an individual knot showed that an increase in the stem radial growth rate was associated with an increase in the initial growth of the knot, but also with a shorter lifespan. Our results provide support for ‘Milton’s Law,’ since they indicate that allocation priority is given to locations where the potential return is the highest. The developed models provided realistic simulations of knot morphology within trees, which could be integrated into a functional-structural model of tree growth and above-ground resource partitioning. INTRODUCTION Models of carbon assimilate allocation in trees generally consider branches to be part of either the woody shoot or the crown (Landsberg & Waring, 1997;Mathieu et al., 2009). However, considering branch xylem as a separate sink can extend the practical applicability of functional-structural tree models (FSTMs; Sievänen et al., 2000) to include wood properties considerations. Knots are formed when branches are occluded by growing tree stems, and exert a strong influence on the end-use characteristics of wood products (Buksnowitz et al., 2010). Knot formation is driven by complex spatiotemporal interactions between a tree and its environment. Thus, knowledge of the biological processes that regulate assimilate partitioning in trees could improve models of branch growth. The branch autonomy principle (Van der Wal, 1985;Sprugel & Hinckley, 1988) has been used in some FSTMs to simplify the modelling process (Bosc, 2000;Kull & Tulva, 2000). The branch autonomy principle states that the growth of individual branches can be predicted from their morphology and position in the forest canopy, irrespective of tree characteristics. Models that incorporate this principle can also predict mortality based on the growing space (Mitchell, 1975) or the amount of light (Nikinmaa & Hari, 1990) available to individual branches. However, there is an important limitation to this principle. 
By comparing the height of the lower limit of the living crown in trees of different sizes, Sprugel (2002) showed that branches on suppressed trees were more likely to survive and grow than the equivalent branches on dominant trees. This implied shift in allocation priority within trees towards branches in light-favoured positions, referred to as 'Milton's Law of resource availability and allocation' (Sprugel, 2002), suggests that assimilates are invested where the potential return is highest. This is consistent with the results of Nikinmaa et al. (2003), who obtained improved predictions of crown development when considering both the position and the light environment of branches. However, experimental confirmation of Milton's Law is generally restricted to static assessments of the location of the crown base in even-aged forest stands (Valentine et al., 2013). Branch ontogeny can be studied in long-term experiments (Pretzsch, 2005), but repeated measurements on the same trees are time-consuming and costly. One solution to this problem is to use empirical branch distribution models to simulate the temporal development of tree and branch growth using cross-sectional data, i.e., observations of the number, location and size of branches made on trees of different ages (Colin & Houllier, 1991; Achim et al., 2006; Weiskittel, Maguire & Monserud, 2007). However, the simplicity of the approach comes at the expense of reduced accuracy for some branch measurements (Duchateau et al., 2013a). More recently, non-destructive techniques for rapidly generating high-resolution data have been developed, such as infrared imaging, optical scanning, magnetic resonance imaging (MRI), and computed tomography (CT) using X-rays or gamma rays (Moberg, 2001; Longuetaud et al., 2012; Dutilleul, Han & Beaulieu, 2014). These innovations allow the use of internal data to simultaneously reconstruct stem and knot growth over time. In this study we present models linking knot ontogeny to the secondary growth of the main stem in black spruce (Picea mariana (Mill.) B.S.P.), a dominant species in the North American boreal forest. We used data from high-resolution CT scans of tree stems to reconstruct the history of both stem and knot development, with the aim of developing models that would apply in an FSTM framework. First, we tested the hypothesis that the ratio of branch to stem growth was dependent on stem characteristics indicative of the competitive status of the tree. We then developed statistical models for predicting the evolution of individual knot diameter and trajectory using a series of predictors related to the position in the tree, stem radial growth, and other general stem characteristics. This allowed us to test 'Milton's Law' using longitudinal data, i.e., repeated measurements of branch and stem growth over time. This approach allowed us to make detailed simulations of knot development while considering the variation in assimilate partitioning between trees. Tree sampling Sample trees were collected from seven naturally-regenerated, unmanaged forest stands in the North-Shore region of Quebec, Canada. All sampling locations were part of a network of sites established to study the growth of spruce-moss forests after fire (Barrette, Pothier & Ward, 2013; Torquato et al., 2014; Ward, Pothier & Paré, 2014).
At the time these plots were established, efforts were made to maintain site characteristics (i.e., surface deposit, topographic position, exposure and soil drainage) as constant as possible and representative of mesic conditions (Ward, Pothier & Paré, 2014). Because CT scanning is costly and the associated data processing time-consuming, we worked with a limited number of sample trees. In each of the seven stands, two trees were randomly selected for destructive sampling. However, four trees were omitted from the analysis due to missing discs and the presence of wood decay. Of the ten trees in our final sample, eight came from even-aged plots that had regenerated after fires dating back to between 66 and 152 years (Bouchard, Pothier & Gauthier, 2008). Two more trees (T09 and T10) were selected from one uneven-aged plot where the time since the last stand-replacing fire exceeded 200 years. The sample trees had a wide range of ages, crown size and stem dimensions (Table 1; note that the base of the crown was defined as the location of the lowest pseudo-whorl containing at least one live branch, above which all pseudo-whorls contained at least one live branch). Annual knot data After felling, each tree was cut into 2.5-m logs, giving a total of 41 logs that were then transported to the Institut National de la Recherche Scientifique in Quebec City and scanned using a Somatom Sensation 64 CT scanner (Siemens Medical Solutions USA, Inc., Malvern, Pennsylvania, USA). Each log was scanned at 2-mm intervals along its longitudinal axis with a 2-mm-wide X-ray beam (120 kV-50 mA), so that the scanned segments were contiguous. The pixel size was 0.35 mm × 0.35 mm in the transverse direction. Knots were delineated on the CT images using purpose-built software (Longuetaud et al., 2012). On successive images, the tangential limits of each knot were manually delineated with a series of points (Fig. 1A). A second purpose-built software program named 'BIL3D' (Colin et al., 2010) was developed to visualise the position and 3D geometry of each knot using the Cartesian coordinates of each point (Fig. 1B). The series of points representing the tangential limits of the knot were interpolated using spline curves. This allowed us to position the central axis (as the middle of both curves) and diameter (as the distance between the curves, assuming a circular cross section) of each knot from its point of origin to the bark. In a database, the diameter (D) of the knot was recorded at intervals of 1 cm from the stem's pith in the radial direction. Similarly, the position of the central axis of the knot along the longitudinal stem axis (Z, referred to as the 'trajectory') was recorded at intervals of 1 cm from the stem's pith. This way, we obtained a representation of the geometric profiles of 5,377 knots. A more detailed description of the knot reconstruction method was presented by Duchateau et al. (2013a). The demarcation between stem and knot xylem cannot be considered as perfectly discrete. Knot profiles were therefore extracted from the CT images by manually delineating the high-density wood corresponding to a knot from the surrounding lower-density stem wood. Although the transition was generally clear enough to ensure accuracy (Fig. 1), the knot reconstruction process produced some localized irregularities that did not reflect the true shape of the knots. For this reason, we chose to smooth the radial profiles of each knot using a combination of two Weibull equations, which can reproduce a wide variety of knot profiles (Duchateau et al., 2013a).
This also had the advantage of providing a parametric description of each knot that was dependent on the radial position within the stem. It is possible, however, that abrupt variations in knot shape were missed due to the smoothing process. Knot development at a given radial position (l) was reconstructed using the diameter (D_l) and trajectory data (Z_l). The same Weibull equation with an additional linear term was used to model both series of D_l and Z_l measurements (Eq. (1)), where y_l represents either the D_l or Z_l values (mm), l is the distance from the stem's pith in the radial direction (mm), R_max is the total length (mm) of the knot along the stem's radial direction, and α, β and µ are parameters to be estimated empirically. The functions were fitted to each knot independently using the nls function (stats package) in the R statistical programming environment (R Core Team, 2014). The models for both D_l and Z_l converged for 95% of the knots in the database. Visual examination revealed that non-convergent knots were generally small and sinuous. Indeed, convergent knots represented 98% of the total volume of knots in the entire dataset, which we considered representative of the full history of knot growth in our sample trees. Annual ring data from the main stem The model presented by Duchateau et al. (2013a) only made static predictions of knot shape based on external branch characteristics. To meet the objective of this study of linking knot ontogeny to the secondary growth of the main stem, it was necessary to reconstruct the yearly growth of the stem at its interface with each knot. Annual ring data from the main stem were difficult to obtain from the CT images due to factors such as narrow rings and the higher moisture content of the sapwood. One-cm-thick discs were hence cut from the ends of each log to reconstruct the growth history of the stems. Discs were optically scanned and annual ring boundaries were delineated in the four cardinal radial directions using image analysis software (WinDENDRO™; Régent Instruments, Quebec City, Quebec, Canada, 2005; Guay, Gagnon & Morin, 1992). To link annual changes in knot geometry with stem radial increments, a first linear interpolation was made, in each cardinal direction, between the widths of each matching ring from both ends of each log (Fig. 2A). For rings present near the pith of the lower disc but absent from the upper disc, we used the mean slope and intercept of linear interpolations derived for the first five complete rings. This way, we obtained estimates of annual ring widths at any height along the stem in the four main cardinal directions. To obtain estimates of stem growth in the azimuthal direction of a knot (Fig. 2B), a second interpolation was made from the two surrounding cardinal directions for which we had annual ring width measurements. In this case we used a weighted average of the two known ring width series located on each side of the knot. We defined α_r as the azimuth angle between a knot and one of the two cardinal directions on each side. The weighting factor was calculated as (90 − α_r)/90, which approached a value of 1 if the knot orientation was close to one of the two cardinal directions. Due to irregularities in stem shape, the resulting series of stem rings associated with a given knot did not end in the same exact location as the knot-stem interface, which was located on the CT images.
Therefore, a small correction constant was added (or subtracted) to each ring in the series to ensure that both matched exactly. These linear interpolations of annual ring width variation between two sample discs were a simplification, since in reality growth rings deviate around knots (Pellicane & Franco, 1994). However, given the impossibility of extracting the position of growth rings along each knot directly from the CT images, this was considered a good approximation. In a final step in the knot and stem growth reconstruction process, we traced back the annual limits of primary growth. Each annual elongation of the shoot was defined as a growth unit (GU). Like other conifers, black spruce produces several nodal and internodal branches within a growth unit. Nodal branches are those forming a whorl at the top of a GU (Achim et al., 2006; Auty et al., 2012). Botanically, the branches of conifers do not technically originate from the same vertical position, so these groups are referred to as 'pseudo-whorls' (Fisher & Honda, 1979). However, this distinction was not apparent at the resolution of our CT-scanning measurements. Therefore, we summed the basal areas of all branches that originated from the same CT image, which facilitated the identification of the pseudo-whorls of branches that were used as the limits of annual GUs. To avoid large errors, we ensured that the number of GUs matched the difference in the number of annual rings measured at both ends of each log. A more detailed description of the growth unit identification method is presented in Duchateau et al. (2013b). Once we had obtained a full description of both the knots and the stem shape, a final step was to obtain the annual increments in knot diameter (ΔD_t) and trajectory (ΔZ_t). These were computed using the intersection points between stem rings and knots, and by considering the diameter perpendicular to the central axis of the knot at each intersection point (Fig. 3). Tree-level models To examine the variation in biomass allocation between the stem and branches over time, the ratio of knot to stem growth (KSR_i,t, dimensionless) was calculated, for each year of growth (t) in a tree, as the sum of all knot area increments at the surface of the stem divided by the annual basal area increment of the stem at 1.3 m. Because the trees were not scanned all the way to the stem apex, the most recent annual growth rings were incomplete. These were therefore omitted from the analysis so that calculations were made only for years where complete growth data were available. When knots had reached a constant or decreasing diameter they were considered to be dead. To assess the variation of KSR_i,t through the life of the tree, we developed a linear mixed-effects model (Pinheiro & Bates, 2009) describing its evolution as a function of tree height-diameter ratio and tree age. To assess the effect of within-stand competition on KSR_i,t, the ratio (HD_i,t, m/cm) between tree height (H_i,t) and its diameter at breast height (DBH_i,t, measured at 1.3 m) was used as a surrogate for the competitive status of the subject trees at a given age. This ratio is useful because inter-tree spacing is known to strongly affect crown development and hence the radial growth of the stem, whereas it has much less effect on height growth (Weiskittel et al., 2011).
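A minimal sketch of the KSR_i,t and HD_i,t calculations defined above is given below; the data frames and column names are hypothetical, and real knot area increments would come from the reconstructed knot-stem intersections.

```r
# Toy illustration of the tree-level ratios described above.
library(dplyr)

# One row per knot and year: knot area increment (cm^2) at the stem surface.
knot_incr <- data.frame(
  tree = c("T01", "T01", "T01", "T01"),
  year = c(1995, 1995, 1996, 1996),
  knot = c("k1", "k2", "k1", "k2"),
  area_incr = c(0.8, 0.5, 0.6, 0.4)
)

# One row per tree and year: stem basal area increment at 1.3 m, tree height and DBH.
stem_incr <- data.frame(
  tree = c("T01", "T01"),
  year = c(1995, 1996),
  ba_incr = c(1.1, 1.5),
  height_m = c(12.4, 12.9),
  dbh_cm = c(14.2, 14.6)
)

ksr_table <- knot_incr %>%
  group_by(tree, year) %>%
  summarise(knot_area_incr = sum(area_incr), .groups = "drop") %>%
  inner_join(stem_incr, by = c("tree", "year")) %>%
  mutate(KSR = knot_area_incr / ba_incr,   # knot-to-stem area increment ratio
         HD  = height_m / dbh_cm)          # slenderness (m/cm), a competition surrogate

print(ksr_table)
```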
Since values of KSR_i,t were continuous and non-negative, KSR_i,t was modelled with a gamma distribution and a log link:

ln(KSR_i,t) = a_1 + a_2 HD_i,t + a_3 Age_i,t + δ_i + ε (2)

where ln(KSR_i,t) is the natural logarithm of the knot to stem ratio in a given year t, Age_i,t is the age of the tree (years), a_1, a_2, a_3 are the model parameters, δ_i is the random effect for each tree (i), and ε is the residual error of the model. Next, we examined the effect of KSR_i,t on the number of new branches produced in a given year by fitting a Poisson regression model, with a log link, describing the number of new branches per stem as a function of KSR_i,t, tree age and their interaction:

ln(NBR_i,t) = b_1 + b_2 KSR_i,t + b_3 Age_i,t + b_4 KSR_i,t Age_i,t + δ_i + ε (3)

where ln(NBR_i,t) is the natural logarithm of the number of new branches per stem in a given year, b_1, b_2, b_3, b_4 are the model parameters, and all other variables are as previously defined. The models presented in Eqs. (2) and (3) were fitted using the glmer function in the lme4 library (Bates et al., 2014) of the R statistical programming environment (R Core Team, 2014). In model fitting, we began by screening all potential tree-level explanatory variables and biologically plausible interaction terms. Variables were selected after calculating the variance inflation factors (VIF), to address any potential multicollinearity issues (O'Brien, 2007). Variables that were highly correlated (VIF > 4) were excluded from the models. Variable selection for Eqs. (2) and (3) was the result of a backwards elimination process in which the selection was based on Akaike's information criterion (AIC) (Akaike, 1974). Chi-squared-based likelihood ratio tests were used to evaluate the significance of terms that were successively dropped from the model. In the absence of a significant difference (p > 0.05), the simplest model was retained. Parameter estimates were obtained using the maximum likelihood method. Individual knot models Next, statistical models were developed to describe the temporal evolution of the morphology of individual knots using annual ring- and tree-level characteristics as independent variables. Initially, we attempted to fit a single model describing both trajectory (Z_i,j,t) and knot diameter (D_i,j,t) simultaneously, thereby reconstructing the entire knot in a single step. However, this led to an underestimation of knot diameter in the first years of growth that carried over for the entire knot profile. Therefore, separate models were developed for each component. Individual knot diameter and trajectory models were fitted to the data from a random selection of 75% of the total population of knots, while the remaining data were used for model evaluation. Knot diameter model. We observed relatively consistent patterns in the diameter development of the knots. There was a rapid increase in diameter increment in the first three years of knot growth, followed by a gradual decline of growth until branch death (Fig. 4A). On average, branch increments reached zero at around year 25. We hence divided each diameter profile into three sections: (1) the initiation section (years 0-3), (2) the growth section (years 4-25) and (3) the stable or declining section (years > 25). In the initiation section, because ΔD_i,j,t values did not follow a Gaussian distribution, D_i,j,t was modelled directly. In the remaining two sections ΔD_i,j,t was used as the response variable. Knot characteristics at time t − 1 were used to make predictions at time t. This ensured a smooth transition between the different sections of the model.
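Before turning to the general form of the knot-level models, the tree-level fits of Eqs. (2) and (3) described above can be sketched with glmer as follows; the data frame (an extension of the hypothetical ksr_table from the previous sketch, with Age and NBR columns) and the exact fixed-effect formulas are assumptions based on the text, not the authors' code.

```r
# Illustrative glmer calls matching the description of Eqs. (2) and (3);
# `ksr_table` is assumed to hold one row per tree and year with KSR, HD, Age and NBR.
library(lme4)

# Eq. (2): KSR with a gamma distribution and log link, random intercept per tree.
m_ksr <- glmer(KSR ~ HD + Age + (1 | tree),
               family = Gamma(link = "log"),
               data = ksr_table)

# Eq. (3): number of new branches as a Poisson count, with the KSR x Age interaction.
m_nbr <- glmer(NBR ~ KSR * Age + (1 | tree),
               family = poisson(link = "log"),
               data = ksr_table)

summary(m_ksr)
summary(m_nbr)
```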
After the variable selection process, the general form of the knot diameter model for each section was expressed as Eq. (4), where GU_pos,i,j is the relative position of the knot initiation point along the GU (it varies from 0 at the base to 1 at the stem apex, and is used to take the phenomenon of acrotony (Powell, 1995) into account), RW_i,j,t is the ring width of the stem at the location of the knot in year t, δ_i and δ_i,j are the tree- and knot-level random effects and ε is the residual error. All other variables are as previously defined. Knot trajectory model. The average annual variation of ΔZ_i,j,t was typically positive until approximately ring 40. After this point the trajectory stabilized, before decreasing (Fig. 4B). The knot trajectory profiles were therefore separated into two sections delineated at ring number 50. Characteristics of the knots in year t − 1 were also included in this model, thus ensuring a smooth transition between the sections. Various combinations of the explanatory variables were used in each section of the model. The general form of the knot trajectory model for each section was expressed as Eq. (5), where all variables are as previously defined. See Table 2 for a full description of all variable names used in the models. These models were fitted using functions contained in the nlme library of the R statistical programming environment (R Core Team, 2014). A power variance function of annual ring number from the pith at the level of each knot (RN) was included to account for heteroscedasticity in the model residuals. In addition, a continuous first-order auto-regressive term (AR1) was added to account for autocorrelation between successive measurements. The model fitting process started by including a full set of potential ring-, knot- or tree-level explanatory variables, and model selection was performed using the same backwards elimination procedure as described in the section on tree-level models. Simulations To analyse the influence of tree growth and competitive status on knot development, we reconstructed a single knot at 6.1 m using the predictions from Eqs. (4) and (5) and the stem and growth characteristics of tree T10. Then, while keeping tree height constant, we increased the annual ring increments by 50%. The diameter and trajectory profiles of the original knot were then recalculated. The process was repeated by decreasing the annual stem increments of the same tree by 50% of their actual values and again predicting knot morphology. In a second simulation, all knots from a 1.5-m section starting at a height of 2.5 m in tree T04 were simulated using Eqs. (4) and (5) and compared to the real knots, as extracted from the CT images. For this simulation we used the known insertion points along the stem and azimuthal orientation of each knot. Where appropriate, the year at which a knot was observed to be completely occluded by the growing stem was used as the end-point of the simulation. Tree-level models The knot to stem increment ratio (KSR_t) varied considerably with tree age. On average, KSR_t was higher when trees were young and decreased rapidly in the first few years, before stabilizing (Fig. 5, in which the horizontal red line marks equality between the total annual knot increment and the stem increment at 1.3 m, i.e., KSR = 1; see also Eq. (2) and Table 3). The rate of the initial decrease varied among trees. Values of KSR_t greater than 1 indicated that, in a given year, the total knot basal area increment exceeded that of the stem.
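The error structure described above for the knot-level models (nested tree/knot random effects, a power variance function of ring number and a continuous first-order autoregressive term) can be written in nlme roughly as below; the fixed-effect formula and the knot_data frame are placeholders, since the exact predictors of Eqs. (4) and (5) are given only in Tables 4 and 5.

```r
# Sketch of the nlme specification described in the text; variable names are hypothetical.
library(nlme)

m_knot <- lme(dD ~ dD_prev + RW + GU_pos,                      # placeholder fixed effects
              random = ~ 1 | tree/knot,                        # tree- and knot-level random effects
              weights = varPower(form = ~ RN),                 # heteroscedasticity over ring number
              correlation = corCAR1(form = ~ RN | tree/knot),  # continuous AR(1) along ring number
              data = knot_data,
              method = "ML")

summary(m_knot)
```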
In addition to the negative relationship with tree age, the KSR_t ratio was positively related to HD_t, such that more slender trees allocated relatively more biomass to their branches than to the main stem (Fig. 6). Furthermore, in a given year, the predicted number of new branches produced was greater in trees with higher KSR_t values, but the effect of KSR_t decreased with increasing tree age (Eq. (3) and Table 3). In some trees, KSR values showed large interannual fluctuations from the general trend (Fig. 5). The 3D reconstructions of the stem and knots for two of these trees showed large deviations of the pith of the main stem, likely a result of leader loss or stem damage. While one of these trees retained apical dominance in a single leader (T01), the other produced a fork (T09; Fig. 7). The model produced a good fit to all trees except tree T03, although visual examination of the 3D reconstruction of this stem revealed no obvious explanation for the lack of fit. Table 4 shows the fixed effects parameter estimates and standard errors for each section of the final knot diameter model (Eq. (4)). To evaluate the model, knot diameter profiles were predicted and compared to observations in the evaluation dataset. Plots of the raw residuals (observed minus predicted values) showed that, on average, knot diameter was slightly underestimated in the middle section of the knot profiles, but overall the model was unbiased (Fig. 8A). The mean absolute error was 0.031 and the root mean square error (RMSE) 0.054. When the profile of each knot in the database was reconstructed by adding successive annual diameter predictions, the absolute value of 50% of the residuals was less than 2.6 mm along the pith-to-bark profiles, while the absolute value of 90% of the residuals was less than 9.7 mm. Table 5 shows the fixed effects parameter estimates and associated standard errors for each section of the final model of knot trajectory (Eq. (5)). Again, predictions of knot trajectory profiles were compared to observations in the evaluation dataset. On average, the model was unbiased along the knot profile up to ring 75, with a slight overestimation beyond this point (Fig. 8B). The mean absolute error for this model was 0.118 and the root mean square error (RMSE) 0.189. When the profile of each knot was reconstructed by adding successive annual trajectory predictions, the absolute value of 50% of the residuals was less than 11.9 mm along the entire pith-to-bark profiles, while the absolute value of 90% of the residuals was less than 36.7 mm. Table 4. Fixed effects parameter estimates and standard errors for each section of the knot diameter model given by Eq. (4). Section 1, knot initiation (1-3 years); Section 2, growth phase (4-25 years); Section 3, stabilisation and death (>25 years). Section 1 predicts the diameter and sections 2 and 3 predict the diameter increment. Simulations When we used the dimensions and growth of a real tree (T10) to simulate knot growth, the diameter increments in the early years of knot development were positively related to the radial growth of the main stem. However, knot longevity was reduced when the radial growth was artificially increased (and thus the HD ratio decreased). Knot growth ceased at ring 19 for the elevated growth scenario, but it was maintained along its entire profile (47 years) when stem growth was reduced (Fig. 9). In the real growth scenario, knot diameter increments began to decline around ring 25.
Tree HD ratio also had a significant effect in the first section of the knot trajectory model, although the effect was only apparent in the lower stem (not shown). Figure 8. Residuals for (A) knot diameter (Eq. (4) and Table 4) and (B) knot vertical position (Eq. (5) and Table 5). The grey line indicates the median of all observations for a given ring number. Contours provide the distribution around the median. In the second simulation we reconstructed all knots in a 1.5-m section of tree T04. This showed that although the diameter of larger knots was slightly underestimated, the models generally produced accurate simulations of the diameter and shape of real knots. However, the models produced less variation in knot insertion angle than was observed in reality (Fig. 10), which would likely explain the larger residuals of the trajectory model. Resource allocation This study provides further support to the idea that allocation of above-ground carbon assimilates in trees is directed towards locations where the potential return is the highest (Sprugel, 2002). Figure 9. Simulations of a single knot from Eqs. (4) and (5) at 6.1 m on the main stem. Stem increments of tree T10 were used as the reference level for input parameters. (A) Radial growth decreased by 50%; (B) reference level; and (C) radial growth increased by 50%. Real height growth from tree T10 was used for all simulations. The knot was assumed to have died when diameter increments reached zero. Red, live section; blue, dead section. To maintain a favourable position in the canopy, trees subjected to high levels of competition prioritize height growth over secondary radial growth (Lanner, 1985). Consequently, at a given age, the HD ratio is a useful predictor of assimilate partitioning among tree organs (West, 1993; King, 2005; McCarthy & Enquist, 2007). Despite large variation in annual knot growth, even among similar-sized trees, the ratio of knot to stem area increment (KSR) was shown to decrease systematically with tree age. Similar ontogenetic effects have been highlighted by Wilson (1988) to describe changes in shoot:root ratio as a plant grows. Under the assumption that stem or branch area increments are proportional to biomass accumulation, the observed correlation between KSR and HD indicates a shift in assimilate allocation towards branches when tree growth is constrained by competition. Likewise, Vincent (2006) found that lower light levels were associated with an increase in leaf life span, while King (1997) showed that the percentage of biomass allocated to branches was higher in understory seedlings than in those growing in large gaps. A similar concept of functional balance has also been used to explain the decrease in shoot:root ratio when soil nutrients are a limiting factor (Génard et al., 2007). Under the principles of teleonomy, these may be seen as adaptive responses of trees to environmental factors, which would optimize their growth and survival probability (Lacointe, 2000). In this study, annual reconstructions of stem and branch development suggested that KSR values were also positively related to the number of new branches initiated in a growth unit. This is in agreement with the principles highlighted above, but it appears to contradict a common result of empirical branch distribution models, which is that vigorous trees tend to initiate more branches in a given year (Maguire, 1994; Mäkinen & Colin, 1999; Hein et al., 2007). However, these studies typically presented models for the number of nodal branches, i.e., those forming a pseudo-whorl (Fisher & Honda, 1979).
Furthermore, in models that consider both nodal and internodal branches, smaller branches (<5 mm) are usually ignored (Colin & Houllier, 1991;Auty et al., 2012). An advantage of using CT scanning technology is that all the knots were identifiable, including those that were occluded within the stems. Furthermore, the identification of annual growth units along the stem was made easier because it was possible to locate, with some certainty, the initiation point of branches at the stem's pith (Duchateau et al., 2013b). The relationship of knot growth to HD ratio could be clearly seen in the simulations of individual knot growth. An increase in HD ratio led to smaller but longer-lived knots. When coupled with our finding on branch initiation, this result is in agreement with the negative relationship between the number of branches and their size presented by West, Enquist & Brown (2009). Throughout the simulation, each knot was first located at the top of the stem but its position relative to the stem's apex shifted as the tree grew in height. Therefore, in the slower growth scenario, the fact that the knot was still growing at the end of the simulation implies a slower rate of crown recession. A lower crown base in trees subject to high competition is consistent with previous results (Sprugel, 2002;Valentine et al., 2013) and offers further support for Milton's Law of resource availability and allocation. Sprugel's (2002) choice of name for this principle made reference to poet Milton's (1667) phrase, "Better to reign in hell than serve in heaven." He used this analogy to highlight the fact that although branches in light-favored conditions will tend to grow faster, a shaded branch on a shaded tree is more likely to survive and grow than a similarly-shaded branch on a dominant tree. Our model provides a time-series illustration of this principle. The vigorous growth of the knot in the first 10-15 years of the accelerated growth scenario suggests that the carbon budget of the branch was more positive than branches simulated in slow growth scenarios. Despite this, branch growth ceased earlier in the accelerated growth scenario. Clearly, such behaviour could not be predicted based on individual branch carbon budgets, which leads us to question the applicability of the branch autonomy principle when modelling branch growth. Modelling knot development Previous studies have represented the dead portion of knots as a cylinder to reflect the cessation of growth (Björklund, 1997;Lemieux, Beaudoin & Zhang, 2001;Moberg, 2001). However, around 40% of knots in our sample data had declining diameter profiles in the outer stem, presumably as a result of branch deterioration after death. We accounted for this trend in the knot diameter model by allowing negative growth predictions (Fig. 9). The inclusion of the diameter and trajectory increments of the previous year as predictor variables allowed for smooth transitions between the knot sections, which provided realistic knot shapes. Furthermore, analysis of the model residuals showed that the models were relatively unbiased and generally accurate. In the second simulation, annual predictions of knot diameter and trajectory produced realistic reconstructions of the real knot profiles using the known insertion point, orientation and year of occlusion of each knot. 
Models that can predict the vertical and azimuthal distribution of branches within a growth unit, as well as the initial insertion angle of each branch in the main stem, will provide even more realistic stem profiles. Even further improvements could be gained from the addition of a self-pruning sub-model. The interpretation of our results on knot and stem allocation should therefore focus on general, long-term trends rather than on inter-annual variation. In fact, the long-term trends presented at the stem level should be more robust, since they aggregate information from a large number of individual knot profiles. CONCLUSION This study has provided an improved representation of the internal structure of tree stems by linking knot development with stem growth. The use of CT scanning data allowed us to reconstruct knot and stem ontogeny with unprecedented detail over a substantial time period. We have found evidence for increased allocation to branches under conditions that limit the secondary growth of the stem, which indicates that branches are non-autonomous entities. We have also produced a model of individual knot morphology that could provide greater precision in the representation of knots in FSTMs, thus expanding their applicability to the wood processing sector.
8,169
2015-04-09T00:00:00.000
[ "Biology", "Environmental Science" ]
The American Paddlefish Genome Provides Novel Insights into Chromosomal Evolution and Bone Mineralization in Early Vertebrates Abstract Sturgeons and paddlefishes (Acipenseriformes) occupy the basal position of ray-finned fishes, although they have cartilaginous skeletons as in Chondrichthyes. This evolutionary status and their morphological specializations make them a research focus, but their complex genomes (polyploidy and the presence of microchromosomes) bring obstacles and challenges to molecular studies. Here, we generated the first high-quality genome assembly of the American paddlefish (Polyodon spathula) at a chromosome level. Comparative genomic analyses revealed a recent species-specific whole-genome duplication event and extensive chromosomal changes, including head-to-head fusions of pairs of intact, large ancestral chromosomes within the paddlefish. We also provide an overview of the paddlefish SCPP (secretory calcium-binding phosphoprotein) repertoire that is responsible for tissue mineralization, demonstrating that the earliest flourishing of SCPP members occurred at least before the split between Acipenseriformes and teleosts. In summary, this genome assembly provides a genetic resource for understanding chromosomal evolution in polyploid nonteleost fishes and bone mineralization in early vertebrates. Introduction Since the first fish genome, that of the fugu, was released in 2002 (Aparicio et al. 2002), more than 60 fish genomes have been published (Ravi and Venkatesh 2018; Bian et al. 2019). The spotted gar (Braasch et al. 2016) and the sterlet (Du et al. 2020) are the only nonteleost ray-finned fishes reported to date. Acipenseriformes (sturgeons and paddlefishes), an important order of nonteleosts, is estimated to have originated 300-350 Ma or even earlier (Hughes et al. 2018). There are only two extant paddlefish species, the Chinese paddlefish (Psephurus gladius, declared functionally extinct very recently; Mei et al. 2020; Zhang et al. 2020) and the American paddlefish (Polyodon spathula). Therefore, as perhaps the only living species within the family, the American paddlefish is valuable as a representative species for understanding early vertebrate evolution. The evolution of vertebrate ancestors was accompanied by two rounds (1R and 2R) of whole-genome duplication (WGD; Dehal and Boore 2005). A third WGD (3R), which occurred about 320 Ma, has been defined in teleosts (Vandepoele et al. 2004), which account for more than 99% of all ray-finned fishes (Actinopterygii), but not in the basal fishes, including sturgeons and paddlefishes. However, Acipenseriformes is known to be the only lineage among the basal fishes with its own lineage-specific WGDs, which happened more recently (Vandepoele et al. 2004; Crow et al. 2012). It is also believed that the WGDs that occurred in paddlefishes and in sturgeons are two independent events, based on studies of Hox clusters and several other genes (Crow et al. 2012; Cheng et al. 2019). Therefore, more genomic studies are required to verify the existence and timing of these WGDs, and to interpret the subsequent effects caused by such lineage-specific events. One consequence of WGD is an increased number of chromosomes. The American paddlefish has a significantly higher chromosome number (2n = 120; Symonová et al. 2017) than other fishes (most with either 48 or 50 chromosomes; Mank and Avise 2006), an interesting feature shared with other Acipenseriformes species.
Previous studies reported that paddlefish and sturgeon genomes contain many small dot-like chromosomes (defined as microchromosomes) that are significantly different from the relatively longer microchromosomes in birds and reptiles (Deakin and Ezaz 2019; O'Connor et al. 2019). However, there is no clear boundary between macro- and microchromosomes in paddlefishes and sturgeons, and the causes of such an interesting pattern are not well known, although many efforts have been made in previous karyotypic studies (Symonová et al. 2017). Sturgeons and paddlefishes have been referred to as "living fossils" due to their conserved evolution and few morphological modifications (Liu et al. 2018). Although they are ray-finned fishes, they present many morphological similarities with the sharks in Chondrichthyes, especially their almost entirely cartilaginous skeletons (Davesne et al. 2020). The cause of such an ancient phenotype is unclear, but the cartilaginous nature of these fishes is thought to be a derived character, since sturgeon ancestors had bony skeletons (Helfman et al. 2009). There is a hypothesis that the absence of secretory calcium-binding phosphoprotein (SCPP) genes is responsible for the absence of bone from the endoskeleton of cartilaginous fishes (Venkatesh et al. 2014). However, whether this hypothesis is applicable to the ray-finned paddlefishes and sturgeons needs further investigation. Nonetheless, the paddlefish genome has remained largely unexplored due to its polyploidy and the presence of many microchromosomes, which hinders in-depth evolutionary and biological studies of this threatened and commercially valuable fish. Therefore, in the present study, we performed whole-genome sequencing to obtain a high-quality genome assembly of the American paddlefish at a chromosome level. With this genome and the results from comparative genomic analyses, we attempted to answer the following critical questions: 1) What is the chromosomal evolutionary pattern in paddlefish? 2) How were chromosomes rearranged after the independent lineage-specific WGDs in paddlefish and sterlet, in comparison to the spotted gar, which experienced neither the TGD (teleost genome duplication; Bian et al. 2016) nor a species-specific WGD? 3) Do the previously reported bone mineralization-related SCPP genes exist in the American paddlefish and the sterlet? Summary of the Primary Genome Assembly and Annotation We applied both short and long reads to generate the genome assembly of the American paddlefish. In total, our sequencing of 462.3 Gb of raw data (supplementary table S1, Supplementary Material online) gave a coverage of about 300× over the 1.56-Gb estimated genome size (supplementary fig. S1, Supplementary Material online), which was based on a 17-mer analysis (Liu et al. 2013). After initial contig construction, long-read-based scaffolding, and additional scaffold connection, we obtained a final assembled genome of 1.54 Gb, accounting for 98.7% of the estimated size, with a contig N50 length of 4.30 Mb and a scaffold N50 of 4.86 Mb (supplementary table S2, Supplementary Material online). From the GC distribution, we observed that the reads used for the genome assembly displayed a homogeneous GC content, indicating good quality without contamination (supplementary fig. S2, Supplementary Material online). In a BUSCO validation, the total completeness of the primary genome assembly was estimated to be 93.7%, including 50.9% single-copy BUSCOs and 42.8% duplicates.
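The sequencing depth quoted above follows from simple arithmetic, and the same logic underlies the 17-mer genome-size estimate; in the sketch below the raw-data and genome-size figures come from the text, while the k-mer totals are purely illustrative (the remaining BUSCO categories are listed just after this aside).

```r
# Back-of-the-envelope checks on the assembly statistics quoted above.
raw_bases_gb   <- 462.3                       # total raw sequencing data (Gb), from the text
genome_size_gb <- 1.56                        # 17-mer based genome size estimate (Gb), from the text
print(round(raw_bases_gb / genome_size_gb))   # ~296x, consistent with the ~300x quoted

# Principle behind a k-mer based genome size estimate:
#   genome size ~ total number of k-mers / k-mer depth at the main peak.
# Both numbers below are invented for illustration, not the paddlefish values.
total_17mers <- 4.3e11
peak_depth   <- 275
print(total_17mers / peak_depth)              # ~1.56e9 bp with these toy numbers
```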
The fragmented BUSCOs were estimated at 2.3%, and the rest (4.0%) were missing BUSCOs (supplementary table S3, Supplementary Material online). Chromosome-Level Genome Assembly We applied Hi-C technology to construct the chromosomes of the American paddlefish on the basis of the final assembly. A total of 99.3 Gb of raw reads was produced from the BGISEQ-500 platform and aligned to the assembled contigs after filtration. The contact counts between contigs were calculated and normalized (fig. 1). Following a previous report (Symonová et al. 2017), we set the chromosome number to 60 pairs (2n = 120). Strangely enough, the aligned contigs were anchored into only 26 chromosomes, along with a mosaic region on the chromosome contact map (fig. 1A). Considering that the American paddlefish genome contains 26 pairs of macrochromosomes, we assumed that these 26 distinguishable clusters with clear boundaries on the contact map (fig. 1B) were macrochromosomes (numbered Chr1 to Chr26), whereas the ambiguous mosaic region (fig. 1C) was assumed to contain all the microchromosomes, which were too short to be clearly distinguished (fig. 1A). In order to test our hypothesis, we extracted the 26 distinguishable regions in those scaffolds with the clustering, ordering, and orientating information to be reassembled from the previous genome assembly. Interestingly, these putative macrochromosomes (fig. 1B) fig. S3A and B, Supplementary Material online) due to more exons in each gene and larger intron sizes (supplementary fig. S3C, Supplementary Material online). The sequence lengths of our assembled 60 chromosomes and the physical chromosomal sizes measured by karyotyping (Symonová et al. 2017) were highly correlated (R² = 0.98; fig. 1D). Genome Evolution To study the potential evolutionary pattern of American paddlefish chromosomes, we first performed an intraspecific chromosomal comparison. We observed that the majority of the chromosomes had synteny blocks (≥2 kb) with the other chromosomes, except for several microchromosomes (fig. 2A and supplementary fig. S6, Supplementary Material online). Previous studies verified that the spotted gar has very conserved chromosomes in comparison to other model vertebrates (Braasch et al. 2016); we thus aligned our assembled American paddlefish genome against the chromosomes of the spotted gar to explore potential chromosomal rearrangements. Based on our interspecific comparisons, we observed that most regions in the macrochromosomes and some of the microchromosomes of the American paddlefish could be localized onto those of the spotted gar (fig. 2B). Most gar chromosomes have two counterparts in paddlefish, similar to the chromosomal comparison between the gar and the sterlet (fig. 2C). More specifically, the three longest pairs of macrochromosomes of the American paddlefish could be aligned to the three corresponding pairs of gar chromosomes (LG2 and LG4, LG9 and LG11, LG1 and LG16). For example, gar LG2 and LG4 fused head-to-head to form paddlefish Chr1, and also to form the duplicated Chr2 generated by the WGD. Similarly, Chr3/Chr4 is a fusion of gar LG9 and LG11, followed by intrachromosomal rearrangements. Interestingly, gar LG1 and LG16 fused to form paddlefish Chr5/Chr6, with gar LG1 subsequently undergoing fission to form the microchromosome Chr29/Chr31 (fig. 2E and supplementary fig. S9, Supplementary Material online).
Given the conserved status of the spotted gar, we speculate that the American paddlefish may have experienced extensive chromosomal rearrangements during its evolution. Combined with the above intraspecific findings, it seems that, although independent lineage-specific WGD events happened after their divergence, the American paddlefish and the sterlet still share certain common evolutionary patterns in their chromosomes and genome sequences (fig. 2D). Phylogeny and Divergence Time of Species and Chromosomes To estimate the phylogenetic relationship of the paddlefish and sterlet in relation to other vertebrates, we selected 702 single-copy orthologous genes in 24 species, totaling 1,475,187 aligned sites. It seems that the sturgeon-specific WGD event happened more recently than the TGD, although a consensus on the exact timing has not yet been reached (Crow et al. 2012; Cheng et al. 2019; Du et al. 2020). Our findings from the present study provide additional evidence for such a recent event. Prediction of Complete Hox Clusters A total of 75 Hox genes distributed in seven clusters were identified in the American paddlefish genome. The two complete HoxA clusters were mapped onto Chr3 and Chr4, whereas the two HoxD clusters were localized onto Chr10 and Chr11 (fig. 3B and C). We also identified two HoxB clusters and one HoxC cluster on Chr12, Chr28, and Chr53 (fig. 3B). To further evaluate the accuracy of our assembly, we determined that the four previously published BAC clones of Hox clusters (Crow et al. 2012) displayed a high degree of coverage by our present chromosome-level assembly (fig. 3C). In detail, 100%, 98.7%, 89.1%, and 100% of the sequences from BAC352P4 (HoxAa), BAC370N10 (HoxAb), BAC231C24 (HoxDa), and BAC249G23 (HoxDb) were covered, respectively. The high coverage between our data and these previously reported clones supports the high reliability of our chromosome-level assembly for the American paddlefish. SCPP Genes Uncovered in the Early Vertebrates Paddlefishes and sturgeons are good models for studying bone mineralization, since they retain a relatively primitive phenotype but have derived cartilaginous skeletons (as in sharks) despite their ancestors having bony skeletons (Helfman et al. 2009). The spotted gar seems to have the largest number of bone mineralization-related SCPP genes (38 in total) identified to date (Braasch et al. 2016; Kawasaki et al. 2017), which is reasonable since it has ganoid scales, heavily ossified bones, and a full set of teeth. In the present study, we identified 25 and 27 SCPP genes (including the ancient SPARC genes) in the American paddlefish and the sterlet, respectively (fig. 4). By further BLAST searching of 40 genes neighbouring spp1, spanning about 3 Mb in the spotted gar genome (supplementary table S10, Supplementary Material online), against the assembled chromosomes of the American paddlefish, we identified 36 and 38 genes neighbouring the spp1-1 and spp1-2 genes, respectively, with high correlations (fig. 5), strongly indicating the existence of two putative spp1 genes in the American paddlefish genome. Two spp1 sequences with a conserved RGD motif (an integrin-binding Arg-Gly-Asp motif) were also successfully cloned from paddlefish genomic DNA (fig. 5 and supplementary fig. S17 and table S11, Supplementary Material online).
Our results indicated that, unlike the role spp1 plays in shark and zebrafish (Venkatesh et al. 2014), other members of the SCPP family, or even other gene families, might be involved in the reversion from a bony to a cartilaginous skeleton in the paddlefishes and sturgeons.

Resolution of a Complex Chromosome-Level Genome Assembly Using Hi-C Data

In this study, we have provided a model and an example of using Hi-C data to assemble a complex fish genome with a large number of variable chromosomes. The American paddlefish genome contains 120 chromosomes (Symonová et al. 2017), and thus it was a formidable challenge to perform a cytogenetic analysis. A karyotypic test estimated that the genome consists of 48 macrochromosomes and 72 microchromosomes (Dingerkus and Howell 1976). Another more recent study with cytogenetic markers suggested that there were 54 macrochromosomes and 66 microchromosomes in the American paddlefish (Symonová et al. 2017). In these studies, however, the boundary between macrochromosomes and microchromosomes seems to be unclear. Our present chromosome-level assembly based on additional Hi-C data showed that the haploid paddlefish genome comprised 26 identifiable macrochromosomes and 34 microchromosomes (fig. 1), which is very close to the estimated 54 + 66 (2n) chromosomes from the previous karyotypic analysis, and the lengths of the assembled chromosomes were highly correlated with the measured physical sizes (Symonová et al. 2017). The overall similarity in both size and number between the Hi-C assembled and physically tested genomes confirmed the existence of both macro- and microchromosomes in the American paddlefish, which is also a shared feature in the genomes of sturgeons. The present study provides a practical solution for any chromosome-level assembly of a complex fish genome. Our results illustrate the possibility of reconstructing the ancestral Acipenseriformes chromosomes for further understanding the origin of paddlefishes and sturgeons. In the current study, with the intraspecific and interspecific comparisons between the American paddlefish, sterlet, and spotted gar, we delineated possible evolutionary processes of the American paddlefish chromosomes based on the whole-genome comparisons. In the intraspecific comparisons, many duplicated regions were identified between the chromosomes. However, unlike the obvious one-to-one syntenic relationship of all paired chromosomes in the common carp (Xu et al. 2014), one-to-one synteny conservation was only observed between the three largest pairs of macrochromosomes (fig. 2A and supplementary fig. S4, Supplementary Material online), validating the lineage-specific WGD event in the American paddlefish (Symonová et al. 2017). In addition, each pair of these paralogous chromosomes has similar repeat content, showing no evidence for allopolyploidy (supplementary fig. S18, Supplementary Material online). Extensive interchromosomal changes happened thereafter, but rearrangements mainly occurred on the smaller macrochromosomes (Chr7-Chr26). In the interspecific comparison, the American paddlefish displayed an intricate relationship with the spotted gar, whose genome has conserved, in content and size, many entire chromosomes (n = 29) from bony vertebrate ancestors (Braasch et al. 2016).
Interestingly, the alignment did not clearly reveal the expected one-to-two relationship between the spotted gar and the paddlefish chromosomes; instead, a two-to-two pattern was identified between the two largest pairs of the paddlefish macrochromosomes and the corresponding linkage groups of the spotted gar, possibly due to the fusion of two ancestral chromosomes (fig. 2E). Gar LG1 and LG16 map to paddlefish Chr5 and Chr6 as well as Chr29 and Chr31, showing a two-to-four pattern, which is a consequence of the fusions mentioned above followed by a fission of the ancestral chromosome related to gar LG1, leading to the formation of paired microchromosomes in the American paddlefish. Furthermore, this chromosomal evolution pattern was also found in the sterlet, and helped us to deduce the Acipenseriformes ancestral chromosomes, which include large macrochromosomes fused from two ancient chromosomes and microchromosomes that had been fissioned from a single chromosome (fig. 2E). Interspecies chromosomal comparison between the American paddlefish and the sterlet shows homology between the two fish species (fig. 2C and D). Not only macrochromosomes (supplementary fig. S7, Supplementary Material online) but also microchromosomes (supplementary fig. S8, Supplementary Material online) were highly conserved in some regions along the chromosome, confirming the low evolutionary rate of Acipenseriformes species. Similar to the sterlet, the American paddlefish also had chromosome losses and rearrangements (fig. 2A and …). Therefore, taking these genomic comparisons into consideration, we hypothesize that there were extensive chromosomal rearrangements in the American paddlefish both before and after the WGD event.

Phylogeny and Divergence Time of the American Paddlefish and Chromosomes

Paddlefishes have retained some primitive characteristics, including the skeleton, heterocercal fins, and body shape. Previous molecular studies based on single or multiple mitochondrial or nuclear genes supported a basal phylogenetic position within Actinopterygii (Hughes et al. 2018). Our present data, based on orthologs from whole genomes, further validated this basal status in Actinopterygii. Meanwhile, the phylogenetic branch of the American paddlefish presented a similar length to that of the sterlet, suggesting a slow evolutionary rate comparable to that previously estimated for the sterlet and to that of the spotted gar, which is considered the most slowly evolving fish except for the coelacanth (Braasch et al. 2016). It seems that this slow evolutionary rate is consistent with the morphological conservation in the American paddlefish.
With fossil-calibrated dating of the whole-genome orthologs-based phylogeny, we estimated that the ancestor of paddlefishes and sturgeons originated about 314.9 Ma, which is consistent with previous molecular studies (Hughes et al. 2018). Time-calibrated phylogenies of each pair of the identified homologous macrochromosomes revealed a relatively recent WGD event in the American paddlefish about 46.6-54.1 Ma, consistent with the previous estimate of about 42.7 Ma based on the HoxA gene cluster (Crow et al. 2012). However, this estimate might be quite far off the time when the event actually happened, owing to delayed rediploidization (Robertson et al. 2017). Nonetheless, it is earlier than the reported 21.3 Ma, or much later than the reported 180 Ma, for the sterlet WGD. Thus, it is necessary to carry out more analyses to confirm the exact dates of the independent WGD events in the two families within the Acipenseriformes. In addition, all three topologies support the divergence of species before the divergence of each pair of the identified homologous chromosomes, suggesting that the WGDs of the paddlefish and sterlet were two independent events. Additional 4dTv analysis also shows two different peaks for the two species, indicating different occurrence times of the two WGDs (supplementary fig. S19, Supplementary Material online). However, due to the limitations of both phylogenetic and 4dTv analyses, the current results cannot rule out a shared WGD.

SCPP Genes in the American Paddlefish

The discovery of SCPP genes in the paddlefish and sterlet indicates that the earliest flourishing of this family occurred at least before the split between Acipenseriformes and teleosts. SCPP genes can be classified into two groups: the acidic genes are involved in the formation of bone and/or dentin, and the Pro/Gln (P/Q)-rich genes are related to the formation of enamel or enameloid matrix, being mostly expressed in skin and scales (Kawasaki et al. 2017). Paddlefish and sterlet retain most of the acidic SCPPs except for dmp1, a gene that functions in the mineralization of bone and dentin (Ling et al. 2005). This might be one cause of the special cartilaginous phenotype of Acipenseriformes fishes. However, these fishes had fewer P/Q-rich SCPPs compared with the spotted gar (fig. 4). It seems that they lost the whole cluster of P/Q-rich genes (mainly expressed in skin and scales, but not in teeth or bone) between sparcr1 and spp1, as in tetrapods, suggesting that the cluster may have been first derived in the spotted gar. In the other cluster, adjacent to sparcl1, some genes were lost but some were retained. For example, the gene enam, crucial for formation of the enamel matrix of teeth (Deméré et al. 2008), has been lost in the toothless paddlefishes and sturgeons but exists in vertebrates with teeth (such as human, coelacanth, spotted gar, and zebrafish; fig. 4). In addition, both the American paddlefish and the sterlet apparently retained only one copy of each of the ancient sparc genes (sparcl1l1, sparcl1, and sparcr1) after the genome duplication, although one or more were lost in tetrapods and teleosts (fig. 4). Therefore, it is possible that non-teleost ray-finned fishes retain the largest number of ancient sparc genes. As an acidic member of the SCPP family, spp1 is mainly related to tissue mineralization, such as during tooth formation, bone formation, and potential scale formation (Kawasaki et al. 2017).
Many reports have shown that spp1 may play an essential role in bone formation in zebrafish, leading to the hypothesis that the absence of spp1 could account for the cartilaginous skeleton in Chondrichthyes (Venkatesh et al. 2014; Kawasaki et al. 2017). Our data strongly suggest the existence of two spp1 copies in the American paddlefish (and the sterlet), indicating that the hypothesis of spp1's responsibility for cartilaginous features may be incompatible with the American paddlefish.

Conclusions

Research on sturgeons and paddlefishes has long been a hot topic due to the special evolution, economic importance, and endangered status of these fishes. However, genomic studies have been greatly hampered by the extreme complexity of these genomes, with high chromosome numbers and various macro-/microchromosomes. Here, we provided the first chromosome-level genome assembly of the American paddlefish in the Acipenseriformes. The success of assembling 26 macrochromosomes and 34 microchromosomes in the haploid genome indicates that extensive chromosomal rearrangements, including fusions to form the macrochromosomes and fissions to form the microchromosomes, have occurred in this ancient fish. Most acidic SCPP genes were retained but some P/Q-rich genes were lost in the American paddlefish, providing new insights into the mineralization of bones, teeth, and scales of the early vertebrates.

Fish Collection and Species Identification

An artificially cultivated American paddlefish (about 5 years old, 1 m in snout-tail length, 3.5 kg in body weight) was sampled from a local hatchery in Taihu Station, Yangtze River Fisheries Research Institute (YFI), Chinese Academy of Fishery Sciences (CAFS), Wuhan City, Hubei Province, China. The fish was identified on the basis of both DNA barcoding (COI gene sequence) and morphological observation. All the fish handling and experimental procedures used in this study were approved by the Animal Care and Use Committee of the YFI of CAFS, China (Animal Welfare Assurance No. YF001).

DNA/RNA Extraction and Sequencing

Genomic DNA samples from either blood or muscle were collected from the same fish for whole-genome sequencing with standard protocols. We employed the routine whole-genome shotgun-sequencing strategy (Venter et al. 2001) to construct three short-insert (270, 500, and 800 bp) and four long-insert (2, 5, 10, and 20 kb) libraries, according to standard protocols from Illumina (San Diego, CA). Paired-end (PE) sequencing was carried out on an Illumina HiSeq 2500 platform (blood sample; PE125 for the 270-, 500-, and 800-bp libraries) and a HiSeq X Ten platform (muscle sample; PE150 for the remaining DNA libraries). Low-quality raw reads (more than 10 Ns, or rich in low-quality bases) were removed by SOAPfilter version 2.2 with optimized parameters (-y -p -g 1 -o clean -M 2 -f 0). Additional blood samples were collected for genomic DNA extraction using the traditional phenol/chloroform extraction method to perform PacBio long-read sequencing as reported in a previous study (Jiang et al. 2019). High-quality DNA was used to construct a SMRTbell library with an insert size of 30 kb and sequenced on a PacBio Sequel platform (Pacific Biosciences, Menlo Park, CA). To achieve an updated chromosome-level assembly, we applied the Hi-C method (Burton et al. 2013) to detect chromatin interactions in the American paddlefish nucleus.
First, we utilized the restriction enzyme MboI to digest genomic DNA from blood tissue after conformation fixing by formaldehyde and repaired the 5′ overhangs using a biotinylated residue. After ligation of blunt-end fragments in situ, the isolated DNAs were reverse-cross-linked, purified, and filtered for biotin-containing fragments. Subsequently, DNA fragment end repair, adaptor ligation, and PCR were performed, and a 400-bp insert library was constructed for sequencing on a BGISEQ-500 platform (BGI, Shenzhen, China) to generate short paired-end reads with a length of 100 bp (Huang et al. 2017). For gene annotation of the assembled genome, transcriptome sequencing was performed with blood tissue from the same American paddlefish. Total RNA was extracted with TRIzol Reagent (Invitrogen, Carlsbad, CA). A Nanodrop ND-1000 spectrophotometer (LabTech Int, East Sussex, UK) and a 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA) were used to check RNA quality, and two micrograms of verified RNA were used for library construction and transcriptome sequencing on an Illumina HiSeq 4000 platform.

Genome Size Estimation and De Novo Genome Assembly

The genome size of the American paddlefish was estimated based on the routine 17-mer depth frequency distribution analysis (Liu et al. 2013) using the short reads from the above-mentioned 500- and 800-bp Illumina libraries. Subsequently, a de novo genome assembly was generated using both the Illumina short reads and PacBio long reads. First, the Illumina short-insert (270, 500, and 800 bp) sequencing data were assembled into contigs with optimized parameters (-k 29 -d 0.3 -t 16 -m 300) by Platanus version 1.2.4 (Kajitani et al. 2014). The initial contigs were aligned against the PacBio long reads by DBG2OLC (Ye et al. 2016) to obtain consensus sequences that were further polished by Pilon version 1.22 (Walker et al. 2014). Next, PacBio reads were used to construct the primary scaffolds by SSPACE-LongRead (Boetzer and Pirovano 2014) based on the polished contig assembly. Illumina long-insert (2, 5, 10, and 20 kb) sequencing data were then used to connect the obtained scaffolds by SSPACE_Standard version 3.0 (Boetzer et al. 2011). Gaps within these scaffolds were eventually filled by GapCloser version 1.12 and GapFiller version 1.10 (Nadalin et al. 2012), and the obtained scaffolds were polished by Pilon (Walker et al. 2014) again to generate the final genome assembly of the American paddlefish. Completeness of the draft genome assembly was evaluated using BUSCO version 3.0.2 (Simão et al. 2015) with default parameters (-m genome -l actinopterygii_odb9 -c 8 -f -e 0.01).

Construction of a Chromosome-Level Genome Assembly Using Hi-C Technology

Hi-C raw data were first mapped to our genome assembly of the American paddlefish to remove nonmapped, duplicated, and invalid reads, with the remaining valid read pairs accepted by HiC-Pro version 2.2 (Servant et al. 2015) for further analysis. A chromosome contact matrix was constructed using interaction frequencies, which were calculated from the number of Hi-C paired-end reads mapped to the generated scaffolds. All interactions were clustered from the chromosome contact matrix. An original chromosome contact map displaying sequence clustering was generated, and an "AGP" (A Golden Path) file with both the position and direction of all clustered sequences was created by Juicer version 1.5 (Durand et al. 2016).
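The construction of a contact matrix from mapped read pairs can be sketched as follows. This is a simplified, hypothetical illustration (a single scaffold, fixed 1-Mb bins, toy read-pair positions, and a naive coverage normalization), not the matrix balancing performed by HiC-Pro or Juicer.

    # Minimal sketch: build a binned Hi-C contact matrix from mapped read-pair
    # positions on one scaffold (toy pairs; real input would be the valid
    # pairs retained after filtering).
    import numpy as np

    BIN_SIZE = 1_000_000          # 1-Mb bins
    scaffold_length = 10_000_000
    n_bins = scaffold_length // BIN_SIZE

    # toy valid pairs: (position of read 1, position of read 2) on the scaffold
    pairs = [(120_000, 310_000), (2_400_000, 2_900_000),
             (150_000, 9_700_000), (5_100_000, 5_600_000)]

    contacts = np.zeros((n_bins, n_bins), dtype=int)
    for p1, p2 in pairs:
        b1, b2 = p1 // BIN_SIZE, p2 // BIN_SIZE
        contacts[b1, b2] += 1
        if b1 != b2:
            contacts[b2, b1] += 1   # keep the matrix symmetric

    # crude coverage normalization so bins with more mapped reads do not dominate
    cov = contacts.sum(axis=0).astype(float)
    cov[cov == 0] = 1.0
    normalized = contacts / np.outer(cov, cov) ** 0.5
    print(normalized.round(3))

In the real pipeline this matrix is computed across all scaffolds, so that scaffolds belonging to the same chromosome stand out as blocks of elevated contact frequency on the contact map.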
In this step, we temporarily assigned the chromosome number as 60 pairs (2n = 120) based on previous studies (Symonová et al. 2017). According to the chromosome contact map, we identified the boundaries of each clustering block and manually checked their validity in the "AGP" file. Sequences representing the 26 distinguishable districts on the original map were retrieved from the file to create a contact map for all macrochromosomes. The rest of the sequences, forming a mosaic region on the original map, were applied to construct another contact map for all microchromosomes. In total, 60 pairs of chromosomes of the American paddlefish were fully recovered. In order to evaluate the accuracy and reliability of our genome assembly, we checked the relationship between the assembled size and the physical size (measured by karyotyping; Symonová et al. 2017) of each chromosome. Chromosomes were sorted by length from the shortest to the longest, and a correlation map was created to show their consistency. We also applied previously published short assemblies (Crow et al. 2012) of two HoxA clusters (BAC352P4: GenBank accession number JX448769.1, and BAC370N10: number JX448770.1) and two HoxD clusters (BAC249G23: number JX280945.1, and BAC231C24: number JX280946.1) from the American paddlefish to examine the coverage of our upgraded assembly; the analysis was implemented in Lastz version 1.02 (Harris 2007) with optimized parameters of "T=2 C=2 H=2,000 Y=3,400 L=6,000 K=2,200."

Gene Prediction and Functional Annotation

Three standard strategies, that is, homology-, de novo-, and transcriptome-based annotations, were combined to predict a total gene set for the American paddlefish genome. For the homology annotation, we aligned protein sequences from published genomes (downloaded from the NCBI Genome database) of ten representative vertebrates, including elephant shark (Callorhinchus milii), zebrafish (Danio rerio), medaka (Oryzias latipes), fugu (Takifugu rubripes), green spotted puffer (Tetraodon nigroviridis), pike (Esox lucius), stickleback (Gasterosteus aculeatus), cod (Gadus morhua), sea lamprey (Petromyzon marinus), and spotted gar (Lepisosteus oculatus), against the genome assembly of the American paddlefish to predict homologous genes. These genes were searched by BLAST (version 2.2.6; mode: TBlastN; Altschul et al. 1990) with an e-value of 10⁻⁵. The data from the BLAST searching were further processed via Sorting Out Local Alignment (Yu et al. 2006) to obtain the best fit of each alignment. Subsequently, gene structures were predicted by GeneWise version 2.2.0 (Birney et al. 2004) from these best hits. Low-quality predictions (predicted genes with less than 150 bp for the entire length) were removed. For the de novo annotation, the assembled scaffolds were masked based on the above-mentioned repeat annotation. We applied AUGUSTUS version 2.5 (Stanke et al. 2006) and GENSCAN version 1.0 (Burge and Karlin 1997) for the de novo prediction of repeat-masked genome sequences. Low-quality predictions were also discarded using the same screening threshold as for the homology annotation. For the transcriptome-based annotation, the blood transcriptome data were mapped onto the assembled scaffolds to identify splice junctions by TopHat version 2.1.1 (Trapnell et al. 2009). These mapped transcriptome reads were then assembled by Cufflinks version 2.2.1 (Trapnell et al. 2010) to assist gene annotation.
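The best-fit selection applied to the homology hits in the step above can be illustrated with a short sketch; the hit table and field names are assumptions for illustration, not the actual output format of the Sorting Out Local Alignment step.

    # Minimal sketch: keep one best-scoring alignment per reference protein
    # from a table of homology hits (toy values).
    hits = [
        ("prot1", "scaffold12", 850.0),
        ("prot1", "scaffold47", 310.5),
        ("prot2", "scaffold03", 120.0),
        ("prot2", "scaffold03", 415.2),
    ]

    best = {}
    for query, target, score in hits:
        # retain the highest-scoring target seen so far for each query protein
        if query not in best or score > best[query][1]:
            best[query] = (target, score)

    for query, (target, score) in sorted(best.items()):
        print(f"{query}: best hit on {target} (score {score})")

Only these best hits would then be passed to the gene-structure prediction step.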
Finally, all the above-mentioned gene sets were merged together to yield a comprehensive and nonredundant gene set by utilizing GLEAN (Elsik et al. 2007). To understand the potential functions of the final gene set, we chose four public databases (including Pfam, PRINTS, ProDom, and SMART) to realize functional annotation.

Chromosomal Intraspecific and Interspecific Comparisons

To understand the evolved chromosomal patterns in the American paddlefish, we performed both intraspecific and interspecific comparisons. For the intraspecific comparison, we extracted each chromosome from the American paddlefish as the query, and the other chromosomes were set as targets for examination. Thus, the pairs of the intraspecific data set were constructed, and each of these pairs was aligned separately. All alignments were realized by Lastz (Harris 2007) with the same parameters "T=2 C=2 H=2,000 Y=3,400 L=6,000 K=2,200," and those regions over 2,000 bp were regarded as reliable for each alignment. Simultaneously, we applied an all-to-all BLAST (BlastP mode) analysis to identify the syntenic regions between each batch of chromosomes, and those blocks with at least 15 genes were selected as reliable alignments. For the interspecific comparisons, we compared the chromosome-level assembly of the American paddlefish with those of the spotted gar (Braasch et al. 2016) and the sterlet using the above-mentioned Lastz method (Harris 2007) with the same parameters. To verify the chromosomal evolution pattern, we aligned homologous chromosome pairs within the paddlefish or between the paddlefish and the sterlet using the LAST package (Kielbasa et al. 2011). Dot plots were generated using filtered alignments with an error probability >1e-8.

Fossil-Calibrated Phylogenetic Analysis

Whole-genome encoding sequences from 24 vertebrate species were selected for phylogenetic analysis. The jawless vertebrate sea lamprey was employed as the outgroup, and the American paddlefish and 22 other species were used as ingroup species. These 22 vertebrates included the eight species used for gene prediction (elephant shark, zebrafish, cod, stickleback, spotted gar, medaka, fugu, green spotted puffer) and 14 other vertebrates, including sterlet, whale shark (Rhincodon typus), Asian arowana (Scleropages formosus), Mexican tetra (Astyanax mexicanus), tilapia (Oreochromis niloticus), Amazon molly (Poecilia formosa), platyfish (Xiphophorus maculatus), coelacanth (Latimeria chalumnae), clawed frog (Xenopus tropicalis), Chinese softshell turtle (Pelodiscus sinensis), zebra finch (Taeniopygia guttata), red junglefowl (Gallus gallus), cattle (Bos taurus), and human (Homo sapiens). We utilized BLAST (BlastP mode) to calculate a super similarity matrix for each paired sequence with an E-value threshold of 1e-5. OrthoMCL (Li et al. 2003) was applied to distinguish gene families based on the super similarity matrix, and a Markov Chain Clustering (MCL) with default parameters was assigned. Once one-to-one orthologs were identified, we extracted them and performed a multiple alignment using MUSCLE version 3.7 (Edgar 2004). Subsequently, the protein alignments were converted to the corresponding coding sequences (CDS). The nucleotides of the first position in each codon of all coding sequences were chosen for the constitution of a super-length "fake gene" that was used for a phylogenetic analysis with the maximum likelihood (ML) method.
The ML analysis was implemented in PhyML version 3.0 (Guindon et al. 2010) with a gamma distribution across aligned sites and an HKY85 substitution model. The approximate likelihood ratio test (aLRT) was employed to evaluate branch supports. To further confirm the deduced topology, we simultaneously performed BI using MrBayes version 3.2.2 (Ronquist et al. 2012) with the HKY85 substitution model. We performed two parallel runs of 200,000 generations, sampling every 200 generations. The initial 25% of each run was discarded as unreliable burn-in, whereas the remaining samples were used to establish a maximum clade credibility tree. After the phylogeny construction, we set two fossil-calibrated nodes in the phylogenetic topology to estimate the date of divergence of the American paddlefish from other vertebrates, based on the Bayesian method using MCMCtree in PAML version 4.9e (Yang 2007). The two fossil-calibrated nodes (C1 and C2) were treated as normal distributions with soft constraint bands (allowing a small probability [0.025] of violation). The C1 calibration point was set at the most recent common ancestor (MRCA) of Sarcopterygii, based on fossils from Latimeria, with a hard minimum age of 408 Ma and a 95% soft maximum age of 427.9 Ma (Benton et al. 2015). The C2 calibration point was set at the MRCA of Teleostei from Danio, with a hard minimum age of 151.2 Ma and a 95% soft maximum age of 252.7 Ma (Setiamarga et al. 2008). A total of 100,000 samples was used for the Markov chain Monte Carlo (MCMC) analysis (Ronquist et al. 2012), and the first 20% of the samples were discarded as burn-in. An independent rate model (clock = 2) following a lognormal distribution was applied for the MCMC search. To predict the timing of the WGD event in the American paddlefish, we conducted another batch of fossil-calibrated phylogenetic analyses using the same species and method as mentioned above, where the data were limited to the three longest pairs (Chr1-Chr2, Chr3-Chr4, Chr5-Chr6) of macrochromosomes in the American paddlefish and the sterlet, along with the whole-genome sequences of the remaining selected species. The divergence times of the chromosomes were estimated by calibrating the tree using the same fossils as mentioned above (Setiamarga et al. 2008; Benton et al. 2015).

Characterization of SCPP Genes and Complete Hox Clusters

Elephant shark, whale shark, American paddlefish, and sterlet share a cartilaginous, low-mineralized bone feature. Therefore, with protein sequences encoded by 38 SCPP mineralization-related genes (seven encoding "acidic residue-rich" proteins and 31 encoding "Pro/Gln (P/Q)-rich" proteins) from the spotted gar (Kawasaki et al. 2017) as the queries, we first performed BlastP searches separately against the genomes of the American paddlefish and the sterlet, and then extracted the exon sequences using Exonerate (Slater and Birney 2005). Subsequently, the ancient sparc genes (sparcl1, sparcl1l1, and sparcr1, from which SCPP genes were derived) were also studied via the same method, using sequences from the spotted gar as references (Kawasaki et al. 2017). One important gene, spp1, reported to be missing in sharks (Kawasaki et al. 2017), was cloned experimentally using PCR as an example to verify the results predicted from the assembled genome.
In addition to the two reported complete HoxA and two partial HoxD clusters (Crow et al. 2012), we attempted to characterize the complete set of Hox clusters in the American paddlefish genome. First, we downloaded the complete Hox cluster sequences from the spotted gar (Braasch et al. 2016) and the sterlet (Du et al. 2020). Then, the obtained protein sequences were searched by BLAST (TBlastN mode) against our genome assembly, and the aligned sequences were further verified by Exonerate (Slater and Birney 2005).
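Returning to the fossil-calibrated phylogenetic analysis described above, the assembly of the first-codon-position "fake gene" supermatrix can be sketched as follows; the alignments here are toy placeholders, and the real input would be the MUSCLE-aligned one-to-one orthologs converted to codon alignments.

    # Minimal sketch: concatenate the first codon position of each aligned
    # ortholog into one supermatrix per species (toy alignments).
    def first_positions(aligned_cds):
        """Keep only the first base of every codon column (gap columns included)."""
        return aligned_cds[0::3]

    # species -> list of aligned CDS, one entry per ortholog, same order in all species
    orthologs = {
        "paddlefish": ["ATGGCT---AAA", "ATGTTTGGC"],
        "sterlet":    ["ATGGCCGGTAAA", "ATGTTCGGC"],
        "gar":        ["ATGGCAGGAAAG", "ATGTTAGGA"],
    }

    supermatrix = {
        sp: "".join(first_positions(cds) for cds in alns)
        for sp, alns in orthologs.items()
    }

    for sp, seq in supermatrix.items():
        print(f">{sp}\n{seq}")   # FASTA-style output to feed a tree-building program

Restricting the matrix to first codon positions is one common way to reduce saturation at third positions before running the ML and Bayesian analyses.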
Metabolism of Dietary and Microbial Vitamin B Family in the Regulation of Host Immunity Vitamins are micronutrients that have physiological effects on various biological responses, including host immunity. Therefore, vitamin deficiency leads to increased risk of developing infectious, allergic, and inflammatory diseases. Since B vitamins are synthesized by plants, yeasts, and bacteria, but not by mammals, mammals must acquire B vitamins from dietary or microbial sources, such as the intestinal microbiota. Similarly, some intestinal bacteria are unable to synthesize B vitamins and must acquire them from the host diet or from other intestinal bacteria for their growth and survival. This suggests that the composition and function of the intestinal microbiota may affect host B vitamin usage and, by extension, host immunity. Here, we review the immunological functions of B vitamins and their metabolism by intestinal bacteria with respect to the control of host immunity. INTRODUCTION The gut is continuously exposed both to toxic (e.g., pathogenic microorganisms) and beneficial (e.g., dietary components, commensal bacteria) compounds and microorganisms; therefore, the intestinal immune system must maintain a healthy balance between active and suppressive immune responses. This balance is controlled not only by host immune factors such as cytokines but also by a variety of environmental factors such as dietary components and the composition of the commensal bacteria. Furthermore, several lines of evidence have demonstrated that immune homeostasis in the intestine is related not only to intestinal health but also to the health of the whole body (1-3). Therefore, immune regulation by environmental factors is attracting attention as a means of maintaining immunological health and preventing many diseases. Nutrients are essential for the development, maintenance, and function of the host immune system (4,5). Vitamins are essential micronutrients that are synthesized by bacteria, yeasts, and plants, but not by mammals. Therefore, mammals must obtain vitamins from the diet or rely on their synthesis by commensal bacteria in the gastrointestinal tract. Some vitamins are water-soluble (e.g., vitamin B family and vitamin C), whereas others are fat-soluble (e.g., vitamins A, D, E, and K). Water-soluble vitamins are not stored by the body and any excess is excreted in the urine; therefore, it is important to consume a diet containing the necessary amounts of these vitamins. Vitamin deficiency associated with insufficient dietary intake occurs not only in developing countries but also in developed countries as a result of increased use of unbalanced diet (6). In addition to the diet, the commensal bacteria are recognized as important players in the control of host health (7)(8)(9). From the point of view of vitamins, commensal bacteria are both providers and consumers of B vitamins and vitamin K. Although dietary B vitamins are generally absorbed through the small intestine, bacterial B vitamins are produced and absorbed mainly through the colon (10,11), indicating that dietary and gut microbiotaderived B vitamins are possibly handled differently by the human body. B vitamins are important cofactors and coenzymes in several metabolic pathways, and it has been reported recently that B vitamins also play important roles in the maintenance of immune homeostasis (12,13). Thus, both dietary components and the gut microbiota modulate host immune function via B vitamins. 
Here, we review the metabolism and function of dietary and gut microbiota-derived B vitamins in the control of host immunity. VITAMIN B1 Vitamin B1 (thiamine) is a cofactor for several enzymes, including pyruvate dehydrogenase and α-ketoglutarate dehydrogenase, which are both involved in the tricarboxylic acid (TCA) cycle (14,15). World Health Organization (WHO)/Food and Agriculture Organization (FAO) recommend a daily vitamin B1 intake of 1.1-1.2 mg for adult (16). Vitamin B1 deficiency causes lethargy and, if left untreated, can develop into beriberi, a disease that affects the peripheral nervous system and cardiovascular system. Accumulating evidence suggests that energy metabolism-particularly the balance between glycolysis and the TCA cycle-is associated with the functional control of immune cells, in what is now referred to as immunometabolism (17). Previously, we examined B cell immunometabolism in the intestine. In the intestine, naïve immunoglobulin (Ig) M + B cells differentiate into IgA + B cells in Peyer's patches (PPs) by class switching, and then IgA + B cells differentiate into IgA-producing plasma cells in the intestinal lamina propria (20). Naïve B cells in PPs preferentially use a vitamin B1-dependent TCA cycle for the generation of ATP. However, once the B cells differentiate into IgA-producing plasma cells, they switch to using glycolysis for the generation of ATP and shift to a catabolic pathway for the production of IgA antibody (Figure 1). Consistent with the importance of vitamin B1 in the maintenance of the TCA cycle, mice fed a vitamin B1-deficient diet show impaired maintenance of naïve B cells in PPs, with little effect on IgA-producing plasma cells. Since PPs are the primary sites of induction of antigenspecific IgA responses, PP regression induced by vitamin B1 deficiency leads to decreased IgA antibody responses to oral vaccines (21). Vitamin B1 is found in high concentrations as thiamine pyrophosphate (TPP) in meat (particularly pork and chicken); eggs; cereal sprouts and rice bran; and beans. Dietary TPP is hydrolyzed by alkaline phosphatase and converted to free thiamine in the small intestine (22). Free thiamine is absorbed by the intestinal epithelium via thiamine transporters (e.g., THTR-1, THTR-2) and transported to the blood for distribution throughout the body (11). Free thiamine is converted back to TPP and is used for energy metabolism in the TCA cycle. Various types of intestinal bacteria, mostly in the colon, also produce vitamin B1 as both free thiamine and TPP (11,23). In the colon, free bacterial thiamine is absorbed mainly by thiamine transporters, transported to the blood, and distributed throughout the body; this mechanism is similar to how free dietary thiamine is taken up in the small intestine. However, unlike in the small intestine, TPP produced by the gut microbiota is not converted to free thiamine, because alkaline phosphatase is not secreted in the colon (24). Instead, TPP is absorbed directly by the colon via TPP transporters (e.g., TPPT-1) that are highly expressed on the apical membrane of the colon (25). The absorbed TPP enters the mitochondria via MTPP-1, a TPP transporter that is expressed in the mitochondrial inner membrane and is used as a cofactor for ATP generation (26). This suggests that bacterial TPP is important for energy generation in the colon. Thus, dietary and bacterial vitamin B1 appears to have different roles in the host. The vitamin B1 structure consists of a thiazole moiety linked to a pyrimidine moiety. 
Bacteria obtain the thiazole moiety from glycine or tyrosine and 1-deoxy-D-xylulose-5-phosphate, and plants and yeasts synthesize it from glycine and 2-pentulose (27-30). In both bacteria and plants, the pyrimidine moiety is derived from 5-aminoimidazole ribonucleotide, an intermediate in the purine pathway (29). Metagenomic analyses of the human gut microbiota predict that Bacteroides fragilis and Prevotella copri (phylum Bacteroidetes); Clostridium difficile, some Lactobacillus spp., and Ruminococcus lactaris (Firmicutes); Bifidobacterium spp. (Actinobacteria); and Fusobacterium varium are vitamin B1 producers (Table 1) (10, 46), implying that many intestinal bacteria possess a complete vitamin B1 synthesis pathway, which includes pathways for the synthesis of thiazole and pyrimidine. Indeed, Lactobacillus casei produces thiamine during the production of fermented milk drinks (31), and Bifidobacterium infantis and B. bifidum produce thiamine in culture supernatant (32). However, Faecalibacterium spp. (Firmicutes) lack a vitamin B1 synthesis pathway even though they require vitamin B1 for their growth (10). Therefore, these bacteria must obtain their vitamin B1 from other bacteria or from the host diet via a thiamine transporter, suggesting that there is competition for vitamin B1 between the host and certain intestinal bacteria.

FIGURE 1 | Vitamin B1 and B2-mediated immunometabolism in B cell differentiation in the intestine. Vitamin B1 acts as a cofactor for enzymes such as pyruvate dehydrogenase and α-ketoglutarate dehydrogenase that are involved in the TCA cycle. Vitamin B2 acts as a cofactor for enzymes such as succinate dehydrogenase in the TCA cycle and acyl-CoA dehydrogenase in fatty acid oxidation (FAO, also known as β-oxidation). Naïve B cells preferentially use the TCA cycle for efficient energy generation. Once B cells are activated to differentiate into IgA-producing plasma cells, they utilize glycolysis for the production of IgA antibody.

VITAMIN B2

Vitamin B2 (riboflavin) and its active forms (flavin adenine dinucleotide [FAD] and flavin mononucleotide [FMN]) are cofactors for enzymatic reactions in the TCA cycle and in fatty acid oxidation (also known as β-oxidation) (15). WHO/FAO recommends a daily vitamin B2 intake of 1.0-1.3 mg for adults (16). Vitamin B2 deficiency suppresses the activity of acyl-CoA dehydrogenases involved in the oxidation of fatty acids to generate acetyl-CoA, which is used by mitochondria to produce ATP via the TCA cycle. Fatty acid oxidation is involved in the activation, differentiation, and proliferation of immune cells through the generation of acetyl-CoA and its entry into the TCA cycle (47). This suggests that vitamin B2 is associated with the control of differentiation and function of immune cells through regulation of fatty acid oxidation (Figure 1); however, the immunological roles of vitamin B2 in the control of host immunity remain to be investigated. In addition to energy generation, vitamin B2 is associated with reactive oxygen species (ROS) generation in immune cells through the priming of NADPH oxidase 2 (48); ROS are important effector and signaling molecules in inflammation and immunity. Vitamin B2 is found at high levels in leafy green vegetables, liver, and eggs. Dietary vitamin B2 exists as FAD or FMN and is converted to free riboflavin by FAD pyrophosphatase and FMN phosphatase in the small intestine (49, 50).
Free riboflavin is absorbed via riboflavin transporter expressed on the epithelium of the small intestine and is then released into the blood. In the blood, free riboflavin is converted back to FAD or FMN and distributed throughout the body (51)(52)(53). Bacterial vitamin B2 is synthesized from guanosine triphosphate (GTP) and D-ribulose 5-phosphate (54). Bacterial vitamin B2 exists as free riboflavin, which is directly absorbed in the large intestine, converted to FAD or FMN, and distributed throughout the body as described above (23). A metagenome analysis of the human gut microbiota by Magnúsdóttir et al. (10) has predicted that Bacteroides fragilis and Prevotella copri (Bacteroidetes); Clostridium difficile, Lactobacillus plantarum, L. fermentum, and Ruminococcus lactaris (Firmicutes) express factors essential for vitamin B2 synthesis, suggesting that these bacteria are an important source of vitamin B2 in the large intestine (Table 1). In contrast, Bifidobacterium spp., and Collinsella spp. (Actinobacteria) lack a vitamin B2 pathway. Supplementation of fermented soymilk containing Lactobacillus plantarum with riboflavin deficient diet has been shown to promote vitamin B2 production and prevent vitamin B2 deficiency in mice (35). L. fermentum isolated from sourdough can synthesize riboflavin in vitro (36). Furthermore, recent evidence indicates that some species in Bacteroidetes phylum produce more riboflavin than do Actinobacteria and Firmicutes phyla (55). However, Actinobacteria and Firmicutes phyla still express riboflavin transporter and the enzymes necessary for FAD and FMN generation from free riboflavin (i.e., FAD synthases and flavin kinases) (10,56), suggesting that all bacteria, including those that are unable to synthesize vitamin B2 themselves, require FAD and FMN for their growth and survival. Thus, as with vitamin B1, there is likely competition for riboflavin between the host and the commensal bacteria. In addition to being able to produce vitamin B2, some bacteria (e.g., commensals such as Lactobacillus acidophilus and pathogens such as Mycobacterium tuberculosis and Salmonella typhimurium) produce the vitamin B2 intermediate (57-59), 6hydroxymethyl-8-D-ribityllumazine (60, 61). 6-Hydroxymethyl-8-D-ribityllumazine binds to major histocompatibility complex class I-related gene protein (MR1) on antigen-presenting cells; this causes mucosal-associated invariant T (MAIT) cells, an abundant population of innate-like T cells, to produce cytokines such as interferon gamma and interleukin (IL) 17, which contribute to host defense against pathogens (Figure 2) (62). It is thought that stimulation by commensal bacteria contributes to the development and activation of MAIT cells for immunological surveillance against pathogens. MAIT cells also produce inflammatory cytokines and have tissue-homing properties, suggesting that these cells are also involved in the development of autoimmune and inflammatory diseases (63). VITAMIN B3 Vitamin B3 (niacin) is generally known as nicotinic acid and nicotinamide. These compounds are precursors of nicotinamide adenine dinucleotide (NAD), a coenzyme in the cellular redox reaction with a central role in aerobic respiration. WHO/FAO recommends a daily vitamin B3 intake of 11-12 mg for adults (16). Vitamin B3 is also a ligand of GPR109a, a G-protein coupled receptor expressed on several types of cells, including immune cells (64). 
Vitamin B3-GPR109a signaling induces differentiation of regulatory T cells and suppresses colitis in a GPR109a-dependent manner (65). Vitamin B3 also inhibits the production of the pro-inflammatory cytokines IL-1, IL-6, and tumor necrosis factor alpha (TNF-α) by macrophages and monocytes (Figure 3) (66). Thus, vitamin B3 has anti-inflammatory properties by modulating host immune cells and playing an important role in the maintenance of immunological homeostasis. Indeed, in humans, vitamin B3 deficiency causes pellagra, which is a disease characterized by intestinal inflammation, diarrhea, dermatitis, and dementia (67). Unlike the other B vitamins, vitamin B3 can be generated by mammals via an endogenous enzymatic pathway from tryptophan and is stored in the liver, although it is also obtained from the diet (68). Animal-based foods such as fish and meat contain vitamin B3 as nicotinamide, and plant-based foods such as beans contain vitamin B3 as nicotinic acid. Both nicotinamide and nicotinic acid are directly absorbed through the small intestine, where nicotinic acid is converted to nicotinamide.

FIGURE 3 | …, and B9 in maintenance of immunological homeostasis. Vitamin B3 binds to GPR109a in dendritic cells and macrophages, and GPR109a signaling leads to an increase in anti-inflammatory properties, resulting in differentiation into regulatory T cells (Treg). Vitamin B7 binds to histones and, by histone biotinylation, suppresses the secretion of pro-inflammatory cytokines. Once naïve T cells differentiate into Treg cells, they highly express folate receptor 4 (FR4). Consistent with this finding, vitamin B9 is required for the survival of Treg cells.

VITAMIN B5

Vitamin B5 (pantothenic acid) is a precursor of coenzyme A (CoA), which is an essential cofactor for the TCA cycle and fatty acid oxidation (72). WHO/FAO recommends a daily vitamin B5 intake of 5.0 mg for adults (16). Like vitamins B1 and B2, vitamin B5 is involved in the control of host immunity via energy generation by immune cells. Vitamin B5 deficiency causes immune diseases such as dermatitis, as well as non-immune-related conditions such as fatigue and insomnia (73). In a randomized, double-blind, placebo-controlled study in adults, dietary supplementation with vitamin B5 was shown to improve facial acne (74), suggesting that epithelial barrier function improves via the promotion of keratinocyte proliferation and differentiation into fibroblasts (75). To maintain vitamin B5 levels during times of deficiency, CoA is converted back to vitamin B5 or cysteamine via pantetheine (76). However, cysteamine inhibits peroxisome proliferator-activated receptor gamma (PPARγ) signaling, causing inflammation (77). Indeed, colitis has been improved in pantetheine-producing-enzyme knockout mice (78). Thus, vitamin B5 deficiency causes inflammation through both dysfunction of the epithelial barrier and the production of pro-inflammatory molecules. In terms of immune responses, vitamin B5 enhances protective activity against Mycobacterium tuberculosis infection by promoting innate immunity and adaptive immunity. In mice, vitamin B5 supplementation activates phagocytosis and cytokine production (including IL-6 and TNF-α) by macrophages and subsequently promotes Th1 and Th17 responses for the clearance of M. tuberculosis from the lungs (79). Thus, vitamin B5 contributes to host defense by activating immune responses, suggesting that this vitamin has an important role as a natural adjuvant.
Vitamin B5 is found in high concentrations as CoA or phosphopantetheine in liver, eggs, chicken, and fermented soybeans. CoA and phosphopantetheine are converted to free pantothenic acid by endogenous enzymes such as phosphatase and pantetheinase in the small intestine. Free pantothenic acid is absorbed via the sodium-dependent multivitamin transporter (SMVT) expressed on the epithelium of the small intestine and is then released into the blood (80). Finally, free pantothenic acid is converted back to CoA and distributed throughout the body, particularly to the liver and kidney. Bacterial vitamin B5 is synthesized from 2-dihydropantoate and β-alanine via de novo synthesis pathways (10). Bacterial vitamin B5 exists as free pantothenic acid, which is directly absorbed in the large intestine, converted to CoA, and distributed in the same way as dietary vitamin B5. According to a genomic analysis, Bacteroides fragilis and Prevotella copri (Bacteroidetes), some Ruminococcus spp. (R. lactaris and R. torques) (Firmicutes), and Salmonella enterica and Helicobacter pylori (Proteobacteria) possess a vitamin B5 biosynthesis pathway, indicating that intestinal commensal bacteria can produce vitamin B5. In contrast, most Fusobacterium (Fusobacteria) and Bifidobacterium spp. (Actinobacteria) and some strains of Clostridium difficile, Faecalibacterium spp., and Lactobacillus spp. (Firmicutes) lack such a pathway (Table 1), although some of them do express a pantothenic acid transporter to utilize vitamin B5 for energy generation (10), suggesting that these bacteria compete with the host for vitamin B5.

VITAMIN B6

Vitamin B6 exists in several forms, including as pyridoxine, pyridoxal, and pyridoxamine. These forms of vitamin B6 are precursors of the coenzymes pyridoxal phosphate (PLP) and pyridoxamine phosphate (PMP), which are involved in a variety of cellular metabolic processes, including amino acid, lipid, and carbohydrate metabolism (81). WHO/FAO recommends a daily vitamin B6 intake of 1.3-1.7 mg for adults (16). Vitamin B6 deficiency is associated with the development of inflammatory diseases such as allergy and rheumatoid arthritis, as well as with neuronal dysfunction (82-84). Vitamin B6 deficiency disrupts the Th1-Th2 balance toward an excessive Th2 response, resulting in allergy (85). Moreover, low plasma vitamin B6 levels, together with increased levels of pro-inflammatory cytokines such as TNF-α and IL-6, have been observed in patients with rheumatoid arthritis (86). However, the mechanism underlying the regulation of inflammation by vitamin B6 is currently unknown. Vitamin B6 contributes to intestinal immune regulation through the metabolism of the lipid mediator sphingosine 1-phosphate (S1P). S1P regulates lymphocyte trafficking into the intestines, especially into the large intestine. Lymphocyte trafficking depends on the S1P gradient, which is created by S1P production and its degradation. S1P degradation is mediated by S1P lyase, which requires vitamin B6 as a cofactor. Administration of a vitamin B6 antagonist impairs S1P lyase activity and creates an inappropriate S1P gradient, resulting in impaired lymphocyte migration from lymphoid tissues and reduced numbers of lymphocytes in the intestines (87). The lymphocytes located between gut epithelial cells are known as intraepithelial lymphocytes (IELs) and are involved in protection against pathogens (88). Therefore, vitamin B6 plays an important role in immunosurveillance in the intestines.
Vitamin B6 is abundant in fish, chicken, tofu, sweet potato, and avocado. Dietary vitamin B6 exists as PLP or PMP; it is converted to free vitamin B6 by endogenous enzymes such as pyridoxal phosphatase and is then absorbed by the small intestine. Although absorption of vitamin B6 through acidic pHdependent and carrier-mediated transport has been shown, an intestinal pyridoxine transporter is yet to be identified in any mammalian species (11). After the absorption of free vitamin B6, it enters the blood and is converted back to PLP or PMP. Microbial vitamin B6 is synthesized as PLP from deoxyxylulose 5-phosphate and 4-phosphohydroxy-L-threonine or from glyceraldehyde-3-phosphate and D-ribulose 5-phosphate (10). In the large intestine, bacteria-derived PLP is converted to free vitamin B6, which is absorbed by passive transport, transported to the blood, and distributed throughout the body. VITAMIN B7 Vitamin B7 (biotin) is a cofactor for several carboxylases that are essential for glucose, amino acid, and fatty acid metabolism (89). For example, vitamin B7 is an essential cofactor for acetyl-CoA carboxylase and fatty acid synthase, which are enzymes involved in fatty acid biosynthesis (90,91). Thus, vitamin B7 likely influences immunometabolism. WHO/FAO recommends a daily vitamin B7 intake of 30 µg for adults (16). Vitamin B7 suppresses gene expression by binding to (biotinylating) histones; these genes include that encoding NF-κB, which is a major signaling molecule for the production of several proinflammatory cytokines (e.g., tumor necrosis factor alpha, IL-1, IL-6, IL-8) (92,93). Nuclear transcription of NF-κB is activated in response to vitamin B7 deficiency (94), suggesting that biotinylation of histones suppresses the expression of genes encoding pro-inflammatory cytokines in NF-κB signaling (Figure 3). Therefore, vitamin B7 has anti-inflammatory effects by inhibiting NF-κB activation, and dietary vitamin B7 deficiency causes inflammatory responses via enhanced secretion of proinflammatory cytokines (95,96). Vitamin B7 is abundant in foods such as nuts, beans, and oilseed. However, raw egg-white contains a large amount of avidin, which binds strongly to vitamin B7 and prevents its absorption in the gut (97). Therefore, vitamin B7 deficiency can be caused not only by insufficient vitamin B7 intake, but also by excessive intake of raw egg-white. Dietary biotin exists as a free protein-bound form or as biocytin (11). In the small intestine, biotinidase releases free biotin from the bound protein and the free biotin is absorbed via the biotin transporter SMVT (98). Vitamin B7 is also produced by intestinal bacteria as free biotin synthesized from malonyl CoA or pimelate via pimeloyl-CoA (99,100). Bacterial free biotin is absorbed by SMVT expressed in the colon (23,101). Metagenomic analysis has shown that Bacteroides fragilis and Prevotella copri (Bacteroidetes); Fusobacterium varium (Fusobacteria) and Campylobacter coli (Proteobacteria) possess a vitamin B7 biosynthesis pathway (10). In contrast, Prevotella spp. (Bacteroidetes), Bifidobacterium spp. (Actinobacteria), and Clostridium, Ruminococcus, Faecalibacterium, and Lactobacillus spp. (Firmicutes) lack such a pathway (Table 1); however, they do express free biotin transporter (10,102), suggesting that these bacteria also utilize dietary and bacterial vitamin B7 and therefore may compete with the host. 
Thus, free biotin may influence the composition of the gut microbiota, because biotin is necessary for the growth and survival of the microbiota. Indeed, biotin deficiency leads to gut dysbiosis and the overgrowth of Lactobacillus murinus, leading to the development of alopecia (103). Furthermore, vitamin B7 production appears to proceed in a cooperative manner among different intestinal bacteria; Bifidobacterium longum in the intestine produces pimelate, which is a precursor of vitamin B7 that enhances vitamin B7 production by other intestinal bacteria (104). VITAMIN B9 Vitamin B9 (folate), in its active form as tetrahydrofolate, is a cofactor in several metabolic reactions, including DNA and amino acid synthesis. WHO/FAO recommends a daily vitamin B9 intake of 400 µg for adults (16). Owing to the high requirement of vitamin B9 by red blood cells, vitamin B9 deficiency leads to megaloblastic anemia (23). Vitamin B9 deficiency also inhibits the proliferation of human CD8 + T cells in vitro by arresting the cell cycle in the S phase and increasing the frequency of DNA damage (105). Moreover, vitamin B9 contributes to the maintenance of immunologic homeostasis. Regulatory T cells (Treg) express high levels of vitamin B9 receptor (folate receptor 4 [FR4]). Administration of anti-FR4 antibody results in specific reduction in the Treg cell population (106), suggesting that the vitamin B9-FR4 axis is required for Treg cell maintenance. In vitro culture of Treg cells under vitamin B9-reduced conditions leads to impaired cell survival, with decreased expression of anti-apoptotic Bcl2 molecules, although naïve T cells retain the ability to differentiate into Treg cells; this suggests that vitamin B9 is a survival factor for Treg cells (87). Consistent with these findings, deficiency of dietary vitamin B9 results in reduction of the Treg cell population in the small intestine (107,108). Since Treg cells play an important role in the prevention of excessive immune responses (109), mice fed a vitamin B9-deficient diet exhibit increased susceptibility to intestinal inflammation (107). Foods such as beef liver, green leafy vegetables, and asparagus contain high levels of vitamin B9. Vitamin B9 exists as both mono-and polyglutamate folate species in the diet (110). Folate polyglutamate is deconjugated to the monoglutamate form and then absorbed in the small intestine via folate transporters such as proton-coupled folate transporter (PCFT) (11,111). In the intestinal epithelium, folate monoglutamate is converted to tetrahydrofolate (THF), an active form and cofactor, before being transported to the blood (111). In commensal bacteria, a vitamin B9 metabolite, 6formylpterin (6-FP), is produced by photodegradation of folic acid (116). Like the vitamin B2 metabolite 6hydroxymethyl-8-D-ribityllumazine, 6-FP binds to MR1, but unlike 6-hydroxymethyl-8-D-ribityllumazine it cannot activate MAIT cells (62,117). An analog of 6-FP, acetyl-6-FP, is an antagonist of MR1, which inhibits MAIT cell activation (118). As mentioned in the section on vitamin B2, 6-hydroxymethyl-8-D-ribityllumazine activates MAIT cells, which provide defense against pathogens, so vitamin B9 metabolites may suppress excess MAIT cell responses and prevent excessive allergic and inflammatory responses (Figure 2). The quantitative balance between dietary vitamin B2 and B9 and the composition of the microbiota and its ability to metabolize these vitamins may be keys to understanding MAIT-cell-mediated homeostasis in the intestine. 
VITAMIN B12 Vitamin B12 (cobalamin) is a cobalt-containing vitamin that, in its active forms of methylcobalamin and adenosylcobalamin, catalyzes methionine synthesis (119). WHO/FAO recommends a daily vitamin B12 intake of 2.4 µg for adults (16). Together with vitamin B6 and B9, vitamin B12 plays important roles in red blood cell formation and nucleic acid synthesis, especially in neurons. Therefore, vitamin B12 deficiency causes megaloblastic anemia and nervous system symptoms such as numbness of the hands and feet (119). In terms of host immunity, dietary vitamin B12 deficiency decreases the number of CD8 + T cells and suppresses natural killer Tcell activity in mice; supplementation with methylcobalamin improves these conditions (120), suggesting that vitamin B12 contributes to the immune response via CD8 + T cells and natural killer T cells. Beef liver, bivalves, fish, chicken, and eggs contain high levels of vitamin B12. Dietary vitamin B12 exists in complex with dietary protein and is decomposed to free vitamin B12 by pepsin in the stomach. Free vitamin B12 is absorbed by the epithelial cells of the small intestine via intrinsic factor (IF), a gastric glycoprotein. Inside the epithelial cells, IF-vitamin B12 complex is decomposed to free vitamin B12 by lysosome and then released into the blood, where it is converted to the active form and distributed throughout the body (121, 122). CONCLUSION B-vitamin-mediated immunological regulation is specific to different immune cells and immune responses: that is, different B vitamins are required for different immune responses (Figure 4). It was once thought that B vitamins were obtained only from the diet; however, we know now that this is not the case and that the intestinal microbiota is also an important source of vitamins. Within the intestinal microbiota, not all bacteria produce B vitamins and some bacteria utilize dietary B vitamins or B vitamins produced by other intestinal bacteria for their own needs; therefore, there may be competition between the host and the intestinal microbiota for B vitamins (Figure 4). Research in this field is complicated, because not only does the composition of the intestinal microbiota vary among individuals, but also the composition of the diet can alter both the composition and function of the intestinal microbiota. Therefore, vitamin-mediated immunological maintenance also varies among individuals. Further examinations in this field are needed, and the new information uncovered will help to develop a new era of precision health and nutrition.
6,238.4
2019-04-17T00:00:00.000
[ "Biology" ]
A B2 SINE insertion in the Comt1 gene (Comt1B2i) results in an overexpressing, behavior modifying allele present in classical inbred mouse strains Catechol-O-methyltransferase (COMT) is a key enzyme for dopamine catabolism and COMT is a candidate gene for human psychiatric disorders. In mouse it is located on chromosome 16 in a large genomic region of extremely low variation among the classical inbred strains, with no confirmed single nucleotide polymorphisms (SNPs) between strains C57BL/6J and DBA/2J within a 600-kB window. We found a B2 SINE in the 3′ untranslated region (UTR) of Comt1 which is present in C57BL/6J (Comt1B2i) and other strains including 129 (multiple sublines), but is not found in DBA/2J (Comt1+) and many other strains including wild-derived Mus domesticus, M. musculus, M. molossinus, M.castaneus and M. spretus. Comt1B2i is absent in strains closely related to C57BL/6, such as C57L and C57BR, indicating that it was polymorphic in the cross that gave rise to these strains. The strain distribution of Comt1B2i indicates a likely origin of the allele in the parental Lathrop stock. A stringent association test, using 670 highly outbred mice (Boulder Heterogeneous Stock), indicates that this insertion allele may be responsible for a difference in behavior related to exploration. Gene expression differences at the mRNA and enzyme activity level (1.7-fold relative to wild type) indicate a mechanism for this behavioral effect. Taken together, these findings show that Comt1B2i (a B2 SINE insertion) results in a relatively modest difference in Comt1 expression and enzyme activity (comparable to the human Val-Met polymorphism) which has a demonstrable behavioral phenotype across a variety of outbred genetic backgrounds. Catechol-O-methyltransferase (COMT) plays a regulatory role in catecholamine neurotransmission, particularly in the case of dopamine, by facilitating degradation (Tunbridge et al. 2004). Dopamine is known to play a role in reward-seeking behavior, cognition and motor activity (Goldman-Rakic et al. 2000;Schultz 2001;Yang et al. 2003). COMT is therefore an attractive candidate molecule for involvement in these processes. The most frequently examined human polymorphism in COMT is a Valine to Methionine substitution. It has been ascertained that the Val 158 Met SNP results in a change in enzyme activity, but not mRNA levels, with Val 158 homozygotes having approximately 1.4-fold greater COMT activity than Met 158 homozygotes in the prefrontal cortex (Chen et al. 2004). COMT has been associated, with varying degrees of robustness, to a number of disorders, including schizophrenia (Shifman et al. 2002) and obsessive compulsive disorder (Pooley et al. 2007), and is also of interest in research into cognition (Egan et al. 2001;Tunbridge et al. 2006) and aggression (Rujescu et al. 2003). The homologous mouse gene, previously called Comt, has been recently renamed Comt1. Research in mice has shown a link between Comt1 expression and cognitive (Papaleo et al. 2008) and aggressive phenotypes (Fernandes et al. 2004;Filipenko et al. 2001;Gogos et al. 1998). In Comt1 knockout mice a variety of phenotypic changes have been reported, including increased anxiety (Gogos et al. 1998), improved working memory, set-shifting performance and greater acoustic startle reactivity (Papaleo et al. 2008) and lower weight and greater motor activity (Haasio et al. 2003). 
An exploratory and habituation phenotype characterized by increased sifting and chewing has also been found in the mice heterozygous for the Comt1 deletion (Babovic et al. 2007). Mice overexpressing Comt1 also display a mild phenotype, being less active in the open field but showing no differences in prepulse inhibition (PPI) of the startle response (Stark et al. 2009). In mouse, Comt1 is situated on chromosome 16, in an area with very little genetic variation between inbred mouse strains (Yang et al. 2007). However, a Comt1 expression difference in the nucleus accumbens and striatum between strains has been noted, with consistently higher expression in the C57BL/6J when compared to the DBA/2J mouse, except for probe sets at the far 3 untranslated region (UTR) (Grice et al. 2007;Korostynski et al. 2006). An outbred highly recombinant mouse stock is an optimal way to stringently test mice for a phenotypic effect resulting from genotypic or expression differences (Chia et al. 2005). The Boulder Heterogeneous Stock (HS) mice (McClearn & Hofer 1999) were generated from 8 inbred progenitor strains and now have over 65 generations of accumulated recombination, creating a highly variable genetic background on which to examine phenotype. This allows phenotypic differences between the progenitor strains (including C57BL/6J and DBA/2J) to be associated with genetic loci. We have identified a polymorphism in Comt1 that mediates the expression difference observed between strains and provides a possible resolution for the conflicting expression data between different probe sets. Additionally, we examine whether this Comt1 polymorphism is associated with a behavioral phenotype using the HS mouse stock. Animals DNAs from 44 different inbred strains of mice were purchased from the Jackson Laboratory (Bar Harbor, ME, USA; http://www.jax.org/ dnares/). Male C57BL/6J and DBA/2J animals used to prepare hippocampal extracts for COMT1 activity assays were bred in the SPF facility at the Institute of Psychiatry, London, from original stocks obtained from the Jackson Laboratory via Charles River UK. Male, HS (McClearn & Hofer 1999) mice, excluding albinos, were obtained from the Institute for Behavioral Genetics, University of Colorado at Boulder (Boulder, USA) and shipped to the UK in 8 batches (80-100 mice per batch) at the age of approximately 8 weeks. The average age of HS mice at the start of open field testing was 90.24 ± 2.92 days (mean ± SD), and all HS mice were from generations 64-72. DNA was prepared from spleens of male C57BL/6J, DBA/2J and HS mice. RNA was prepared from hippocampi of male C57BL/6J and DBA/2J mice. The hippocampus was chosen as the source for the mRNA for this study as this is a key area of the brain involved in behaviors such as learning and memory, anxiety and aggression (Fernandes et al. 2004). All animal works were licensed under the Animals (Scientific Procedures) Act 1986, reviewed by the ethical review panel of the Institute of Psychiatry and the Home Office inspectorate, and are in accordance with the European Communities Council Directive of 24 November 1986 (86/609/EEC). Tissue collection Animals were killed by cervical dislocation and were immediately dissected and tissues snap frozen and stored at −80 • C until use. Primers, sequencing and genotyping Primers shown in Fig. 1 were designed based on the RefSeq cDNA sequence NM 001111062, which is a composite of partial cDNA sequences of strain C57BL/6 origin, and were used to amplify a region of between 239 and 475 bp. 
PCR reactions contained 20 ng of template, 0.33 μM of each primer, 200 μM each nucleotide, 50 mM KCl, 0.1% Tween-20, 1.5 mM MgCl 2 , 35 mM Tris base and 15 mM Tris-HCl in a final volume of 30 μl, using a touchdown protocol with annealing temperature beginning at 55 • C and stepping down to 50 • C. PCR products were sequenced after cleanup with an Exo-SAP kit (USB Corporation, Staufen, Germany). The B2 insertion was identified using RepeatMasker (http://www.repeatmasker.org). Sequence is a composite of RefSeq cDNA sequence NM 001111062 and sequence of the B2 insertion found in this study. Sequencing of additional strains shows identical sequence in those with the insertion, except for some differences in the length of the poly-A run at the end of the B2 element. Primers used for the sequencing and genotyping are displayed in color, as is the position of the B2 SINE insertion in the Comt1 B2i allele. Primer pair 5 -AGGCTGCATTGAGGC-3 (Common L primer) and 5 -GAAACTGAACATATCCAGATA-3 (Right primer) were used to assay a panel of inbred strain DNAs and 670 HS DNAs for Comt1 B2i by agarose gel electrophoresis. To ensure correct calling of heterozygote genotype we used an additional PCR reaction with primers 5 -AGGCTGCATTGAGGC-3 (Common L primer) and 5 -TCCGTAACAAGATCTGATGCCCTC-3 (MidB2R, in B2 sequence) to assay the presence or absence of Comt1 B2i . Gene expression Profiles of the hippocampi of 265 of the HS male mice were determined using the Affymetrix GeneChip ® Mouse Exon 1.0 ST Array (Santa Clara, CA, USA) in experiments which will be fully described elsewhere. Briefly, RNA was extracted using TRIzol reagent (Invitrogen, Paisley, UK) and labeled and hybridized using the Affymetrix WT synthesis and labeling system according to the manufacturer's recommended protocols. The resulting data were normalized and summarized using the RMA sketch method of the Affymetrix power tools and further quality controlled and analyzed using the R packages 'Affy' (Gautier et al. 2004) and 'Exonmap' (Okoniewski et al. 2007). RACE Length of the 3 UTR of Comt1 mRNA in C57BL/6J mice was determined using the Invitrogen 3 RACE system for rapid amplification of cDNA ends. Briefly, RNA was extracted using the Qiagen All-Prep DNA/RNA mini kit (Crawley, UK). Using the 3 RACE kit, cDNA was synthesized according to the manufacturer's instructions. A 'one-sided' PCR was performed using 5 -AGGCTGCATTGAGGC-3 (Common L primer) and a universal amplification primer that binds to the 3 end of the cDNA. Length of the resulting PCR product was determined using agarose gel electrophoresis. All visible bands were extracted using Qiagen QIAquick gel extraction kit (Crawley, UK) and sequenced. Enzyme assay COMT1 enzyme activity was assayed in crude protein extracts as previously described (Tunbridge et al. 2007). Briefly, tissue was thawed on ice and homogenized in 25 mM Tris pH 7.4, 50% v/v glycerol and protease inhibitors (Complete Protease Inhibitor Cocktail Tablets, Roche, Burgess Hill, UK). Fifty micrograms of total protein was incubated at 37 • C for 30 min in 100 mM Tris pH 7.4, 5 mM MgCl 2 , 100 mM catechol and 2 mM dithiothreitol, supplemented with 3.6 μCi per reaction of 3 H-S-adenosylmethionine (specific activity: 5-15 Ci/mmol; Perkin Elmer, Waltham, MA, USA). Reactions were stopped with 1 volume of 1 N HCl and tritiated methylated catechol was extracted by mixing thoroughly with Monoflow 1 scintillation fluid (National Diagnostics, Atlanta, GA, USA). Samples were measured using a liquid scintillation counter. 
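The specific-activity calculation used in this assay (background-subtracted counts per minute per 50 µg protein per 30 min, compared between strains) can be illustrated with a minimal sketch; the count values below are hypothetical placeholders, not data from the study.

```python
# Sketch of the COMT specific-activity calculation described above.
# The cpm values are hypothetical placeholders, not data from the study.
import numpy as np
from scipy import stats

background_cpm = 250.0                                 # buffer-only control (hypothetical)

# Four replicate scintillation counts per animal (hypothetical), expressed as
# cpm incorporated in 30 min per 50 ug total protein.
b6_replicates = np.array([[5200, 5350, 5100, 5280],    # C57BL/6J (Comt1 B2i) animals
                          [5400, 5500, 5450, 5380],
                          [5150, 5230, 5190, 5300]])
d2_replicates = np.array([[3100, 3050, 3200, 3150],    # DBA/2J (Comt1 +) animals
                          [2950, 3020, 3080, 3010],
                          [3250, 3180, 3220, 3300]])

# One data point per animal: mean of the four replicates, background subtracted.
b6_activity = b6_replicates.mean(axis=1) - background_cpm
d2_activity = d2_replicates.mean(axis=1) - background_cpm

fold = b6_activity.mean() / d2_activity.mean()
t, p = stats.ttest_ind(b6_activity, d2_activity)       # two-sample t test between strains
print(f"fold difference B6/D2 = {fold:.2f}, t = {t:.2f}, P = {p:.3g}")
```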
Each data point is the mean of four replicates, the individual results of which were highly correlated (R values between 0.955 and 0.992). Specific activity is expressed as counts per minute incorporated in 30 min/50 µg protein, with background values (obtained by assaying protein extraction buffer only) subtracted. Behavior A comprehensive behavioral battery was conducted on males from a number of inbred strains as well as BXD and HS mice (Galsworthy et al. 2002, 2005; Lad et al. 2007, 2009). This large-scale experiment included 670 HS mice. The battery, which is described in detail in Lad et al. (2009), included eight behavioral tests: activity monitoring in the home-cage (1st and 23rd hour after transfer to a fresh cage); open field; novel object exploration; elevated plus-maze; light/dark box; puzzle box; Morris water maze; tail suspension test. Statistical analysis From our battery of eight behavior tests (Lad et al. 2009), we selected 54 measures for association testing with Comt1 B2i . For details of the behavioral measures used for association and their selection see Lad et al. (2009). Association was tested by one-way analysis of variance (ANOVA) with genotype as a categorical variable, implemented using the lm() function of R. Multiple testing was addressed using the false discovery rate approach (Storey & Tibshirani 2003). Novel Comt1 allele No confirmed SNPs were identified in a survey of the Comt1 exons in 12 strains (A/J, AKR/J, BALB/cJ, C3H/HeJ, C57BL/10J, C57BL/6J, DBA/2J, ISCamEi, ISCamRK, RIIIDm-Mob, RIIIS/J and PWD/Ph). A length polymorphism of nearly 200 bp was identified in the 3′ UTR of Comt1 between C57BL/6J and DBA/2J, which possess the long and the short alleles, respectively. Sequence analysis of the PCR product from the above 12 inbred strains and two additional strains (NOD/LtJ and NON/LtJ) indicated that the length difference was because of a single insertion, consistent with the C57BL/6J sequence (RefSeq NM_001111062), found to be present in seven of the strains sequenced. RepeatMasker showed that the length difference is because of an insertion of a B2 SINE of family 1t (Jurka et al. 2005; Smit 2005; Smit et al. 1996-2004). The consensus sequence (Smit 2005) lacks 2 G residues present at the 5′ end of our sequence and there are further probable discrepancies in the AT-rich 3′ portion. Excluding these regions, BLAT searching of the core 153 bp of the insertion (which deviates from the B2 Mm1t consensus at five positions) identifies a single perfect match at positions 136082980-136083132 on Mm 5, which would be a candidate parent for this insertion. By inspection the likely insertion site would be ATTT/A and the target site duplication would consist of a run of 15 As. We have named this allele Comt1 B2i . Therefore, the presence of Comt1 B2i was surveyed in the strains listed in Fig. 2. Gene expression Hippocampus consortium data (Overall et al. 2009; http://www.genenetwork.org), produced using the Affymetrix MOE430v2 array, shows very strong cis expression QTL (eQTL) signals for probe sets 1449183_at (LOD 7.3) and 1418701_at (LOD 30), but with opposite directions of effect. Comt1 B2i is associated with increased expression for probe set 1449183_at, whereas the Comt1 + allele is associated with increased expression for probe set 1418701_at. A replication in the outbred HS animals, using a different array platform and population, showed an additive effect of genotype on expression.
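The association testing described above (a one-way ANOVA per behavioral measure with genotype as a categorical factor, followed by false-discovery-rate correction) can be sketched as follows. The data frame, column names and measure list are hypothetical, and Benjamini-Hochberg correction is used here as a stand-in for the Storey & Tibshirani q-value approach named in the text.

```python
# Sketch of genotype-phenotype association testing: one-way ANOVA per measure + FDR.
# The data frame and column names are hypothetical; BH correction stands in for
# the Storey & Tibshirani q-value method used in the study.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multitest import multipletests

def associate(df, measures, genotype_col="Comt1_genotype"):
    """Return per-measure ANOVA P values and FDR-adjusted q values."""
    pvals = []
    for m in measures:
        # split the measure into one array per genotype group (+/+, +/B2i, B2i/B2i)
        groups = [g[m].dropna().values for _, g in df.groupby(genotype_col)]
        _, p = f_oneway(*groups)
        pvals.append(p)
    _, qvals, _, _ = multipletests(pvals, method="fdr_bh")
    return pd.DataFrame({"measure": measures, "p": pvals, "q": qvals})

# Example usage with a hypothetical table of behavioral measures (one row per mouse):
# hs = pd.read_csv("hs_behaviour.csv")
# results = associate(hs, measures=[c for c in hs.columns if c != "Comt1_genotype"])
# print(results.sort_values("p").head())
```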
Genotype group means of standardized array signal intensities across the Comt1 gene are shown in Fig. 3. RACE PCR of the 3′ UTR of Comt1 mRNA in C57BL/6J produced a band of approximate length 300 bp. Preliminary sequencing of the product indicates that the 3′ UTR includes the B2 SINE. Furthermore, the 3′ UTR ends at the 3′ end of the B2 insertion. Outbred behavior and genotyping We surveyed Comt1 B2i and behavior of 670 male mice from the outbred HS population (McClearn & Hofer 1999). The progenitors of this population are eight inbred strains (A, C57BL/6, BALB/c, AKR, DBA, C3H, Is/Bi and RIII), and all these are represented in Fig. 2 except for the last two, of which the probable nearest surviving relative is shown. (Figure 4 caption: Means ± standard error of the mean for specific Comt1 enzyme activity. Enzyme activity is around 1.7-fold higher in C57BL/6J (Comt1 B2i) compared to DBA/2J (Comt1 +) (t = 3.43, df = 14, P = 8 × 10−3).) Four of the eight progenitor strains are thus known to be Comt1 B2i and therefore substantial numbers of each allele should be present in the population. We found 53 homozygotes for Comt1 +, 333 Comt1 +/Comt1 B2i heterozygotes and 284 Comt1 B2i homozygotes, giving an allele frequency of 0.672 for Comt1 B2i in the whole population. The novel object exploration test was the only behavioral measure to show a significant association with Comt1 B2i (Duration: F2,572 = 8.7, P = 1 × 10−4; Frequency: F2,572 = 6.1, P = 2 × 10−3, Fig. 5). Both the duration and the frequency of exploration of the novel object were greater in Comt1 + compared to the Comt1 +/Comt1 B2i and Comt1 B2i genotype groups. Comt1 B2i is therefore dominant, at least with respect to the behavioral phenotype. There were no differences in anxiety measures of the open field test, performed on the previous day in the same arena, or in any of the other behavioral tasks in our battery. Discussion We have found that a B2 SINE in Comt1 (Comt1 B2i), present in some inbred strains but not others, is the likely cause of the expression difference between these strains, being the only confirmed variation within the gene between C57BL/6J and DBA/2J. Comt1 B2i is associated with an increase in specific enzyme activity, as well as changes in behavior related to exploration. These findings suggest that a modest difference in Comt1 expression levels can have a significant behavioral phenotype, in line with previous findings (Fernandes et al. 2004). Sequencing indicates that the insertion site and the B2 SINE insert sequence are identical across Comt1 B2i strains, except for possible length variation in the flanking poly-A runs, suggesting an identical origin of Comt1 B2i. Comparison of the strain distribution of Comt1 B2i with what is known of the breeding history of the inbred strains (Beck et al. 2000) makes it immediately evident that although C57BL/6J is Comt1 B2i, several very closely related strains (C57L, C57BR) are not. Therefore, Comt1 must have been polymorphic in the cross between Miss Lathrop's female 57 and male 52 that gave rise to these strains (Beck et al. 2000). Comt1 B2i is not present in our sampling of wild-derived Mus domesticus, M. musculus, M. molossinus and M. castaneus strains. Those classical inbred strains that do contain the insertion are almost all known to have ancestry from the Castle and Little stocks, suggesting that Comt1 B2i actually arose in the Lathrop stock around the start of the 20th century.
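As a quick arithmetic check of the genotype counts reported above, the Comt1 B2i allele frequency (and, for comparison, the Hardy-Weinberg expected genotype counts) can be computed directly; this is only a worked example of the reported numbers, not an additional analysis from the study.

```python
# Worked check of the reported genotype counts (53 +/+, 333 +/B2i, 284 B2i/B2i).
n_pp, n_het, n_bb = 53, 333, 284
n = n_pp + n_het + n_bb                               # 670 HS mice
p_b2i = (n_het + 2 * n_bb) / (2 * n)                  # allele frequency of Comt1 B2i
print(f"Comt1 B2i allele frequency = {p_b2i:.3f}")    # ~0.672, as reported

# Hardy-Weinberg expected genotype counts at this allele frequency, for comparison
q = 1 - p_b2i
expected = (q * q * n, 2 * p_b2i * q * n, p_b2i * p_b2i * n)
print("expected (+/+, +/B2i, B2i/B2i):", [round(x, 1) for x in expected])
```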
The exception is the pair of strains NOD/LtJ and NON/LtJ, which are of 'Swiss' origin, without a known connection to Castle's stocks. The frequency of insertional mutagenesis by SINEs in mouse is unknown, but at least one similar case has been reported in the literature: Alas1 contains a B2 SINE insertion in DBA/2J but not C57BL/6J (Chernova et al. 2008). Research into the most common human SINE, Alu, hypothesizes a retrotransposition rate of 1 new insertion per 20 births (Cordaux & Batzer 2009). Additionally, transposition is much more active in the mouse genome than in the human genome, with transposons found to be responsible for about 10% of spontaneous mutations (Guenet 2005). Using Affymetrix exon array data, we showed in the hippocampi of the HS that Comt1 B2i is associated with a high gene expression signal for probes 5′ of the insertion site but low expression for probes located 3′ of it. The opposite relationship is seen in Comt1 + mice, while Comt1 +/Comt1 B2i mice have intermediate expression at these loci. Microarray analysis using the Affymetrix MG U74Av2 microarray with a single Comt1 probe set (98535_at) found variation between eight strains in the hippocampus (Fernandes et al. 2004). C57BL/6J had the highest Comt1 expression and DBA/2J the lowest, and this difference correlated with an aggressive phenotype. Subsequent microarray studies showed similar strain differences in Comt1 expression in the nucleus accumbens (Grice et al. 2007) and striatum (Korostynski et al. 2006). In the latter study it was noted that a probe set located further 3′ in the gene showed a strain difference in the opposite direction. This relationship is also clearly visible in hippocampus across the BXD recombinant inbred panel (Overall et al. 2009; http://www.webQTL.org), which shows a strong (Mendelian) cis-genetic effect for probe set 1418701_at (DBA/2J allele increasing expression) and probe set 1449183_at (C57BL/6J allele increasing expression). One possible explanation for the expression difference is that the B2 insertion may lead to a new polyadenylation site resulting in a modified 3′ UTR. A 3′ RACE conducted on C57BL/6J mRNA shows a transcript ending at the 3′ end of the B2 insertion, 460 bp shorter than the reference sequence (NM_001111062). Polyadenylation of the transcript occurs at the polyadenylation signal (AATAAA) contained within the B2 sequence. No evidence was found for a longer 3′ UTR in strains bearing the insertion, suggesting that the shorter 3′ UTR is the dominant transcript. In animals with the insertion, the shorter 3′ end could result in no signal from the furthest 3′ probe sets, accounting for the low signal seen in Comt1 B2i mice. Based on our current data and that of previous studies it is clear that there is a difference in transcript structure and abundance between inbred strains with Comt1 + compared to Comt1 B2i. We have shown that this polymorphism is associated with functional consequences (Fig. 4), as hippocampal COMT1 enzyme activity is substantially greater in C57BL/6J (Comt1 B2i) than in DBA/2J (Comt1 +). This may be mediated to some extent by the presence of a shorter 3′ UTR in Comt1 B2i. This difference in enzyme activity is particularly worth noting in comparison to the most studied polymorphism in humans, the Val/Met, which produces a 1.4-fold difference in protein activity with significant behavioral differences.
As the difference we found is at a magnitude of around 1.7-fold, we would expect to see similar levels of behavioral differences in an outbred population of mice. Given the key role played by COMT1 in dopamine metabolism, we tested whether Comt1 B2i might have a behavioral effect. We used HS outbred mice to perform a stringent association test, in which the effect of Comt1 B2i locus is examined against a large panel of different highly heterozygous genetic backgrounds, generated by over 65 generations of accumulated recombination from eight inbred progenitor strains (Boulder Heterogeneous Stock, McClearn & Hofer 1999). We investigated the behavioral phenotypes of the HS mice using a test battery (Lad et al. 2009) which includes tests of baseline activity, and measures relating to anxiety, depression and cognition. The majority of these measures did not show an effect of Comt1 B2i ; however, there was an association with novel object exploration. Comt1 B2i/B2i and Comt1 +/B2i mice spent less time exploring, and made fewer visits to a novel object than do Comt1 +/+ mice. Comt1 B2i does not seem to alter anxiety levels as no differences were seen in any of the classical anxiety tasks used in the battery (open field, elevated plus-maze or light/dark box). However, we cannot definitively exclude an anxiety effect as the novel object task was performed in a potentially aversive environment, given that mice had only one previous exposure to the open field and may not have fully habituated to the novel arena. Further experiments testing novel object exploration in a familiar (home-cage) environment could be used to address this issue. Behavioral differences were to be expected in the light of previous research, where several transgenic Comt1 mice have been engineered. A knockout of the gene produces a remarkably mild phenotype in terms of basic behavior (Babovic et al. 2007;Gogos et al. 1998), although associations with cognitive phenotypes are more robust (Papaleo et al. 2008). More recently, strains overexpressing Comt1 have been produced by bacterial artificial chromosome (B AC) transgenesis (Stark et al. 2009) and transgenic overexpression of the higher activity Val allele of the heavily studied Val 158 Met human polymorphism (Papaleo et al. 2008). In the Stark et al. study, PPI was the main phenotype of interest and no effect of Comt1 overexpression was noted, although these findings are inconsistent with those of Papaleo et al. (2008) who showed a reduction in PPI in COMT-Val-tg overexpressors and an increase in acoustic startle reactivity, but no change in PPI in Comt1 knockout mice, compared with their respective wild-types. Additionally, open field was studied and a small difference in total distance traveled noted. Papaleo et al. also found impairments in several cognitive measures, including attentional set shifting and working memory in COMT-Val-tg overexpressors, compared with wild-type controls, although these mice were created on a mixed genetic background. Given the results obtained in the Comt1 and COMT-Val-tg transgenic mice, it is perhaps surprising that we found associations only with object exploration. Impaired emotional reactivity was observed in Comt1 knockout mice (Gogos et al. 1998) but this effect was only seen in female mice and the present study used males. 
However, the behavioral effect we have observed is no doubt just part of the phenotype attributable to Comt1 B2i , but without more specific phenotyping of higher cognitive functions and behavioral analysis in both male and female mice, our results should be considered preliminary. Given our previous observation of a correlation across inbred strains between Comt1 expression with intermale aggression, it would be of interest to test these mice in the resident-intruder paradigm. Furthermore, the results of Papaleo et al. (2008), and the associations between cognitive function and COMT in humans, suggest that it would be worthwhile to investigate the phenotype of these mice in detailed cognitive tasks. However, given that cognitive tests are generally labour-intensive, and the high-stress nature of the resident-intruder paradigm, studies of this type require specific testing, rather than as part of a test battery as used here. Our data taken together provide evidence that the Comt1 B2i is itself linked to the changes in object exploration. The association of Comt1 B2i with decreased exploration holds across a large panel of outbred genetic backgrounds. If the responsible allele were at another locus, even one quite closely linked to Comt1, it is likely that recombination would have occurred between the causative locus and the Comt1 gene, and that the association with Comt1 B2i would therefore have been lost. Furthermore, our survey of the exons of the Comt1 gene showed no other confirmed polymorphisms in a set of strains that represent as far as possible the progenitors of the Comt1 locus, which is in an extensive region of identity-by-descent across the classical inbred strains (Yang et al. 2007). Between C57BL/6J and DBA/2J, there is no known polymorphism within the Comt1 gene. The molecular mechanism for the altered COMT1 enzyme activity remains unclear. Given the complexity of the microarray results, it is unlikely to be because of a simple difference in mRNA abundance. The longer insertion-bearing transcript is polyadenylated at the 3 end of the insertion, resulting in a shorter 3 UTR. This may lead to alternate processing by miRNAs, or may result in an altered ratio of COMT1 protein isoforms. Other instances of insertional mutagenesis by retrotransposition have been associated with physical phenotypes (Duhl et al. 1994;Ho et al. 2004), and the mechanism for these events is not always clear. Further dissection of the effects of this allele, both in terms of behavioral phenotype and molecular biology, will be informative about the normal function of the Comt1 gene.
6,010.2
2010-11-01T00:00:00.000
[ "Biology" ]
Expression of recombinant human Apolipoprotein A-IMilano in Nicotiana tabacum Apolipoprotein A-IMilano (Apo A-IMilano) is a naturally occurring mutant of Apolipoprotein A-I. It is currently the only protein that can clear arterial wall thrombus deposits and promptly alleviate acute myocardial ischemia. Apo A-IMilano is considered the most promising therapeutic protein for treating atherosclerotic diseases without obvious toxic or side effects. However, the current biopharmaceutical platforms are not efficient for producing Apo A-IMilano. The objective of this research was to express Apo A-IMilano using the genetic transformation ability of N. tabacum. The coding sequence of Apo A-IMilano was cloned into the plant binary expression vector pCHF3 with a Flag/His6/GFP tag. The constructed plasmid was transformed into N. tabacum by a modified agrobacterium-mediated method, and transformants were selected under antibiotic stress. PCR, RT-qPCR, western blot and co-localization analyses were used to further verify the resistant N. tabacum. Stable and transient expression systems in N. tabacum were established, and pure Apo A-IMilano was obtained through protein A/G agarose. The results showed that Apo A-IMilano was expressed in N. tabacum with a yield of 0.05 mg/g leaf weight and a purity of 90.58% ± 1.65. The obtained Apo A-IMilano protein was subjected to amino acid sequencing. Compared with the theoretical sequence of Apo A-IMilano, the amino acid coverage was 86%, and cysteine was found to replace arginine at position 173, which indicates that Apo A-IMilano, a mutant of Apo A-I, is accurately expressed in N. tabacum. The purified Apo A-IMilano protein had lipid-binding activity. The established genetically modified N. tabacum will provide a cost-effective system for the production of Apo A-IMilano. Given the rapid propagation of N. tabacum, this system offers the possibility of large-scale production and accelerated clinical translation of Apo A-IMilano. Graphical Abstract Introduction Atherosclerosis can result in coronary and peripheral artery diseases, such as stroke or heart attack. Epidemiological studies have demonstrated an inverse correlation between the levels of high-density lipoprotein (HDL) cholesterol, the so-called "good cholesterol," and the risk of atherosclerosis (Kontush 2020; Chen et al. 2020). As the principal component of HDL, Apolipoprotein A-I (Apo A-I) is believed to play an important role in the prevention of atherosclerosis via the process of reverse cholesterol transport (RCT) and its anti-inflammatory function (Gaddis et al. 2018; Jackson et al. 2021; Barrett et al. 2019). Normally, low levels of HDL cholesterol are associated with a high risk of atherosclerosis. However, researchers noticed that some inhabitants of an Italian town had low levels of HDL cholesterol but were not affected by atherosclerotic diseases (Weisgraber et al. 1980). Further investigation showed that these subjects expressed a variant of Apo A-I, designated Apo A-I Milano , with its arginine at position 173 replaced with cysteine (Weisgraber et al. 1983). Since the discovery of the variant Apo A-I Milano , researchers have worked to determine whether Apo A-I Milano possesses atheroprotective effects superior to those of wild-type Apo A-I and could thus be developed into a therapeutic. Kaul et al. (2004) verified that treatment with an Apo A-I Milano /phospholipid complex could rapidly improve endothelial dysfunction in hypercholesterolemic Apo E-null mice.
Studies on animal injury models (Ibanez et al. 2008; Kaul et al. 2003; Marchesi et al. 2008; Parolini et al. 2008; Speidl et al. 2010) and atherosclerosis patients (Nissen et al. 2003) indicated that an infusion of ETC-216 (the complex of recombinant Apo A-I Milano with 1-palmitoyl-2-oleoyl phosphatidylcholine) or its mimetic resulted in plaque regression and reduction of reperfusion injury. Expression of Apo A-I Milano in Apo B/human Apo A-II (h-B/A-II) transgenic mice showed atheroprotective features similar to those seen with expression of the Apo A-I gene (L. Wang et al. 2006). However, gene therapy with macrophage-specific expression of Apo A-I Milano exerted a superior effect compared with Apo A-I in the treatment of atherosclerosis in Apo A-I/Apo E double-knockout mice after bone marrow transplantation (L. Wang et al. 2006). In another study, an infusion of HDL Milano twice with a 4-day interval showed better anti-inflammatory and plaque-stabilizing properties than wild-type HDL in the treatment of atherosclerotic New Zealand White rabbits (Ibanez et al. 2012). Apo A-I Milano also showed an anti-oxidant activity that distinguished it from wild-type Apo A-I (Bielicki and Oda 2002). Recently, a report demonstrated that intravenous delivery of human recombinant Apo A-I Milano to APP23-transgenic mice reduced cerebral β-amyloid deposition, indicating a potential ability to ease Alzheimer's disease (Fernandez-de Retana et al. 2017). Bioactivity experiments and clinical trials require large amounts of Apo A-I Milano . Furthermore, considering the large population with atherosclerosis, future application of Apo A-I Milano will also require a sufficient supply. Thus, it is essential to develop a cost- and capacity-efficient manufacturing platform. Recombinant expression of Apo A-I Milano has been achieved in E. coli (Li et al. 2005; Persson et al. 1998; Zhuang et al. 2006) and yeast (Zhang et al. 2008). However, co-purification of recombinant Apo A-I Milano with host cell proteins is a problem when it is expressed in E. coli. Purification methods have been optimized to improve the production process (Hunter et al. 2007, 2008a, 2008b; Nord 2000). Various expression strategies offer growing opportunities for the production of pharmaceutical proteins and enzymes of commercial interest in both prokaryotic and eukaryotic species. Among the different expression platforms involving different organisms, plants have long been regarded as an attractive platform for the production of a wide range of recombinant proteins, including pharmaceutical proteins such as monoclonal antibodies, vaccines, and enzymes. The first recombinant plant-derived pharmaceutical protein was human serum albumin expressed in transgenic N. tabacum and potato plants in 1990 (Sijmons et al. 1990). Compared with traditional approaches to the production of pharmaceuticals, plant expression systems (molecular farming) show clear advantages, including low costs, particularly when considering large-scale production. Microbial and animal cell cultures require specific equipment and an electric energy supply, whereas plants can synthesize any protein and metabolite from CO2 and inorganic chemicals using solar energy. In addition, the risk of contamination by viruses or pathogens is limited. Moreover, protein purification can be eliminated when suitable plant tissue containing the recombinant protein is used as food, such as lettuce leaves or tomato fruit.
Importantly, plants also displayed capable of conducting complex post-translational modifications required for recombinant pharmaceutical proteins, including N-glycosylation, which is substantially similar to that found in mammalian cells. Many plant species have been tested for their ability to produce recombinant pharmaceutical proteins, including Nicotiana species, safflower, tomato, potato, soybean, alfalfa, spinach, A. thaliana, corn, and rice. N. tabacum is one of the ideal expression systems in plant bioreactor based on several practical advantages over other crops. It produces significant leaf biomass (up to 100 t of leaf biomass per hectare), has high soluble protein content and is a non-food crop. In addition, various methods of protein expression could be carried out in N. tabacum, including transient or stable expression via the agrobacterium. So far, it has been reported that stable nuclear transformation in Nicotiana tabacum has commonly been used as an excellent production platform of some therapeutic antibodies (Sack et al. 2015;Buyel et al. 2017). More importantly, the stable transformation of transgenic N. tabacum requires neither costly fermenters, nor vacuum infiltration equipment, nor sterile conditions. Although N. tabacum taken as an ideal plant for the production of medicinal proteins has many advantages, yet several challenges need to be addressed to achieve comparable efficiency as the mammalian system. The relatively low expression frequency was one of the main challenges in promoting N. tabacum is a significant recombinant protein production system. So far, there is still a lack of N. tabacum culture system for the expression of Apo A-I Milano . In order to explore the method of producing Apo A-I Milano in the plant reactor, in this study, the model plant N. tabacum tissue culture technology and Agrobacterium mediated genetic transformation were used to obtain transformed plants. The surviving transformed plants were verified by PCR and RT-PCR technology. The obtained positive plants were harvested seeds and then continued to be planted, propagated and identified to obtain a stable genetic N. tabacum line. Meanwhile, the expression, subcellular localization and purification of Apo A-I Milano protein in N. tabacum were completed by transient transformation, fluorescence labeling and chromatography. This study was the first to report the transient and stable expression of Apo A-I Milano protein in N. tabacum. Construction of plant expression Vector pCHF3-Flag-Apo A-I Milano , pCHF3-Apo A-I Milano -GFP, pCHF3-GFP, and pCHF3-His6tag-GFP-TEV-Apo A-I Milano The DNA of the Apo A-I Milano gene was synthesized by GENEWIZ Company (GENEWIZ, China). To facilitate the detection of the target protein by western blot, a 3 × Flag tag was added to the N-terminus of the recombinant protein. Phanta Max Master Mix PCR kit (Cat. No. P525, Vazyme, China) was used and PCR was performed using gene-specific primers (forward 5'-CGG GGG ACG AGC TCG GTA CCA TGG TTA ACG ACT ACA AAG ACG ATG ACG ACA AGG ACT ACA AAG ACG ATG ACG ACA AGG ACT ACA AAG ACG ATG ACG ACA AGG ATG AGC CTC CTC AATC-3'and reverse 5'-GCA GGT CGA CTC TAG ATC ATT GAG TAT TAA GCT TCT T-3') by the following protocol: 98 °C for 2 min, followed by 35 cycles of amplification (94 °C for 40 s, 58 °C for 30 s, 72 °C for 30 s, with the final elongation step at 72 °C for 5 min.). 
The PCR product was purified with a Gel Extraction Kit (Transgenes, China), then cloned into the pCHF3 vector (kindly provided by Tobacco Research Institute of Chinese Academy of Agricultural Sciences) using ClonExpress ® Ultra One Step Cloning Kit (Vazyme, China). The empty pCHF3 was digested with KpnI and XbaI to obtain pCHF3-Flag-Apo A-I Milano . The recombinant constructs were transferred into E. coli DH5α competent cells. Grown colonies were detected by the PCR method using the specific forward and reverse primers (forward 5'-GCA AGT GGA TTG ATG TGA TAT-3' and reverse 5'-TAA GCT TCT TAG TAT ATT CTTC-3'). Then, the clone was sequenced to confirm the correct sequence, which is a 934 bp long fusion gene of vector, 3xFlag and Apo A-I Milano . The gene Apo A-I Milano was driven by the control of the strong cauliflower mosaic virus (CaMV) 35S promoter in pCHF3. Then, the construction was transformed into Agrobacterium tumefaciens (A.tumefaciens) strain GV3101 (Biomed, China) using freeze-thaw method (Fig. 1). To further detect the distribution and localization of the target protein in tissues and cells, we also constructed a plasmid expressing pCHF3-Apo A-I Milano -GFP fusion protein and took pCHF3-GFP as the control. Simultaneously, pCHF3-his6tag-GFP-TEV-Apo A-I Milano plasmid was also constructed for the purification of the target protein Apo A-I Milano and its amino acid sequence determination and analysis. The construction method was the same as that of pCHF3-Flag-Apo A-I Milano . The recombinant plasmid was cloned and identified by GENEWIZ Company (Fig. 1). Preparation of agrobacterium strains harboring pCHF3-Flag-Apo A-I Milano for infiltration To prepare the appropriate scale A. tumefaciens, GV3101 harboring pCHF3-Flag-Apo A-I Milano was cultured in 50 ml of YEB medium supplemented with 100 µg/ml of spectinomycin and 50 µg/ml of rifampicin. Then, the cultures were incubated at 28 ℃ in 230 rpm constant shaking condition overnight. To prepare infiltration buffer, the A. tumefaciens culture mentioned above were harvested by centrifugation at 6000 rpm for 5 min at room temperature, then, prepared infiltration buffer (10 mM MES, 150 μM AS, 10 mM MgCl 2 ) dissolved the suspension and the OD 600 was adjusted to 0.8 to 1.0. Then, the culture was incubated at room temperature without any agitation for at least 3 h before infiltration. Preparation of A. tumefaciens GV3101 for pCHF3-Apo A-I Milano -GFP and pCHF3-GFP was as same with pCHF3-Flag-Apo A-I Milano . The growth conditions of N. tabacum and preparation of N. tabacum leaves disks for stable transformation Seeds of N. tabacum were obtained from China National GeneBank (ID: CNSebb2006170), Seeds of Hong Hua Da Jin Yuan (HD) were obtained from TRI of the Chinese Academy of Agricultural Sciences (ID: HD), originally donated by the Chinese Academy of Agricultural Sciences for collection of seeds. Seeds were surface-sterilized using 75% alcohol for 1 min 30 s and 10% sodium hypochlorite for 15 min followed by washing with autoclaved distilled water 3 times. The seeds were then grown on Murashige and Skoog (MS) medium. The pots were placed in a growth chamber under controlled conditions of 25-30 ℃with 16 h light/8 h dark photoperiod. 
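The OD600 adjustment performed when resuspending the Agrobacterium pellet in infiltration buffer amounts to a simple dilution calculation; the sketch below is a generic helper with hypothetical numbers, not a step taken from the study's protocol.

```python
# Generic helper for adjusting a resuspended Agrobacterium culture to a target OD600.
# Values are hypothetical placeholders; the study adjusted suspensions to OD600 0.8-1.0.
def resuspension_volume(culture_volume_ml, culture_od600, target_od600):
    """Volume of infiltration buffer (ml) in which to resuspend the pelleted cells
    so that the suspension reaches the target OD600 (assumes OD scales linearly)."""
    return culture_volume_ml * culture_od600 / target_od600

# e.g. 50 ml of culture at OD600 1.8, pelleted and resuspended to OD600 0.9:
print(f"{resuspension_volume(50, 1.8, 0.9):.1f} ml buffer")   # 100.0 ml
```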
All plant materials used in this experimental study abide by the national safety implementation measures and management regulations in the process of planting, transformation, sampling and testing, these regulations include "Safety Administration Implementation Regulation on Agricultural Biological Genetic Engineering" and "Tobacco and Tobacco Products-Detecting Method of Genetically Modified Organism Contents (GB/T 24,310-2009)". The transformation of N. tabacum was performed by co-cultivation as described previously (Tang et al. 2005). The explants were subcultured in different mediums for shoot induction and root induction. Briefly, the leaf discs of 1 cm in diameter were prepared from the cultivation seedlings and incubated for 10 min in A. tumefacien solution (OD 600 = 0.6-0.8). The leaf discs were then blotted onto filter paper to remove excess bacterial suspension. The infected leaves were plated on the co-cultivation medium (MS with 1 mg/L 6-BA, 0.1 mg/L IAA) with the veins facing up, and cultured in the dark at 25 °C for 3 days. Then, leaf discs were placed upside down on S1 medium (MS with 1 mg/L 6-BA, 0.1 mg/L IAA, 500 mg/L cefotaxime sodium and 50 mg/L kanamycin) for 2-3 weeks. Then, the leaf discs were transferred to S2 medium (MS with 0.5 mg/L 6-BA, 0.05 mg/L IAA, 500 mg/L cefotaxime sodium, and 50 mg/L kanamycin) for 1-2 weeks. Then, seeding grow from the callus was then transferred onto S3 medium (MS with 0.5 mg/L 6-BA, 0.02 mg/L IAA, 500 mg/L cefotaxime sodium and 50 mg/L kanamycin) for 1-2 weeks and then transferred onto R medium (MS with 500 mg/L cefotaxime sodium and 50 mg/L kanamycin) until the root was detected. All tissue culture experiments were conducted in a growth chamber at 25 ℃ and a photoperiod of 16 h/8 h day/ night. The well-rooted transgenic plants were transferred to soil under a controlled photoperiod of 16 h light/8 h dark at 25 ℃. N. tabacum seeds were sterilized with 75% alcohol for 1 min and 30 s and 10% sodium hypochlorite for 15 min followed by washing with autoclaved distilled water 3 times. After disinfection, sow seeds into MS medium ( Fig. 2A), and moved the seedlings were to a tissue culture flask when they grew to about 0.5 cm; when the number of leaves reached about 8 leaves (Fig. 2B), green leaves were selected for agrobacterium infection. pCHF3-Flag-Apo A-I Milano was transferred into N. tabacum leaves using agrobacterium-mediated method and co-cultured for 3 days (Fig. 2C). The co-cultured leaves were then inoculated on S1 medium (Fig. 2D), and the generation of tufted buds could be seen about 2 weeks later (Fig. 2E). The tuft buds in S1 medium were transferred to S2 and S3 medium, and after about 2 weeks of culture, the tuft buds grew into young seedlings (Fig. 2F). When the resistant seedlings grew to about 3 cm, small seedlings were cut and transferred to R medium to induce rooting, and roots were generated and seedlings gradually formed about 2 weeks later (Fig. 2G). After the seedlings were grown, the seedlings were transplanted into the flowerpots in the greenhouse (Fig. 2H). Molecular characterization of stable transgenic N. tabacum Genomic DNA was extracted from the leaves of putative transgenic N. tabacum lines using a Plant DNA extraction Kit (CWbio Inc., China) following manufacture's protocol. PCR analyses were performed using primer sequences (forward 5'-GCA AGT GGA TTG ATG TGA TAT-3' and reverse 5'-TAA GCT TCT TAG TAT ATT CTTC-3) to identify positive transgenic plants. 
The cycling schedule of PCR was 95 ℃ for 10 min; 30 cycles of 95 ℃ for 1 min, 60 ℃ for 1 min, and 72 ℃ for 50 s, with a final extension at 72 ℃ for 10 min. PCR products were electrophoresed on 1.5% agarose gel, then stained with ethidium bromide and visualized under UV light. The amplified DNA fragment including vector, 3xFlag and Apo A-I Milano was 934 bp. For fluorescence quantitative analysis of transgenic N. tabacum, total RNA from N. t tabacum leaves was extracted after quick-freezing in liquid nitrogen. The cDNA was reversely transcribed (PrimeScriptTM RT reagent Kit with gDNA Eraser, Code No. RR047A, Takara, Japan) and analyzed by fluorescent quantitative PCR. Actin (NT-L25) was selected as the reference gene, and the primer sequence was as follows: Actin-F: GCT AAG GTT GCC AAG GCT GTC; Actin-R: TAA GGT ATT GAC TTT CTT TGT CTG A; The PCR primer sequence of the Apo A-I Milano target gene was F: AGC CTC CTC AAT CTC CTT GG; R: TTG CTT ACC AAG AGC AGA ACCT. Total RNA was extracted from stable transgenic and non-transgenic N. tabacum leaves tissues using the RNA extraction kit (Transgene, China) according to the manufacturer's instruction. First-strand cDNA was synthesized after genomic DNA was eliminated by DNase I. RT-qPCR was performed using the following first-strand cDNA as template using the procedure: 95 °C for 300 s; 40 cycles of 95 °C for 10S, 60 °C for 30S; 95 ℃ for 15 s, 60 ℃ for 60 s. For Western blotting of the stable transformation, proteins were extracted from the transformation and non-transformed leaves of N. tabacum using lysis buffer (Thermo, USA) and protease inhibitor. The samples were centrifuged at 14,000 × g for 15 min before loading on 4% stacking and 12% separating SDS-polyacrylamide gel (SDS-PAGE) after boiling at 100 °C for 10 min. The mouse monoclonal antibody against human Apo A-I (Santa Cruz, USA) was used as the primary antibody. The antibody was diluted to 300-fold and used to incubate the electrophoretically separated protein extract and the electroimprinted membrane. The goat anti mouse (Proteintech, USA) antibody diluted 5000-fold was used as the second antibody. Infiltration of N. tabacum using a syringe for transient transformation N. tabacum grown under constant light conditions for 4 weeks in a greenhouse was taken to infiltrate using syringe according to the described by Abd-Aziz, N. et al. (Abd-Aziz et al. 2020). Briefly, the infiltration buffer with OD 600 of 0.8-1.0 containing A. tumefaciens strain GV3101 harboring pCHF3-Flag-Apo A-I Milano synthesized by GENEWIZ Company were respective injected into the leaf with a syringe without a needle. Then, the plants were cultured in a 24 h dark condition. At least 3 days post-infiltration culture before the following treatment including analysis of mRNA and protein expression (Fig. 3). RT-PCR and western blotting of transient transformation Total RNA was extracted from transient transgenic and non-transgenic N. tabacum leaves tissues using the RNA extraction kit (Transgene, China) according to the manufacturer's instruction. First-strand cDNA was synthesized after genomic DNA was eliminated by DNase I. PCR kit (TB Green ® Premix Ex Taq ™ , Code No.: RR420A, Takara, Japan) was used and PCR was performed using the following first-strand cDNA as a template using the procedure: 95 °C for 300 s; 30 cycles of 95 °C for 15S, 45 °C for 30S, and 72 °C for 60S; and 72 °C for 300 s for a final extension. The amplified PCR products were analyzed by 1% TAE Agarose gel. 
(Forward 5'-ATG GTT AAC GAC TAC AAA GACG-3' and reverse 5'-TCA TTG AGT ATT AAG CTT CTT AGT -3'). For Western blotting, the steps were the same as those for stable transgenic N. tabacum. Seeds of Nicotiana benthamiana (N. benthamiana) were obtained from China National GeneBank (ID: CNS0440294), N. benthamiana plants grown in a growth chamber under controlled conditions of 25-30 ℃, 70% relative humidity with 16 h light/8 h dark photoperiod. All plant materials used in this experimental study abide by "Safety Administration Implementation Regulation on Agricultural Biological Genetic Engineering" and "Tobacco and Tobacco Products-Detecting Method GV3101 containing pCHF3-Apo A-I Milano -GFP, pCHF3-GFP, and ER marker plasmid were, respectively, grafted into 10 ml YEB liquid medium (yeast extract 4.0 g/L, mannitol 10.0 g/L, NaCl 0.1 g/L, MgSO 4 0.2 g/L, K 2 HPO 4 0.5 g/L, pH = 7.0) and cultured at 170 rpm for 1 h. Then, the supernatants were removed and collected by centrifugation at 4000 rpm for 4 min. The bacteria were re-suspended with 10 mM MgCl 2 (with 120 μM AS) suspension and OD 600 was adjusted to about 0.6. N. tabacum plants with good growth conditions were selected, and agrobacterium containing marker plasmids and agrobacterium containing pCHF3-Apo A-I Milano -GFP/ pCHF3-GFP vector plasmids were suspended together for the operation. The endoplasmic reticulum (ER) localization signal protein was Sper, its amino acid sequence was MKTNLFLFLFLIFSLLLSLSSAEF. The mixture was mixed in a ratio of 1:1, and injected from the lower epidermis of N. benthamiana leaves with a 1 ml syringe without the spear head and made notes. The injected N. benthamiana plants were cultured under low light for 2d, and the N. benthamiana leaves injected with labeled agrobacterium tumefaciens were made into glass slides, which were observed under a laser confocal microscope (Nikon, Japan) and photographed. The Sper excitation light was 561 nm and the emitting light was 580 nm. Chloroplast fluorescence signal excitation wavelength was 640 nm and the emission wavelength was 675 nm. Purification of expressed target proteins from transient transformation When the GV3101 Agrobacterium with the target gene had an OD 600 value of 0.8-1.0, let it stand for 3 h at room temperature. After the standstill was completed, the N. tabacum leaves in good condition were injected with a 1 ml needleless syringe. After the injection was completed, culture was in the greenhouse for 72 h for the sample. Put 40 fresh leaves (7 g) into liquid nitrogen and ground to powder, add a lysis buffer (Thermo, USA) with protease inhibitor to the powder on ice; then centrifuged for 15 min to take the supernatant, and added Flag antibody (Sigma-Aldrich, USA) to mix overnight at 4 °C. After that added protein A/G (Thermo, USA) to the supernatant, mixed for 3 h at 4 °C, the samples were centrifuged at 800 × g. Then collected protein A/G and washed them with 1 × PBS. After 3 times, the protein was eluted with Tris-HCl (PH = 7.4). Diluted a portion of the eluted protein by 10 times was used for the BCA protein concentration determination. Determination of protein purity by SDS-PAGE This experiment was conducted by protein purity determination SDS-PAGE method according to 《Guide to Protein purification》(Second Edition, Edited by Richard R. Burgess and Murray P. Deutscher. 2009. Elsevier Inc.). In brief, the purity of purified Apo A-I Milano and Flag fusion protein was detected by SDS-PAGE gel staining. The purified protein solution is subjected to SDS-PAGE. 
Followed by Coomassie brilliant blue staining and then decolorized to enter the automatic gel imaging system (Tanon-3500R, Shanghai Tanon Technology Co., Ltd., China) for exposure using white light source to obtain gel images. and the image is saved as a TIFF file. ImageJ (NIH) is used to quantify the grayscale of the purified protein in the gel image (Alonso Villela SM et al. 2020), and the ratio of the purified Flag-Apo A-I Milano to the total protein was obtained, which was calculated as the purity of the purified protein. The amino acid sequence of Apo A-I Milano in N. tabacum was analyzed by mass spectrometry The fusion protein produced according to step 2.5 was purified with a His protein purification kit (Thermo, USA), and then the amino acid sequence of the fusion protein was analyzed according to the following steps: the protein solution was reduced with 2 µl 0.5 M Tris (2-carboxyethyl) phosphine (TCEP) (Sigma, USA) at 37 °C for 60 min and alkylated with 4 µl 1 M iodoacetamide (IAM) at room temperature for 40 min in darkness. Five folds volumes of cold acetone (Sinopharm, China) were added to precipitate protein at − 20 °C overnight. After centrifugation at 12000 g at 4 °C for 20 min, the pellet was washed twice by 1 ml pre-chilled 90% acetone aqueous solution. Then, the pellet was re-suspended with 100 µl 10 mM Triethylammonium bicarbonate (TEAB) (Sigma, USA) buffer. Trypsin (Promega, USA) was added at 1:50 trypsin-to-protein mass ratio and incubated at 37 °C overnight. The peptide mixture was desalted by C18 ZipTip (Shimadzu Corporation, 5010-21,701, Japan), and lyophilized by SpeedVac (Thermo Scientific, Savant SPD1010, USA). The sequence of the fusion protein was confirmed at Qingdao Sci-tech Innovation Quality Testing Co. Ltd. Activity detection of target proteins Dimyristoyl Phosphatidylcholine (DMPC) dry powder was suspended in TBS (PH = 7.4, 3.5 mg/mL) at a concentration of 1.2 mg/ml. It oscillated violently on the vortex oscillator for 3-5 min to form multilayer liposomes. The purified protein sample was diluted to 0.17 mg/ml. The 200 µl target protein samples and 50 µL of DMPC liposome were incubated in 24 ℃ water baths for 10 min. Total divided into three groups: negative control group DMPC + TBS (PH = 7.4, 3.5 mg/ml); DMPC + purified target protein in the experimental group; Positive control group DMPC plus standard substance (in this experiment, the mass ratio of Apolipoprotein to DMPC liposome was 1:2, to be exact, Apolipoprotein final concentration: 0.12 mg/ml; DMPC liposome final concentration: 0.24 mg/ml). The absorbance value at 325 nm was measured at room temperature, every 2 min, and monitored for 60 min until the absorbance value stabilized. The decrease in absorbance of three independent samples ± SD was plotted over time. Statistical analysis All data are expressed as mean ± standard deviation, the mean comparison between the two groups was performed by t test, and a two-tailed test P < 0.05 was considered statistically significant. mRNA expression in stable transgenic N. tabacum Fluorescence quantitative PCR was used to analyze the expression levels of target genes in stable transgenic N. tabacum. The reference gene NT-L25 was used for correction and standardization. The results showed the relative expression level of P478 batch N. tabacum (Fig. 4B), which was used for further detection and analysis. Protein detection in stable genetic N. tabacum SDS-PAGE was used to analyze the protein in the leaves of stable transgenic N. tabacum. 
Compared with the wild-type N. tabacum leaves, the expected band appeared at 30 kDa in transgenic P478 batch N. tabacum leaves, whereas no obvious band was found in wild-type lines. The expected band appeared at 28 kDa in the positive control, as shown in Fig. 4C. This preliminarily indicates that Apo A-I Milano was expressed in N. tabacum leaves. Transient transformation and analysis of Apo A-I Milano in N. tabacum mRNA expression in transient transgenic N. tabacum To determine whether Apo A-I Milano is transcribed in transgenic N. tabacum, the expression of Apo A-I Milano was detected by RT-PCR. As can be seen from Fig. 5, RT-PCR analysis verified that these N. tabacum plants were transgene-positive. Transient expression of Apo A-I Milano in N. tabacum Transient expression can provide fast protein expression within 3-5 days, which allows this strategy to overcome some drawbacks and challenges associated with stable expression, including inadequate protein expression and time cost. In this study, we used the vector pCHF3-Flag-Apo A-I Milano to generate a recombinant protein in N. tabacum. Total protein was extracted from leaves at 3 days post-infiltration and western blot was performed using an Apo A-I monoclonal antibody (Santa Cruz Biotechnology Inc.). The results showed that the recombinant protein was produced in N. tabacum at approximately 30 kDa (Fig. 6A). Purification and purity of the transient transformation fusion protein Flag-Apo A-I Milano The Flag-Apo A-I Milano protein was purified using protein A/G agarose. Western blot results showed that the purified protein solution had only one clear band, whose size was consistent with the expected theoretical value (30 kDa), indicating that the target protein was purified to a high degree. The protein concentration was determined by the bicinchoninic acid (BCA) method; according to the standard protein curve, the concentration of purified Flag-Apo A-I Milano was calculated to be 0.84 mg/ml, for a total of 0.4 mg (Fig. 6B). Coomassie blue staining of the recombinant Flag-Apo A-I Milano protein on SDS-PAGE is shown in Fig. 6C. (Figure 7 caption: A, SDS-PAGE of purified Flag-Apo A-I Milano protein; B, image of the SDS-PAGE gel; C, quantification of the grayscale of the purified protein in the gel image with ImageJ. Lane M, standard protein marker; lane 1, Flag-Apo A-I Milano protein purified on a 12.5% SDS-PAGE gel; lane 2, the liquid that flowed out of the extracted total leaf protein after passing through the protein A/G column, namely the flow-through; lane 3, total leaf protein.) Subcellular localization of Apo A-I Milano in N. benthamiana by confocal laser microscopy To analyze the subcellular localization of Apo A-I Milano , the pCHF3-Apo A-I Milano -GFP plasmid and an empty pCHF3-GFP control vector were injected into N. benthamiana leaves and observed by laser confocal microscopy. It was found that the control group had a strong green fluorescence signal in the cells, while pCHF3-Apo A-I Milano -GFP showed green fluorescence in the endoplasmic reticulum of N. benthamiana cells (Fig. 8), indicating that Apo A-I Milano was located in the endoplasmic reticulum.
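The purity estimate described above (the ratio of the purified Flag-Apo A-I Milano band to the total lane signal, obtained from ImageJ grayscale values) reduces to a simple ratio; the sketch below uses hypothetical band intensities to illustrate the calculation, not the study's densitometry data.

```python
# Sketch of gel-densitometry purity: target band intensity / total lane intensity.
# Band intensities are hypothetical (e.g. background-subtracted ImageJ measurements).
band_intensities = {
    "Flag-ApoA-I_Milano_30kDa": 18500.0,   # target band
    "contaminant_band_1": 1200.0,
    "contaminant_band_2": 730.0,
}
total = sum(band_intensities.values())
purity = 100.0 * band_intensities["Flag-ApoA-I_Milano_30kDa"] / total
print(f"estimated purity = {purity:.2f}%")   # the study reports 90.58% +/- 1.65
```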
tabacum, the amino acid sequences of the Apo A-I Milano and Flag fusion protein were determined and analyzed. The peptides were re-dissolved in solvent A (0.1% formic acid in water) and analyzed on an Orbitrap Fusion coupled to an EASY-nanoLC 1200 system (Thermo Fisher Scientific, MA, USA). Tandem mass spectra were processed with PEAKS Studio version 10.6 (Bioinformatics Solutions Inc., Waterloo, Canada). The amino acid sequence of the Apo A-I Milano present in the fusion protein is shown in Fig. 9 and yielded 86% coverage; it was also found that cysteine replaces arginine at position 173 (marked with a red box), which indicates that Apo A-I Milano, a mutant of Apo A-I, is accurately expressed in N. tabacum. Activity detection of the Flag-Apo A-I Milano protein The activity of Flag-Apo A-I Milano was determined by the dimyristoyl phosphatidylcholine (DMPC) turbidimetric clarification assay, which measures the ability of the Flag-Apo A-I Milano protein to combine with lipids. When Flag-Apo A-I Milano combined with DMPC, the turbidity of the reaction system decreased; the faster the turbidity decreased, the better Flag-Apo A-I Milano could bind lipids. As can be seen from Fig. 10, the absorbance value of the blank control group decreased only slightly, the absorbance value of the standard decreased fastest, and the absorbance value of the purified target protein decreased at a rate similar to that of the standard. Results from this assay showed a similar trend in lipid-binding activity for the Flag-Apo A-I Milano sample derived from N. tabacum and the human Apo A-I protein control. Discussion Pharmaceutical and clinical studies have indicated the potential functions of Apo A-I Milano in reducing atherosclerosis (Ibanez et al. 2008; Nissen et al. 2003; Parolini et al. 2008; L. Wang et al. 2006), preventing restenosis after coronary stenting (Kaul et al. 2003; Speidl et al. 2010), reducing myocardial ischemia, and easing features of Alzheimer's disease (Fernandez-de Retana et al. 2017). Clinical application of Apo A-I Milano in the future will require a large amount of high-quality and cost-effective Apo A-I Milano; thus, various manufacturing systems have been developed. The basic requirement of such a system is the ability to express the bioactive target protein. Other desirable qualities of a production system include easy and inexpensive maintenance, cost efficiency, high productivity, and suitability for large-scale manufacturing. In general, bioreactors can be classified into microorganism, plant-based, and animal-based bioreactors. The advantage of animal-based bioreactors is their ability to produce bioactive therapeutic proteins with high human compatibility; however, the cost of constructing and maintaining an animal or animal cell bioreactor system is quite high (Y. Wang et al. 2013). Microorganism production systems are easy to operate and scale up, but differences in translational modification may lead to the expression of non-soluble and/or non-functional proteins (Swartz 2001). Fig. 9 Tandem mass spectrometric coverage of Apo A-I Milano. The representation of the Apo A-I Milano amino acid sequence from three sequencing runs is shown in Fig. 9; the amino acids covered by each blue line are the identified amino acids, and multiple blue lines indicate how many times they were identified. The amino acids of the Apo A-I Milano present in the fusion protein used in this study showed 86% coverage. Fig.
10 Kinetics of the interaction of Flag-Apo A-I Milano with DMPC. The changes in turbidity were monitored by the change in absorbance at 325 nm at 2-min intervals for the initial 60 min and plotted as a function of time. The DMPC turbidity clearance assay was used to measure the ability of the Flag-Apo A-I Milano protein to combine with lipids. The green curve represents the absorbance value of the blank control group, the blue curve represents the absorbance value of the standard, and the red curve represents the absorbance value of the purified target protein. Plant-based bioreactors, such as transgenic plants and plant tissue culture systems, offer cheap and easily scalable production of materials (Xu et al. 2016). Plant bioreactors can be divided into stable expression systems and transient expression systems according to whether the exogenous genes are stably inherited by the offspring. Stable expression systems are the key technology for obtaining stably inherited transgenic plants. Compared with stable expression, transient expression does not require integration of foreign genes into the genome, has the advantages of a short expression cycle and high expression level (Nosaki et al. 2021), and is an important means for later functional analysis. N. tabacum is a model plant with advantages such as easy planting, a short growth period, high yield per plant, easy transfer of exogenous genes, and a mature genetic transformation system, so it can reduce production costs and makes large-scale production of exogenous proteins possible. Therefore, we constructed three expression systems in N. tabacum simultaneously to confirm the expression of Apo A-I Milano at the mRNA and protein levels, as well as to determine its amino acid sequence. The purpose of the transient expression system was to quickly determine the expression characteristics and the protein structure and function of Apo A-I Milano; the purpose of constructing a stable expression system of Apo A-I Milano in N. tabacum was to observe the genetic stability of Apo A-I Milano expression by subculturing Apo A-I Milano-positive seeds harvested from the stable expression system. We also designed the fusion expression of Apo A-I Milano and Flag and obtained the target protein with a purity of 90.58% ± 1.65%. N. tabacum was selected under antibiotic stress, PCR and RT-qPCR were performed to examine the presence of the Apo A-I Milano gene, and the expression of Apo A-I Milano was analyzed by western blot. Humanized proteins and plant-derived proteins differ greatly in post-translational modification, especially glycosylation. Several studies have reported the glycosylation of plant chassis; the glycosylation of humanized proteins and plant-derived proteins in the endoplasmic reticulum is basically the same, but the glycosylation of proteins located in the Golgi apparatus is quite different (Schoberer et al. 2018). This study identified the organelle localization of the target protein through subcellular localization. The results showed that the Apo A-I Milano protein was located on the endoplasmic reticulum of N. benthamiana. It was speculated that the glycosylation of the Apo A-I Milano protein expressed in N. benthamiana should be consistent with that in mammalian cells, providing a basis for subsequent activity identification. The subcellular localization studies were performed with N. benthamiana rather than N.
tabacum because the very high overexpression in N. tabacum hampers subcellular localization studies; N. benthamiana was therefore used for the subcellular localization work. In the experiment, to clarify the expression of the Apo A-I Milano fusion protein in N. tabacum more precisely, the purified Apo A-I Milano fusion protein was sequenced and identified by tandem mass spectrometry. According to the preliminary analysis, the amino acid sequence of Apo A-I Milano expressed in N. tabacum was compared with that of the normal human Apo A-I sequence (274 aa, UniProt/Swiss-Prot: P02647.1), and the coverage was 86%; taken together with the other identification results above, this shows the accuracy and authenticity of the expression of Apo A-I Milano in N. tabacum. Next, we will further analyze the post-translational modifications of Apo A-I Milano expressed in N. tabacum, including O- and N-linked glycosylation, together with functional analyses and clinical trials, to further promote the possibility of its clinical application and accelerate its pace. The protein activity and function were analyzed by the DMPC turbidity clarification test, and post-translational modification and protein function tests of the exogenous protein are underway. Many scientists have recently done a great deal of work on plant bioreactors and achieved unprecedented results, especially in the expression of medicinal proteins; some biopharmaceuticals beneficial to human health, including monoclonal antibodies and vaccines, have been produced, and progress has also been made in plant seed expression systems (Nykiforuk et al. 2011). However, this is the first report of the expression of Apo A-I Milano in the model plant N. tabacum, and it is currently superior to other expression systems in terms of performance and yield. In addition, although using transgenic plants to produce medical proteins raises some concerns, such as the need to increase the amount of heterologous protein expressed and doubts about differences in glycosylation, with the huge market demand, the tireless efforts of researchers, and the step-by-step optimization and wider use of the system, producing medical proteins with plant bioreactors will become a reality. In conclusion, we presented the establishment of an N. tabacum culture system suitable for Apo A-I Milano expression. Our future work will focus on Apo A-I Milano bioactivity characterization, and the final aim will be large-scale production of bioactive Apo A-I Milano. The N. tabacum culture system appears to provide a viable, cost-efficient, and environmentally friendly platform for the production of pharmaceutically bioactive proteins.
9,016
2023-01-21T00:00:00.000
[ "Biology", "Environmental Science" ]
Ultrasonic energy attenuation characteristics in plastic deformation of 2219-O aluminum alloy In the ultrasonic-assisted metal forming process, the dislocations within the material are easier to move due to the absorption of ultrasonic energy, which can effectively promote material flow and improve the formability of components; this phenomenon is called the ultrasonic softening effect. The ultrasonic softening effect is generally treated as homogeneous at the whole materials for simplicity, while the attenuation of the ultrasonic energy along the propagation direction will bring inhomogeneous distribution of softening degree. In addition, the absorption of the ultrasonic energy by the material is also affected by the dislocation movements in the metal plastic processing procedure, resulting in the variation of the ultrasonic attenuation characteristics in the material with the plastic deformation, and the current research has little concerned about it. In this paper, the ultrasonic attenuation properties in 2219-O aluminum alloy with plastic strain were investigated. The influence of the dislocations and the dislocation movements caused by plastic deformation on the ultrasonic attenuation was characterized. The pre-strain specimen was designed to indicate the degree of plastic deformation of the material, and the specimen thickness direction was defined as the propagation direction of the ultrasonic energy. The experimental results and the microstructure observation showed that the absorption of ultrasonic energy by the material increases firstly and then decreases with the plastic strain increasing, which is related to the evolution of movable dislocations within the material. In order to accurately describe the ultrasonic energy attenuation characteristics in plastic deformation, the hardening equation of 2219-O aluminum alloy considering ultrasonic propagation distance and plastic strain was built, and the model accuracy was verified based on the experimental data. Introduction Ultrasound has the advantages of high frequency, concentrated sound energy, and strong propagation direction [1]. Some physical, chemical, and biological properties or states of the material can be altered by the action of ultrasonic energy on the material [2]. Studies have shown that applying ultrasonic vibration during metal processing can effectively reduce material yield stress and flow stress [3][4][5][6][7], reduce interfacial friction between mandrel and blanks [8,9], and improve the surface roughness of formed components [10]. At present, the research of the ultrasonic-assisted forming process mainly focuses on mechanism analysis, simulation modeling, and process exploration. Many research focused on revealing the acoustic softening mechanism, and the homogeneous softening effect in the material is supposed. Siddiq et al. [11] modified the evolution law of crystal plasticity by considering the effect of acoustic softening due to high-frequency vibration. This model exhibited good predictions for ultrasonic-assisted plastic deformation of polycrystalline aluminum. Yao et al. [12] combined the thermal activation model Arrhenius equation with the Gibbs free energy equation and proposed a unified acoustic plastic model to account for the acoustic softening phenomenon, which could accurately predict the stress-strain curves of aluminum specimens in the ultrasonic-assisted upsetting test. Wang et al. 
[13] proposed a mechanism by which the athermal dislocation dynamics may change at the microstructure level during ultrasonic-assisted deformation, and the acoustic softening effect on the Hall-Petch behavior was modeled by incorporating a power function of acoustic energy density into the dislocation ejection work. The model could accurately predict the Hall-Petch slope in ultrasonic-assisted micro-tension at lower strains. Based on acoustic theory [14], Shi et al. [15] investigated the attenuation of ultrasonic energy propagation in friction stir welding. The results showed that the energy attenuation caused by absorption in the workpiece material follows an exponential attenuation law. The mechanical properties of the material at different positions vary, resulting in an inhomogeneous distribution of ultrasonic softening [2]. In previous research on the ultrasonic-assisted spinning of ribbed components, our group compared the final rib heights between simulation and experiment; uniform material softening properties under the ultrasonic field were assumed, and the results showed that the predicted trend was correct but the values differed somewhat [16], as shown in Fig. 1. One important reason is that the attenuation of ultrasonic energy propagating in the material was not considered. Therefore, the real ultrasonic attenuation characteristics should be investigated to improve the accuracy of the simulation model. In this paper, the attenuation characteristics of ultrasonic energy in 2219-O aluminum alloy during plastic deformation were theoretically analyzed. An experimental platform was built to measure the degree of ultrasonic transfer attenuation. The relationship between ultrasonic transfer efficiency and plastic strain was investigated and explained based on microstructure observation. The hardening equation of 2219-O aluminum alloy considering ultrasonic propagation distance and plastic strain was established, and the experimental results validated its accuracy. Ultrasonic attenuation parameter definition In order to clarify the attenuation characteristics of ultrasonic energy along the propagation direction during plastic deformation, the propagation of ultrasonic energy in the workpiece needs to be explained first. According to acoustic theory [14], the attenuation of an ultrasonic wave along the propagation distance in a solid medium is exponential. The schematic diagram of the propagation is shown in Fig. 2. Based on Fig. 2, the ultrasonic attenuation can be expressed as U(x) = U_0 e^(−αx) (Eq. (1)), where U(x) represents the ultrasonic energy at propagation distance x, U_0 is the initial input of ultrasonic energy, α is the attenuation coefficient, and x is the propagation distance. The theoretical analysis in this research is based on the following assumptions: (1) ultrasonic attenuation occurs in the direction of propagation and is one-dimensional; (2) the input of ultrasonic energy is constant; (3) the proportion of ultrasonic energy consumed by the environment is ignored. In this study, the ultrasonic transfer efficiency is defined to characterize the attenuation of ultrasonic energy. According to Eq. (1), it is assumed that, under the influence of plastic deformation, the output ultrasonic energy can be expressed as U_out = U_0 e^(−αt) (Eq. (2)), where U_out is the output ultrasonic energy, U_0 is the initial input ultrasonic energy, and t is the workpiece thickness.
Because ultrasonic attenuation is composed of various attenuation forms, it is difficult to measure the ultrasonic attenuation coefficient α directly. When the workpiece thickness is fixed, e^(−αt) is constant, that is, U_out/U_0 is a fixed value. Therefore, we can directly measure the ratio of output to input energy to obtain the ultrasonic transfer efficiency. The ultrasonic energy U is related to the frequency, the amplitude, and the medium and, according to acoustic theory, is proportional to the density, the square of the vibration amplitude, and the square of the frequency (Eq. (3)), where m is the workpiece density, and A and f are the ultrasonic vibration amplitude and frequency, respectively. As shown in Eq. (3), since the ultrasound is transmitted in a fixed material, the only parameter that can change is the amplitude. Thus, the ultrasonic transfer efficiency η is introduced, defined as the ratio of the output amplitude A_out to the input amplitude A_0. Based on Eq. (2), the specific expression of the ultrasonic transfer efficiency is obtained as η = A_out/A_0 (Eq. (4)); the parameter η in Eq. (4) can be calibrated according to experiments. Experimental principle The penetration method is generally used to measure ultrasonic attenuation properties, and the scheme for measuring the ultrasonic transfer coefficient is shown in Fig. 3. The ultrasonic tool was placed on the material surface, and a piezoelectric acceleration sensor was installed at the corresponding position on the other side to collect the ultrasonic vibration signals. In the metal forming process, the strain increases continuously from the elastic stage to the plastic stage under the action of the external load. Because it is difficult to measure the ultrasonic attenuation effect with in situ methods, a fixed-strain-value method was used in this research. First, the attenuation efficiency under ultrasonic loading conditions was measured; then the strain value was increased step by step to obtain the relationship between strain and attenuation characteristics. If the interval between strain values is sufficiently small, the attenuation characteristics of the continuous deformation process can be regarded as obtained. The following experimental schemes were designed: (1) pre-strain test: uniaxial tensile tests were carried out to obtain samples with different pre-strains, which characterize the degree of plastic deformation; (2) measurement of ultrasonic transfer efficiency: different ultrasonic amplitudes were applied to the pre-strained samples, and the ultrasonic energy transfer coefficient was measured. Experimental material 2219-O aluminum alloy with 3 mm thickness was used in this study, and its chemical composition is shown in Table 1. The uniaxial tensile specimen was cut along the 0° direction from the rolled sheet according to the ASTM B211/B211M-2019 standard, with a gauge length of 50 mm, as shown in Fig. 4. The tensile stress-strain curve of 2219-O aluminum alloy was obtained on a SUNS electronic universal testing machine, as shown in Fig. 5, and the corresponding parameters are shown in Table 2. Pre-tensile specimen preparation Six groups of samples were selected for the study. The uniaxial tensile tests were carried out at a tensile rate of 1.5 mm/min. Since the ultimate tensile displacement of the sample was about 9 mm (Table 2), the samples were stretched by 2 to 7 mm, respectively, and six samples numbered 0-5 with different pre-strains were obtained, as shown in Table 3. Experiment platform The energy transfer coefficient measurement test platform is shown in Fig. 6. The selected ultrasonic vibration device is HJ20-3500, as shown in Fig.
6a. The ultrasonic frequency of the ultrasonic generator is 20 kHz, and the ultrasonic generator can change the ultrasonic amplitude by adjusting the power; the minimum ultrasonic amplitude can be set to 3 μm. The data acquisition (DAQ) system includes a piezoelectric acceleration sensor, data acquisition card, and acquisition software, as shown in Fig. 6b. The selected piezoelectric sensor is SA-IE50G, which has a measurement frequency range of 0.2-25 kHz and a sensitivity of 99.5 mV/g. Data acquisition hardware and software includes a data acquisition card, constant current adapter, and DAQ data acquisition software. The piezoelectric sensor was glued to the upper surface of the pre-strain sample to measure the vibration signal transmitted by ultrasonic vibrations along the thickness of the sample, and then the sample was fixed on the upper platform by the adhesive method. The ultrasonic tool was in contact with the lower surface of the pre-strain sample to apply ultrasonic amplitude. During the test, the ultrasonic amplitude can be adjusted to 3, 6, and 9 μm by changing the power of the ultrasonic generator. Then, the data acquisition system was used to collect the vibration signal, and the frequency spectrum of the vibration signal was further analyzed to obtain the ultrasonic vibration amplitude data. TEM observation was used to obtain the dislocation configuration at different strains under the ultrasonic amplitudes; the samples were prepared from the tested specimen, and the selected observation position was the cross section of the position marked by the box line in Fig. 7. The cross section was thinned to 40 µm with a precision ion thinning instrument (PIPS 695). Ultrasonic energy attenuation characteristics The pre-strain results obtained in the uniaxial tensile test are shown in Table 3. Sample #0 is the initial stage of plastic deformation. In order to explore the relationship between ultrasonic attenuation and plastic deformation, it is assumed that ultrasonic energy is not lose at all in sample #0, which means that U out = U in . The results of partial data of the ultrasonic energy signal measured are shown in Fig. 8a. The MATLAB spectrum analysis program was written to conduct spectrum analysis on the data within a fixed time. The obtained amplitude is shown in Fig. 8b, and its corresponding frequency is around 20 kHz, which proves that the frequency of ultrasonic keeps constant in the propagation process. The input ultrasonic amplitudes were 3 μm, 6 μm, and 9 μm, and the corresponding output amplitudes were measured respectively. The results of ultrasonic transfer efficiency changing with pre-strain are shown in Table 4. Figure 9 shows the variation law of ultrasonic transfer efficiency with pre-strain and input ultrasonic amplitude. It can be seen that the values of ultrasonic transfer efficiency do not change significantly with the change of the input ultrasonic amplitude. But the ultrasonic transfer efficiency decreases with the increase of pre-strain, but reaches the minimum value at a certain pre-strain, and then continues to increase with the increase of pre-strain. In the plastic deformation stage, the dislocation atoms are easily activated, due to the input of ultrasonic energy, which makes the deformation of the material easier. Figure 10 shows the tensile stress-strain curve of 2219-O aluminum alloy with and without ultrasonic vibration. The shaded envelope area represents the total reduction in stress under different ultrasonic amplitude conditions. 
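The spectrum analysis step described above (a MATLAB program extracting the 20 kHz amplitude from the collected vibration signal) can be sketched as follows; this is not the authors' code, and the sampling rate, duration, noise level, and amplitudes are assumed values used only to show how the transfer efficiency, as the ratio of output to input amplitude, could be computed.

```python
# Illustrative sketch only (the authors used a MATLAB program): extract the 20 kHz
# vibration amplitude from an acceleration signal by FFT and form the transfer
# efficiency as the ratio of output to input amplitude. Sampling rate, duration,
# and amplitudes below are assumed values, not measured data.
import numpy as np

fs = 200_000                      # sampling rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)    # 50 ms record (assumed)
f_us = 20_000                     # ultrasonic frequency in Hz

def peak_amplitude(signal: np.ndarray, fs: float, f_target: float) -> float:
    """Return the single-sided spectral amplitude near f_target."""
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    window = (freqs > f_target - 500) & (freqs < f_target + 500)
    return spectrum[window].max()

input_signal = 3.0 * np.sin(2 * np.pi * f_us * t)                               # 3 µm input (assumed)
output_signal = 2.4 * np.sin(2 * np.pi * f_us * t) + 0.05 * np.random.randn(t.size)  # attenuated output + noise

a_in = peak_amplitude(input_signal, fs, f_us)
a_out = peak_amplitude(output_signal, fs, f_us)
print(f"Transfer efficiency ≈ {a_out / a_in:.2f}")   # ratio of output to input amplitude
```

In practice the input amplitude would come from the generator setting (3, 6, or 9 μm) and the output amplitude from the piezoelectric sensor signal on the opposite face of the pre-strained sample.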
(a) Transfer coefficient measurement system (b) Data acquisition system It can be seen from Fig. 10 that stress level reduction varies with strain increases, which means that there is a variation in the absorption of ultrasonic energy by the material as the plastic deformation progresses. And the variation trend of stress reduction with the increase of strain due to ultrasonic energy can be obtained, as shown in Fig. 11. The variation trend of stress reduction remains the same under different ultrasonic amplitude conditions, but with the increase of ultrasonic amplitude, the degree of stress reduction increases. Combining the stress reduction trend in Fig. 11 with the ultrasonic transfer efficiency in Fig. 9, we can find that when the ultrasonic energy transfer efficiency decreases, the corresponding ultrasonic energy absorption degree increases, which further proves that in the plastic deformation process, the absorption of ultrasonic energy by the material is not monotonically increasing with the increase of strain. Microstructure analysis In order to explore the non-monotonic phenomenon, dislocation density and its configuration of samples with different pre-strains were observed. The internal dislocation configuration of 2219-O aluminum alloy under different prestrain conditions is shown in Fig. 12, in which the short rod structure with orthogonal sparse distribution is the second phase CuAl 2 [17]. It can be seen from Fig. 12 that most dislocations present the irregular linear distribution. Relevant studies show that [18] the proportion of ultrasonic scattering attenuation caused by the precipitated phase and grain boundary is very small, which is mainly caused by internal friction of dislocation damping. Therefore, our study mainly analyzes the reason of internal friction caused by dislocation configuration. Generally, as the pre-strain increases, the dislocation density and its configuration also change. Sample #0 in Fig. 12 is the original dislocation configuration with linear distribution, and it can be seen that there are a small number of dislocations in the material. In samples #2 and #3, the dislocation density increases gradually because of metal work hardening. It is worth noting that the dislocation distribution of sample #1 is similar to that of sample #3, and the ultrasonic transfer efficiency of sample #1 is close to that of sample #3. While the dislocation density of sample #2 is lower than that of sample #3, the ultrasonic transfer efficiency of sample #2 is lower than that of sample #3, which further indicates that the ultrasonic transfer coefficient is correlated with dislocation density. As the deformation continues to increase, dislocation walls at the bottom right of the TEM image of sample #4 will form, when the dislocation multiplication and entanglement persist to a certain extent. During the formation process of the dislocation walls, the dislocation distribution is obviously different, which is manifested that the dislocation density at the dislocation walls is larger, while the dislocation density nearby is relatively small. In addition, with the further increase of pre-strain, there will be obvious dislocation cell formation at the top right of the TEM image of sample #5. The dislocation density distribution is lower inside the dislocation cell and higher in the wall of the dislocation cell. 
Related studies have shown that mobile dislocations slip under strain while interacting with dislocation walls and eventually transform into immobile dislocations and dislocation walls [19]. In addition, the vibration of movable dislocations about their positions is the main factor causing energy consumption [20]. Therefore, combining the experimental results with the microscopic observations, it can be seen that, as deformation proceeds, the increase of the above three kinds of dislocations leads to a significant increase in the overall dislocation density of the material. The increase in movable dislocation density leads to an increase in ultrasonic energy loss. When structures such as dislocation walls and dislocation cells form due to the multiplication and entanglement of dislocations, the movable dislocation density decreases gradually and the ultrasonic energy loss decreases accordingly, which means that the degree of ultrasonic attenuation decreases. Therefore, taking the small strain range corresponding to sample #4 as the lower limit of ultrasonic transfer efficiency, the transfer efficiency gradually increases after the formation of dislocation walls, dislocation cells, and other structures. The experimental results show that the ultrasonic energy transfer efficiency varies with plastic deformation, and the analysis of the microscopic observations reveals the internal cause of this variation law. In order to describe the ultrasonic energy attenuation characteristics in plastic deformation accurately, a corresponding model needs to be established. Establishment of hardening equation According to the analysis of the material microstructure, after the formation of dislocation cells it can be expected that, with a continuous increase of deformation, the ultrasonic energy transfer efficiency will continue to improve until it approaches 1. Based on the simulated deformation strain results in existing studies, the maximum plastic strain at the position of the inner ribs is about 0.1 [16]. Therefore, it was assumed that the transfer coefficient reaches 1 when the true strain is 0.1 and remains constant if the strain continues to increase. Based on the above analysis, in order to carry out a complete parameter fitting, the range of pre-strain was expanded: not only were the pre-strain data obtained in the experiment used, but an additional data point was set, namely that the ultrasonic transfer efficiency is 1 when the pre-strain is 0.15. To describe the law of the ultrasonic energy transfer coefficient changing with strain more accurately, the Sigmoid function was used to describe the change of ultrasonic energy transfer efficiency over the small strain range of 0-0.2. The Sigmoid function is defined as S(x) = 1/(1 + e^(−x)) (Eq. (5)). This function is monotonic and continuous, and it is easy to divide the predicted values into two parts with it: the center point can be defined as the limit value, and the corresponding output y can be divided into two parts on the left and right sides of this limit value. Therefore, on the basis of the experimental results, the Sigmoid function is suitable for describing the variation law of ultrasonic transfer efficiency, which first decreases and then increases with increasing pre-strain.
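Because the exact parameterization of Eq. (6) is not reproduced in the text above, the following sketch only shows how a sigmoid of an assumed logistic form could be fitted to transfer-efficiency data with SciPy; the data points, the fixed steepness, and the initial guesses are placeholders, not values from the study.

```python
# Hedged sketch: fitting an assumed logistic (Sigmoid-type) curve to transfer-efficiency
# data. The functional form, the fixed steepness of 100, and the data are illustrative
# assumptions, not the paper's Eq. (6) or its measurements.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_eta(eps, A, B, eps0):
    """Assumed sigmoid form for transfer efficiency vs pre-strain."""
    return A + B / (1.0 + np.exp(-(eps - eps0) * 100.0))

# Placeholder pre-strain / efficiency points for the increasing branch
eps = np.array([0.05, 0.07, 0.09, 0.11, 0.15])
eta = np.array([0.78, 0.84, 0.92, 0.97, 1.00])

popt, _ = curve_fit(sigmoid_eta, eps, eta, p0=[0.75, 0.25, 0.08])
A_fit, B_fit, eps0_fit = popt
print(f"A={A_fit:.3f}, B={B_fit:.3f}, eps0={eps0_fit:.3f}")
```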
According to the Sigmoid function, the ultrasonic energy transfer efficiency can be defined as a Sigmoid-type function of the pre-strain (Eq. (6)), where A and B are fitting coefficients and ε_0 is the pre-strain value at the minimum transfer efficiency. The data point with an ultrasonic energy transfer efficiency of 1 at a strain of 0.15 was added to the original experimental data, the transfer efficiency results for different input ultrasonic amplitudes were fitted against strain according to Eq. (6), and the fitting results are shown in Fig. 13. The R-squared goodness of fit in the first half reaches 0.962, 0.730, and 0.928, respectively, and the overall fitting result is good. The fitted parameters are shown in Table 5. The ultrasonic transfer efficiency is considered to be related only to strain, since the measured transfer efficiency does not change significantly with the input amplitude. The values of the fitting parameters A and B can therefore be obtained by averaging the fitting results. In the first, monotonically decreasing part, the fitting parameters A and B are denoted A_1 and B_1, while in the second, monotonically increasing part, they are denoted A_2 and B_2. In previous work, based on the study of the ultrasonic-assisted uniaxial tensile process of 2219-O aluminum alloy, we established the hardening equation of 2219-O aluminum alloy with the ultrasonic softening effect [16]; in that equation, σ is the stress without ultrasonic vibration, and a softening term g (with exponent 1 − m) represents the stress reduction caused by ultrasonic vibration, where m is 0.77 and A is the ultrasonic amplitude. Because ultrasonic energy attenuation occurs, the ultrasonic amplitude varies at different positions along the propagation path, and the change of ultrasonic amplitude is related to the propagation distance, the strain, and the input amplitude. Based on Eqs. (4) and (6), the expression of the ultrasonic amplitude as a function of propagation distance and strain can be obtained (Eq. (10)), where f_x0(ε) is the transfer efficiency as a function of strain obtained from the experiment for a material thickness x_0, A_0 is the input ultrasonic amplitude, and x is the propagation distance. Based on the above derivation and analysis, the hardening equation of 2219-O aluminum alloy considering the ultrasonic attenuation characteristics is given in Eq. (11); x_0 is 3 mm in this study, and the specific expression of f_x0(ε) is given in Eq. (6). The specific parameters were fitted from the experimental attenuation results, and the established hardening equation can provide a theoretical model for the simulation of ultrasonic-assisted plastic forming considering ultrasonic attenuation. Verification of the hardening equation In order to verify the correctness of the corrected equation, the stress-strain data obtained in the uniaxial tensile tests were substituted into the existing hardening equation considering the ultrasonic softening effect and into the modified hardening equation of Eq. (11); in Eq. (11), the ultrasonic propagation distance was set to 0.1 mm to ensure that the transfer distance was basically the same as that in Eq. (2). On this basis, using the tensile test data at an ultrasonic amplitude of 3 μm, the stress-strain curve under the action of ultrasound was obtained by calculation, as shown in Fig. 14. Figure 14 shows the comparison between the experimental results and the calculation results of the hardening equations. It can be seen that the calculation results of Eqs.
(2) and (11) are in good agreement with the experimental results, which proves the accuracy of the established modified hardening equation. The variation trend of stress in Fig. 14 corresponds to the variation law of ultrasonic transfer efficiency in Fig. 9. When the true strain is about 0.05, the ultrasonic transfer efficiency is the lowest, which means that the ultrasonic energy absorption effect is more obvious at this point, corresponding to further ultrasonic softening, so the stress will first decrease and then increase. And the stress-strain curve after the strain is 0.05 is more consistent with the experimental data, which indicates that the hardening equation considering ultrasonic attenuation is more in line with the actual ultrasonic-assisted plastic forming process. By changing the x in Eq. (10), the stress-strain curves under different ultrasonic propagation distances can be calculated, as shown in Fig. 15. With the increase of ultrasonic propagation distance, the tensile stress gradually increases, which indicates that the ultrasonic softening effect is gradually weakening in this process. And this change is consistent with the attenuation form of ultrasonic energy in plastic deformation described in Eq. (6). Besides, under the condition of different ultrasonic propagation distances, the ultrasonic energy efficiency also shows a trend of first decreasing and then increasing with the increase of strain, and with the increase of ultrasonic propagation distance, this variation trend is more obvious. Therefore, the hardening equation established in this paper can also characterize the acoustoplastic properties of materials under different ultrasonic propagation distances. Conclusion In this paper, the influence of the dislocation within the materials and the dislocation movements caused by plastic deformation on the ultrasonic attenuation characteristics was studied. The experimental platform for measuring ultrasonic transfer efficiency was established, and the variation law of ultrasonic transfer efficiency related to plastic strain was obtained, which was further explained based on microscopic analysis. Finally, the hardening equation of 2219-O aluminum alloy considering ultrasonic propagation distance and plastic strain was established, and the model accuracy was verified based on the experimental data. The specific conclusions are as follows: 1. The degree of ultrasonic attenuation is affected by the thickness of the material and also related to the plastic strain of the material. For 2219-O aluminum alloy, the ultrasonic transfer efficiency decreases first and then increases with the increase of pre-strain. When the prestrain reaches 0.05, the ultrasonic transfer efficiency is the lowest. 2. In the plastic deformation stage, the absorption of ultrasonic energy by movable dislocation density is the reason for the variation of ultrasonic transfer efficiency. The mobile dislocation density will gradually increase with increasing strain, but due to dislocation entanglement, movable dislocations will gradually transform into dislocation cells and dislocation walls, resulting in nonmonotonic changes in ultrasonic transfer efficiency. 3. The hardening equation of 2219-O aluminum alloy considering ultrasonic propagation distance and plastic strain can provide more accurately express to the acoustoelastic properties of materials under different ultrasonic propagation distances. 
This hardening equation can be applied to the ultrasonic-assisted spinning process, and the influence of the ultrasonic attenuation characteristics on the forming height of the rib can be explored, so as to obtain a more accurate simulation result. Data availability Not applicable. Code availability Not applicable. Declarations Ethical approval I certify that the paper follows the guidelines stated in the journal's "Ethical Responsibilities of Authors." Consent to participate Not applicable. Consent for publication Yes. Competing interests The authors declare no competing interests.
5,943
2022-12-23T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
About a Method of Calculation of Importance Degree of Geometrical Characteristics to Identify a Human Face on the Basis of Photo Portraits The paper offers a new algorithm to find the coefficients which determine the importance degree of the values of geometrical characteristics used to identify a human face on the basis of photo portraits. The significance of the coefficients determining the importance degree of the values of geometrical characteristics for identification is explained. Determining these coefficients leads to a reduction in the number of values of insignificant geometrical characteristics, as well as to an improvement in identification quality and a decrease in the time spent on identification. Introduction Modern information and communication technologies (ICT) enable the development of various areas of great importance, including biometric technology. The expansion of the fields of application of these technologies plays an important role in preventing a number of dangerous incidents. It is obvious that the prevention of dangerous manifestations, such as international terrorism, transnational organized crime, and illegal weapon and drug transportation, is one of the main duties of each state. One of the means of detecting and neutralizing such hazardous manifestations is the use of biometric identification technologies. Biometric technologies particularly strengthen reliable passport and visa control and the control of other identification documents. Information on the dynamics of the biometric technology market given by the well-known International Biometric Group gives grounds for this view. Biometric technology is built on biometrics, taking into account the unique characteristics of each individual person [1,8,9]. People differ significantly from each other in the sizes and arrangement of such facial elements as the eyes, eyebrows, nose, ears, mouth, etc. Therefore, the first approach to solving the problem of automatic face identification from a photo portrait was based on the selection and comparison of certain anthropometric facial features. This method has been used in criminalistics for years, and it was especially effective when no photograph of a person was available except the one in a passport [2]. Paper [3] is devoted to the recognition of a human face on the basis of a photo portrait. For face recognition based on a photo portrait, the authors selected 19 anthropometric facial points. These points were chosen for their resistance to slight changes (caused by angle, lighting, facial expression, cosmetics, age, and so on). An algorithm was developed for calculating the values of the distances between these points and the geometrical characteristics of the human face. It is shown that the developed algorithm differs from existing ones in that, when comparing with the other photos stored in the database, it works even in the absence of any information about the person other than the image in the photo [10,11]. The development principles of the "Recognition" biometric identification system (RBIS) are explained on the basis of the algorithm for identifying a human from photo portraits given in paper [4], and a database with a suitably developed structure is organized for it.
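To make the idea of point-based geometrical characteristics concrete, the sketch below computes pairwise distances between a few facial landmarks and normalises them by a reference length; the landmark names and coordinates are hypothetical placeholders, and this is not the 19-point scheme of paper [3].

```python
# Illustrative sketch (not the authors' implementation): pairwise distances between
# anthropometric points turned into scale-normalised geometrical characteristics.
# The three labelled points and their pixel coordinates are hypothetical placeholders.
import math

points = {
    "left_eye_outer": (112.0, 140.0),
    "right_eye_outer": (208.0, 142.0),
    "nose_tip": (160.0, 205.0),
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Inter-ocular distance used as a reference length so that characteristics
# become ratios that are insensitive to image scale.
reference = dist(points["left_eye_outer"], points["right_eye_outer"])
characteristics = {
    f"{p}-{q}": dist(points[p], points[q]) / reference
    for p in points for q in points if p < q
}
print(characteristics)
```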
Images of n persons in various sizes, together with individual data for each (first name, middle name, last name, date of birth, eye colour, height, etc.), were included in the database. The paper also describes an algorithm for the default addition of the values of the geometrical characteristics and for the search and identification of an image of a human face in the database on the basis of a photo portrait [12,13]. Paper [5] provides information about the algorithms developed to test the normal distribution of the values of the geometrical characteristics used in the recognition of a human face on the basis of a photo portrait and to define the confidence interval of the geometrical characteristics. It is shown that establishing the normal distribution of the geometrical characteristics is of great importance for various reasons. A sample of m values is randomly taken from the values of the geometrical characteristics and its normal distribution is investigated [6,14,13,15]. Paper [5] also provides information about the algorithms developed to define the confidence interval of the values of the geometrical characteristics in the recognition of a human face on the basis of a photo portrait. On the basis of the conducted research, the real interval values of the distances between the anthropometric points of a human face were established, and, with the help of fuzzy calculation, the interval values of the geometrical characteristics corresponding to the same points were found [6,7,15]. Finding the coefficients which determine the importance degree of the values of the geometrical characteristics used to identify a human face on the basis of photo portraits is important for the recognition process from several points of view. Determining these coefficients leads to a reduction in the number of values of insignificant geometrical characteristics, as well as to an improvement in identification quality and a decrease in the time spent on identification. Problem Statement A new algorithm is proposed in this paper to find the coefficients which determine the importance degree of the values of the geometrical characteristics. Let us explain the essence of the algorithm. The n values of the geometrical characteristics used for identification are divided into m clusters according to the same feature. To determine the importance degree of the values of the geometrical characteristics, an identification process is carried out in which each value of the geometrical characteristics of each person is temporarily replaced with other values taken from the replacement interval, and the impact of the replacement on the recognition process is assessed. Importance Degree Algorithm Note that, in [5], the photo portrait of a person is compared with the other photo portraits in the database by calculating the distance between two points in 16-dimensional space using formula (5). In this paper, formula (5) is replaced with formula (1); the aim of the replacement is to accelerate the identification process and to reduce the time spent on identification. By including the coefficient which determines the importance degree of the values of the geometrical characteristics in formula (1), we can increase the effectiveness of the recognition and disregard insignificant geometrical characteristics. When there are very many records in the database, this replacement is very important.
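Since formulas (1)-(5) themselves are not reproduced in this text, the following sketch only illustrates the general idea of a distance between characteristic vectors weighted by importance coefficients; the weighted Euclidean form and all numbers are assumptions, not the paper's actual formulas.

```python
# Hedged sketch: a weighted Euclidean distance between two photo portraits described by
# m geometrical characteristics, with weights standing in for importance-degree
# coefficients. The form and values are illustrative assumptions, not formulas (1)-(5).
import math

def weighted_distance(x, y, w):
    """Distance between characteristic vectors x and y with importance weights w."""
    assert len(x) == len(y) == len(w)
    return math.sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))

query   = [0.92, 1.31, 0.48, 0.77]      # characteristics of the wanted portrait (placeholder)
stored  = [0.95, 1.28, 0.55, 0.76]      # characteristics of a database record (placeholder)
weights = [1.0, 0.8, 0.1, 0.6]          # low weight -> characteristic treated as insignificant

print(f"weighted distance = {weighted_distance(query, stored, weights):.4f}")
```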
Including the coefficient (4) in formula (1), the following distance formula is obtained. Experimental Test As mentioned above, a large number of experiments have been carried out with RBIS on the basis of the above-mentioned algorithm in order to calculate the coefficients which determine the importance degree of the values of the geometrical characteristics used to identify a human face on the basis of photo portraits. The values corresponding to the given formulas have been calculated for the other persons in the same way. The software system is capable of detecting the most similar faces by comparing any photo portrait uploaded to the system with the other portraits existing in the database. Note that the rumours regarding the identity of the subject of the mysterious "Mona Lisa" by the prominent Italian artist Leonardo da Vinci have still not died down; the disputes over who is depicted in the portrait have been going on for more than 500 years. The portraits of Leonardo da Vinci (figure 5) and the Mona Lisa (figure 6), painted in different years, were included in the system database by the authors as an experiment. Two versions of the identification process were carried out with the system. In the first version, the portrait of the Mona Lisa was submitted to the system for identification and compared with the other portraits existing in the database; the most similar portraits were the Mona Lisa itself (100%) and the portrait of Leonardo da Vinci (99.5%). In the second version, the portrait of the artist was submitted to the system for identification; in this case, the most similar portraits were the portrait of Leonardo da Vinci himself (100%) and then the Mona Lisa (99.5%). Conclusions A new algorithm has been proposed to find the coefficients which determine the importance degree of the values of the geometrical characteristics used to identify a human face on the basis of photo portraits: 1. A formula is given to calculate the distances between a wanted photo portrait possessing m geometrical characteristics and the points of the photo portraits in the database; 2. A formula is given to calculate a step within the appropriate intervals of each cluster in order to determine the importance degree of the values of the geometrical characteristics of each photo portrait for identification. The given algorithm leads to a reduction in the number of values of geometrical characteristics used for identification, as well as to an improvement in identification quality and a decrease in the time spent on identification.
2,037
2012-08-09T00:00:00.000
[ "Computer Science" ]
GutMicrobiota and Host Nuclear Receptors Signalling Systemic homeostasis in animals is maintained by a network of complex signalling pathways involving several kinds of endogenous molecules/metabolites. Over the years, the role of microbiota present in the digestive tract in animal physiology has been under focus and path-breaking findings have been reported. It seems that the gut microbiota has an influence in perhaps almost all the physiological functions, including the central nervous system in animals. The means by which the microbiota impinges control on the host system biology is manifold and complex. However, one of the mechanisms involve microbiota-derived metabolites that functions as ligands to modulate host tissue gene expression via the nuclear receptors (NRs), which is a novel way of exerting control over the host physiology. Few of the host NRs, such as the pregnane X receptor (PXR), farnesoid X receptor (FXR) and peroxisomeproliferator activated receptors (PPARs) gene transcriptional activities have been demonstrated to be modulated by the binding of microbial-secreted metabolites acting as ligands. Such interactions control vital functions in the host such as intestinal epithelial barrier protection, immune tolerance and anti-inflammatory responses. In this article, recent important findings in understanding gut microbiota-derived metabolites and select host NRs signalling will be briefly reviewed. Introduction Animals, including humans require the integrative functions of various physiological systems in the body in order to provide optimal conditions for growth and development and ultimately survival.Physiological systems consist of the various organs and tissues working in tandem to achieve desired results.However, the enormous microbiota mass (∼1 kg) found in the mucosal surface of the human colon is also seen as another active 'endocrine organ' of the body [1].The gut microbiota is an amalgamation of diverse species of microbes including, bacteria, fungi, archaea and viruses, with the bacterial population (100 × 10 12 ) dominating the microbial family [2,3].The gut bacteria exhibits huge phylogenetic diversity and dynamic composition amongst individuals and is influenced by age and various environmental conditions [4].The relationship between the host and microbiota exhibit mutualism which ultimately contributes to dual survival.The microbiota influences several functions in the host including gastrointestinal physiology and shapes both innate and adaptive immunity [4].Pathways and mechanisms whereby such functions are regulated, however, are less understood.Findings over the last few years revealed that the gut bacterial counterpart secretes metabolites that have modulatory effect and is one of the novel and effective means of controlling various host physiology [5,6].A multitude of substrates are derived from the diet as well as from host metabolism which the gut microbiota processes and modifies to generate the active metabolites.In turn some of these metabolites have specific effect on the host molecular events through various mechanisms, including ones mediated by members of the nuclear receptors (NRs) superfamily of transcription factors [7].Microbial metabolites serve as robust signals in the proper functioning and maintenance of the intestinal epithelial cells (IECs) integrity, including the immunocytes leading to immune tolerance and anti-inflammatory responses amongst various other benefits [7].However, most of the gut microbial metabolites are unidentified 
and even majority of the known ones are not fully characterized yet.It is estimated that roughly 10% of gut microbial-derived metabolites are present in the mammalian blood circulation [5][6][7].Metabolites secreted by the gut microbiota can originate directly from the breakdown of dietary components, from microbial de novo biosynthesis and also from host-derived molecules that are transformed by the microbiota. Here an attempt has been made to review the new findings with respect to microbiota-secreted metabolites and its modulation of the host NRs signalling pathways. Microbial Metabolites and Farnesoid X Receptor (FXR) One of the ways by which the secreted microbial metabolites exert their effect is by interacting with the target host nuclear receptors (NRs) that lead to regulated gene expression and consequent biologic effects [7].NRs represent a superfamily of eukaryotic ligand-activated transcription factors with widespread role in biological processes [8].Many of the secreted microbial metabolites acting as ligands diffuse inside the host intestinal cells and interact with the specific NRs.This interactive system provides a direct influence of the microbiota on the host physiology which can have control on the health and disease status of individuals (Figure 1).Moreover, it is now well-known that changes in the microbiota composition and diversity impact the host physiology and is often associated with the onset and progression of diseases such as obesity, cancer, atherosclerosis and inflammatory bowel diseases (IBDs) [9]. Bile acids (BAs) made by the liver are stored in the gall bladder in animals and are meant to act as detergent that aid in the digestion and absorption of lipids in the small intestine.However, a fraction of the secreted BAs undergo biotransformation (from primary BAs to secondary BAs, such as cholic acid into deoxycholic acid) by the resident gut bacteria which then act as ligands for FXR (NR1H4), also known as bile acid receptor and regulate its gene transcriptional activity [10].BAs-activated FXR regulate general metabolism in the host such as lipid, glucose metabolism and hepatic autophagy, including communication between the gut microbial communities [11].An interesting study by Li et al showed that an antioxidant tempol remodels the gut microbiota by specifically decreasing Lactobacillus population and reduces obesity in mice with concomitant increase in the level of intestinal BA called tauro-β-muricholic acid (T-β-MCA), a FXR antagonist [12].The finding also showed that tempol did not reduce obesity in FXR null mice indicating that T-β-MCA-induced antagonism of intestinal FXR activity mediates the anti-obesity effect.A related study in mice subjected to high-fat diet (HFD)induced non-alcoholic fatty liver disease (NAFLD) showed that upon treatment with tempol and consequent increase in intestinal Tβ-MCA level and FXR inhibition led to reduced hepatic triglyceride accumulation due to lesser circulating ceramide level [13].Repression of ceramide expression genes as a result of FXR inhibition led to reduced hepatic lipogenesis.Moreover, FXR null mice on HFD showed reduced hepatic triglyceride content and that administration of C16:0 ceramide to tempol-treated HFD mice stimulated NAFLD [13].These finding, though preliminary, demonstrate a link between gut microbiota-derived BAs and down regulation of FXR transcriptional activity in the control of obesity and NAFLD.In the future, it might be possible to alter the gut microbiota composition by either drug or 
diet targeting in order to modulate BAs composition that may act as agonists or antagonists of FXR that could be helpful in human health and disease. The expression profile of genes involved in host bile acid synthesis, conjugation, and reabsorption is altered by the resident gut microbiota.Regulation of bile acid synthesis and homeostasis is known to be controlled by FXR [14,15].Investigation by Sinal et al has shown that the expression of FXR is upregulated in the ileum by a gut microbiota along with its target gene Shp and Fgf15 in normal mice compared to germ free mice [14].Moreover, the upregulation of FXR and its target was only observed in the ileum and not in the liver indicating a role played by the gut bacteria in regulation of FXR expression in the ileum.In addition, microbial diversity in the gut controls bile acids level in the small and large intestine.In fact BAs composition in different organs, including the blood circulation is markedly different in control and germ-free animals [15].In an interesting study by Parseus et al, it was observed that in mice, the gut microbiota stimulated HFD-induced obesity via the FXR-mediated action [16].Fxr −/− mice fed on a HFD for 10 weeks failed to develop obesity in contrast to conventionally-raised (CONV-R) wild-type mice which gained significantly more weight than germ-free (GF) wild-type mice.Interestingly, the secondary BA profiles and the faecal microbiota composition were altered between Fxr −/− and CONV-R wild-type mice.Thus it seems plausible that the gut microbiota promotes dietinduced obesity via BAs activation of FXR, and that FXR may contribute to increased adiposity by modulating the composition of the gut microbiota.An obvious question arises with regard to the proportion of secondary BAs availability in the gut with respect to the primary BAs.Since secondary BAs are exclusively generated by the microbiota, however, some of the primary BAs also function as FXR agonists.Moreover, it can only be hypothesized as to what could happen during significant alterations in the level either of the two types of bile acids in the gut with respect to FXR functions in the host. 
Microbial Metabolites and Aryl Hydrocarbon Receptor (AHR)

Dietary tryptophan metabolism by the resident gut microbiota, particularly Lactobacillus spp., yields indole, an aromatic bicyclic molecule, and its metabolites, which are potent agonists of the host aryl hydrocarbon receptor (AHR). AHR is a cytoplasmic, ligand-activated receptor expressed in many cell types, including intestinal epithelial cells (IECs) and immunocytes, and has crucial roles in the maintenance of intestinal mucosal homeostasis and the immune response [17]. An integrated approach combining in vitro ligand binding, qPCR, protein-DNA interaction and ligand structure-activity analysis of the ligand-binding domain (LBD) of AHR confirmed that indole and 3-methylindole are selective human-AHR agonists [17]. However, these metabolites failed to significantly activate mouse AHR, probably owing to a bimolecular (2:1) stoichiometry between indole and the LBD of human AHR. Moreover, cell line studies showed activation of several AHR target genes, such as Cyp1a1 and Cyp1b1, upon binding of indole to human AHR [17,18]. In an interesting report by Rothhammer et al., it was demonstrated that interferon type 1 (IFN-1), together with the tryptophan metabolites indole, indoxyl-3-sulfate, indole-3-propionic acid (IPA) and indole-3-aldehyde, activates the AHR in astrocytes to suppress inflammation of the central nervous system in mice [19]. Interestingly, absence or deficiency of AHR and its microbial-derived ligands alters gut microbiota composition and the turnover of IECs [20]. In fact, microbiota-derived indole and its derivatives, through binding to AHR, stimulate specific innate lymphoid cell (ILC) populations, particularly group 3 ILCs (ILC3s) [21]. ILC3 cells, through interleukin 22, are important for the synthesis and secretion of antimicrobial peptides (AMPs) that restrict gut pathogen survival [21]. In another study on microbiota-derived metabolites from the mouse gut, 5-hydroxy-L-tryptophan and salicylic acid were identified by mass spectrometric methods as potential activators of AHR [22]. However, it remains to be determined how these two metabolites affect gut homeostasis.
Over the years it has been thought that AHR signalling has to be tightly regulated, as uncontrolled or prolonged ligand activation (due to reduced metabolic clearance of ligands) or constitutive AHR activation may compromise gut homeostasis [23]. AHR signalling in the host is controlled by cytochrome P450 (CYP) enzymes, such as the CYP1A and CYP1B sub-families, because they metabolize AHR ligands and thereby attenuate AHR activation and downstream signalling. On the other hand, CYP1A and CYP1B gene expression is itself controlled by AHR. In a recent report, it was observed that CYP1 enzymes also control cellular AHR ligand availability [24]. Dysregulated CYP1A1 gene expression in mice leads to a significant reduction in the cellular level of AHR agonists, whereas constitutive expression in IECs led to the disappearance of AHR-dependent ILC3s and T helper 17 cells, which increases the chance of intestinal infection [24]. This shows that these immune cells depend on optimal AHR signalling for their specific roles in the intestine.

The gut microbiota metabolizes dietary polysaccharides, mainly cellulose, by fermentation to generate short-chain fatty acids (SCFAs) such as acetate, butyrate and propionate. Butyrate and propionate, but not acetate, control AHR gene expression in HT-29 cells [25]. Also, SCFA-activated AHR exerts control over the gut microbial composition in AhR+/+ (wild-type) and AhR−/− (knock-out) mice. SCFAs were also demonstrated to modulate AHR activity in an indirect manner via G-protein coupled receptors (GPCRs), which use SCFAs as ligands [25].

A few diseases are known to occur and progress because of a breakdown in gut homeostasis resulting from a compromised microbiota and AHR signalling. A recent report by Lamas et al. demonstrated a relationship between the gut microbiota and the host caspase recruitment domain family member 9 (CARD9) [26]. CARD9 is a known susceptibility gene for inflammatory bowel disease (IBD) and has a role in immunity against microorganisms. Microbiota from Card9−/− mice are unable to metabolize tryptophan into the indoles that serve as AHR agonists. Moreover, it was observed that gut inflammation in mice was reduced after the introduction of three Lactobacillus strains that metabolize tryptophan. In individuals with IBD, decreased production of AHR ligands was observed in the microbiota, particularly in those with CARD9 risk alleles associated with IBD [26]. Collectively, these studies indicate a strong regulatory influence of AHR ligands on host cells, including the immune cells associated with gut immune homeostasis. Breakdown of AHR signalling in the intestinal cells may lead to inflammatory reactions and immuno-pathologies of the gut.
At present, only a limited number of NRs have been demonstrated to have a direct role as ligand-binding proteins for the microbiota-secreted metabolites in the gut (Table 1). As mentioned earlier, butyrate generated by the gut microbiota can act as a peroxisome proliferator-activated receptor-γ (PPAR-γ; NR1C3) agonist [27]. In a recent finding, propionate secreted by gut bacteria has also been shown to be an activator of PPAR-γ [28]. PPAR-γ is a known anti-inflammatory mediator that attenuates several pro-inflammatory signalling pathways stimulated by transcription factors such as NF-κB and AP-1. Hence, butyrate- and propionate-mediated activation of PPAR-γ gene transcriptional activity could have a protective role in the prevention of gut inflammation and the regulation of immune tolerance [29]. Also, accumulating evidence suggests the presence of specific bacterial strains in the gut, though not yet identified, which influence PPAR-γ gene regulatory activity [30]. Gut microbial metabolism of dietary tryptophan yields an indole derivative called indole-3-propionate (IPA), which exhibits high-affinity binding to its cognate pregnane X receptor (PXR; NR1I2) in IECs, as investigated by Venkatesh et al. a few years ago [31]. IPA secreted by the mouse gut bacterium Clostridium sporogenes has anti-inflammatory and barrier-protective roles in the intestine that are mediated via PXR. Moreover, Nr1i2−/− mice show compromised epithelial barrier protection in the gut, indicating the important role of this receptor in maintaining barrier function in the host [32]. Although other indole derivatives, such as indoxyl sulfate and indole-3-acetate, are also secreted by the gut bacteria, apart from IPA, data supporting their activation of intestinal PXR are lacking.

Conclusion

The gut microbiota is a complex, dynamic and important constituent of our body that exhibits mutualism and has a varied impact on our physiological system, culminating in optimal health. A few of the gut bacterial species have been demonstrated to secrete metabolites that have significant effects on intestinal cell homeostasis and on innate and adaptive immunity. The microbial-secreted metabolites in the gut include SCFAs, indole derivatives and secondary bile acids, which act as natural ligands for host NRs such as FXR, AHR, PXR and PPAR-γ. Such interactions have important consequences in providing intestinal epithelial barrier protection and anti-inflammatory responses. Dysbiosis of the microbial populations in the gut can have unfavourable outcomes for the host in the form of disease onset, such as IBDs, cancer, diabetes and obesity. Presently, only a limited number of metabolites synthesized and secreted by the gut microbiota are known and characterized that influence host biology through the nuclear receptors. An important aspect in this respect is the available concentration of such metabolites in the gut and the mechanisms and regulatory features by which they enter the IECs to modulate the NRs. Also, it is not yet clear approximately how many secreted metabolites are present in the gut or what novel pathways they may utilize to interact with the host. Apart from affecting host physiology itself, metabolites secreted by one microbial species most likely also influence other gut microbial species in variable ways, which makes the entire microbiome immensely complex and dynamic [33]. Moreover, it is envisaged that microbial metabolites also influence other microbial communities across taxa in the gut through signalling pathways that probably have
a role in maintaining their optimal populations, which benefits the host. In conclusion, the future of this area of research is no doubt extremely bright, but the challenge remains to establish the exact numbers of different microbiota-secreted metabolites in the host gut, as well as the microbial genera and species that produce them. More precisely, enumerating and characterizing those metabolites that actually modulate NR activity, including their target genes in the host, may pay dividends in establishing their roles in host physiology as well as in several pathologies. doi:10.11131/2017/101316

Figure 1: A schematic picture showing gut microbiota-secreted metabolites binding to host nuclear receptors (NRs) and the modulation of various functions (favourable effects) by regulating specific gene expression. On the other hand, dysbiosis and alteration in the type of secreted metabolites in the gut can give rise to unfavourable effects in the form of disease onset in the host.

Table 1: Select gut bacteria-secreted metabolites and their target host nuclear receptors.
Effect of Spectral Power Distribution on the Resolution Enhancement in Surface Plasmon Resonance

For wavelength interrogation based surface plasmon resonance (SPR) sensors, refractive index (RI) resolution is an important parameter for evaluating the performance of the system. In this paper, we explore the influence of the spectral power distribution on the RI resolution of the SPR system by simulating the reflectivity curves corresponding to different incident angles of the classical Kretschmann structure together with several different spectral power distribution curves. A wavelength interrogation based SPR system is built, and commercial micro-spectrometers (USB2000 and USB4000) are used as the detection components. The RI resolutions of the SPR system in these two cases are measured. Both theoretical and experimental results show that the spectral power distribution has a significant effect on the RI resolution of the SPR system.

Introduction

Surface plasmon resonance (SPR) is the resonant oscillation of conduction electrons at the interface between a noble metal and a dielectric, stimulated by the incident light [1]. Noble metals such as gold and silver have negative permittivities, and dielectric materials such as liquids, gases, or solids are used [2]. Because of the rapid enhancement of electromagnetic fields near the metal structure, SPR-based optical sensors are exceedingly sensitive to small changes in the refractive index (RI) at the metal interface and are capable of label-free real-time sensing [2]. In recent years, SPR sensors have been widely used in many fields, such as drug selection [3], clinical diagnosis [4], food detection [5], and environmental monitoring [6], and have become a standard biophysical tool [7]. According to the detection method, SPR sensors can be divided into four types: wavelength interrogation, angle interrogation, intensity interrogation, and phase interrogation. Among them, wavelength interrogation based SPR sensors have great advantages over the other interrogation methods, such as miniaturization, SPR imaging technology [8], multi-channel operation [9], and multi-parameter measurement. However, wavelength interrogation based SPR sensors, also called spectral SPR sensors, use spectrometers as detectors, which largely limits their RI resolution. Improving the RI resolution of spectral SPR sensors is therefore an important issue.

The RI resolution of a sensor is the minimum change in the parameter to be determined that can be resolved by the sensing device [10]. For spectral SPR sensors, the RI resolution can be expressed as the measurement accuracy of the resonance wavelength divided by the RI sensitivity of the sensor. The measurement accuracy is defined as the standard deviation of multiple measurements, which is limited by the width and the signal-to-noise ratio of the SPR measurement curve [11]. When the width of the SPR measurement curve increases, the uncertainty of the measurement accuracy increases. Similarly, the measurement accuracy decreases as the signal-to-noise ratio of the SPR measurement curve decreases. However, these two factors are directly affected by the spectral power distribution. The spectral power distribution here refers to the spectral power distribution of the system, which is related to the light source, the components of the system, and the response of the charge coupled device (CCD).
On the one hand, the SPR measurement curve is affected by both the spectral power distribution and the reflectivity of the SPR sensor. Once the parameters of the SPR sensor are fixed (using the same SPR sensor), the width of the SPR measurement curve is related only to the spectral power distribution. On the other hand, the wavelength interrogation SPR system uses the spectrometer as the detection component; the stronger the spectral power distribution is, the higher the signal-to-noise ratio of the SPR measurement curve will be. The RI sensitivity of such sensors was studied in detail by J. Homola [12]. The RI sensitivity is defined as the ratio of the change in the resonance wavelength to the change in the refractive index when the refractive index of the sample is changed slightly. Generally, the RI sensitivity increases with an increase in the resonant wavelength. However, differences in the spectral power distribution lead to different displacements of the resonance wavelength and thus affect the RI sensitivity. Therefore, the RI resolution of the SPR system is mainly affected by the spectral power distribution and is different when operating at different resonant wavelengths. Furthermore, for different spectral power distributions, the optimal resonant wavelength (the wavelength of optimal RI resolution) of the SPR system will also change.

The paper is organized as follows. The design of the study, the setting, the materials involved, a description of all comparisons, and the type of analysis used are given in Section 2. The noisy SPR spectra affected by different spectral power distributions are given in Section 3. The measurement accuracy and RI sensitivity as functions of the resonance wavelength for different spectral power distributions are studied, and the optimal resonant wavelengths are presented, in Section 4. In Section 5, comparative experiments on the USB2000 (Ocean Optics) and USB4000 (Ocean Optics) spectrometers are presented. At the end of the paper, the results, discussion, and conclusions are given.

Methods

In this paper, the influence of the spectral power distribution on the measurement accuracy and RI sensitivity of the SPR system is investigated with a simulation model, and the influence of the spectral power distribution on the RI resolution is analyzed. The attenuated total reflection (ATR) method together with the Kretschmann configuration [13] is often used in SPR measurements. For a prism-based SPR sensor, the configuration includes a high-RI dielectric (K9 coupling prism), a chromium layer with a thickness of 10 nm, and a gold layer with a thickness of 40 nm. As shown in Fig. 1, a light beam from the halogen lamp passes through the coupling prism. If the resonance condition is satisfied, that is, when the evanescent wave vector matches the surface plasmon wave vector exactly, the SPR spectrum exhibits a dip located at the resonance wavelength. When the refractive index of the object changes, the condition of surface plasmon resonance changes, and the resonant wavelength is red- or blue-shifted. Furthermore, an SPR system based on the USB2000 (Ocean Optics) and USB4000 (Ocean Optics) spectrometers is built, and the relationship between the RI resolution and the wavelength under the influence of different spectral power distributions is obtained to verify the theory. A CCD (such as the Sony ILX511B or Toshiba TCD1304AP) is used as the detection element in the micro spectrometer, and their response decreases in the 600 nm - 900 nm band.
So, the spectral power distribution detected by the spectrometer shows a downward trend in this band. The actual measured spectrum always contains noise, and the noise mainly affects the measurement accuracy. The largest source of noise for the SPR sensor system is typically detector noise [15]. The noise from the spectrometer can be divided into three major categories, namely readout noise, dark noise, and photoelectron noise. Because the three types of noise are independent of each other, the total noise can be expressed as

N = (N_R² + N_D² + N_P²)^(1/2),   (1)

where N is the total noise of the spectrometer, N_R is the readout noise, which is related to the electronic circuitry used to read the signal from the CCD, N_D is the dark noise, which depends on the accumulation of dark electrons and is determined by the integration time and temperature, and N_P is the photoelectron noise, which depends on the intensity of the spectrum and obeys Poisson statistics.

The SPR measurement curve is the product of the spectral power distribution and the system reflectance. Considering the noise, the SPR measurement curve can be obtained as

I(λ) = S(λ) · R(λ) + N,   (2)

where S(λ) is the spectral power distribution and R(λ) is the reflectance of the SPR sensor. In addition, the resonance wavelength is calculated with the qualitative method, and the RI resolution curves corresponding to the USB2000 and USB4000 spectrometers are obtained by polynomial fitting.

Modeling

An N-layer model was presented in [14], where n_k is the complex refractive index and ε_k the permittivity of the k-th layer, which has thickness d_k. The characteristic matrix of the N-layer structure can be expressed as

M = Π_{k=2}^{N−1} M_k = [ M₁₁  M₁₂ ; M₂₁  M₂₂ ],   with   M_k = [ cos β_k   (−i sin β_k)/q_k ; −i q_k sin β_k   cos β_k ],   (3)

where β_k = (2π d_k/λ)(ε_k − n₁² sin²θ)^(1/2) and q_k = (ε_k − n₁² sin²θ)^(1/2)/ε_k for p-polarized light, and therefore the reflectance is

R = |r_p|²,   r_p = [(M₁₁ + M₁₂ q_N) q₁ − (M₂₁ + M₂₂ q_N)] / [(M₁₁ + M₁₂ q_N) q₁ + (M₂₁ + M₂₂ q_N)].   (4)

The typical SPR reflectivity curves at different incident angles are shown in Fig. 2. As the incident angle decreases, the SPR reflectivity curve shifts to a longer wavelength. The incidence angles are 48.5°, 48°, 47.5°, 47°, 46.6°, 46.1°, 45.5°, 44.8°, 44°, and 43.1°, respectively.
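To make the modeling above concrete, the following Python sketch implements the characteristic-matrix reflectance of Eqs. (3)-(4) for a Kretschmann stack and builds a noisy measurement curve in the spirit of Eqs. (1)-(2). It is a minimal illustration, not the authors' code: the Drude parameters for gold, the constant chromium and analyte permittivities, the Gaussian-shaped spectral power distribution, the noise level, and all function names are assumptions chosen only to produce a qualitatively reasonable dip.

```python
import numpy as np

def eps_gold(lam_nm):
    """Very rough Drude permittivity of gold (placeholder values, not the paper's data)."""
    e_ev = 1239.84 / lam_nm                    # photon energy in eV
    eps_inf, wp, gam = 9.8, 9.0, 0.07          # assumed background, plasma freq., damping (eV)
    return eps_inf - wp ** 2 / (e_ev ** 2 + 1j * gam * e_ev)

def reflectance_p(lam_nm, theta_deg, layers):
    """p-polarized reflectance of an N-layer stack via the characteristic matrix
    (Eqs. (3)-(4)); layers = [(epsilon, thickness_nm), ...], outer thicknesses unused."""
    theta = np.radians(theta_deg)
    eps = np.array([e for e, _ in layers], dtype=complex)
    d = np.array([t for _, t in layers], dtype=float)
    kappa = np.sqrt(eps - eps[0].real * np.sin(theta) ** 2)   # (eps_k - n1^2 sin^2(theta))^1/2
    q = kappa / eps                                            # q_k for p-polarization
    beta = 2.0 * np.pi * d / lam_nm * kappa                    # phase thickness beta_k
    M = np.eye(2, dtype=complex)
    for k in range(1, len(layers) - 1):                        # product over inner layers
        Mk = np.array([[np.cos(beta[k]), -1j * np.sin(beta[k]) / q[k]],
                       [-1j * q[k] * np.sin(beta[k]), np.cos(beta[k])]])
        M = M @ Mk
    num = (M[0, 0] + M[0, 1] * q[-1]) * q[0] - (M[1, 0] + M[1, 1] * q[-1])
    den = (M[0, 0] + M[0, 1] * q[-1]) * q[0] + (M[1, 0] + M[1, 1] * q[-1])
    return float(np.abs(num / den) ** 2)

# Illustrative Kretschmann stack: K9 prism / Cr (10 nm) / Au (40 nm) / analyte (n = 1).
wavelengths = np.linspace(500.0, 900.0, 801)
R = np.array([reflectance_p(lam, 44.0,
                            [(1.515 ** 2, 0.0),        # prism (assumed constant index)
                             (-1.2 + 20.0j, 10.0),     # Cr (rough constant permittivity)
                             (eps_gold(lam), 40.0),    # Au (Drude placeholder)
                             (1.0, 0.0)])              # analyte, n = 1
              for lam in wavelengths])

# Noisy SPR measurement curve I = S * R + N (Eq. (2)) with an assumed spectral
# power distribution S and additive detector noise.
S = 50000.0 * np.exp(-((wavelengths - 650.0) / 180.0) ** 2)
rng = np.random.default_rng(0)
I = S * R + rng.normal(0.0, 50.0, wavelengths.size)
lam_res = wavelengths[np.argmin(I / S)]                # simple dip-minimum resonance estimate
print(f"resonance wavelength ≈ {lam_res:.1f} nm")
```

Repeating the last three lines many times with fresh noise, as done below for the measurement-accuracy analysis, yields the scatter of the recovered resonance wavelength.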
Measurement accuracy

As mentioned in Section 1, the RI resolution is closely related to the measurement accuracy. The spectral width and noise of the SPR curve affect the measurement accuracy: when the SPR curve becomes wider or its noise level is larger, it is more difficult to measure the resonant wavelength precisely, thus reducing the measurement accuracy. However, the spectral power distribution largely determines the width and noise level of the SPR curve. It can be seen from Fig. 4 to Fig. 6 that, even at the same refractive index, differences in the spectral power distribution produce different widths and noise levels of the SPR measurement curves. To demonstrate this effect, we utilize the simulated SPR measurement curves affected by the three different spectral power distributions built in Section 3. The SPR measurement curves are simulated 1000 times, and the resonance wavelengths at each incident angle are calculated. By calculating the standard deviation of each resonance wavelength, the measurement accuracy of the resonant wavelength at each incident angle can be obtained. The measurement accuracy curves are obtained by third-order polynomial fitting, as shown in Fig. 7. As can be seen from Fig. 7, the standard deviations increase monotonically with the resonant wavelength, which means that the measurement accuracy decreases gradually. This is mainly because the width of the SPR curve increases as the resonant wavelength increases.

Moreover, comparing the three fitting curves, we find that the measurement accuracy at the same resonance wavelength differs owing to the difference in spectral power distribution. In the range between 500 nm and 700 nm, the measurement accuracy of SPD3 is much better than that of the others (the lower the standard deviation, the better), because the corresponding SPR curve has a higher signal-to-noise ratio. Similarly, the measurement accuracy of SPD1 is better than that of the other two in the range between 700 nm and 900 nm.

RI sensitivity

A slight change in the refractive index of the analyte causes an offset of the SPR resonant wavelength. The RI sensitivity of the SPR sensor can be defined as the ratio of the change in the resonance wavelength to the change in the refractive index of the analyte, when the refractive index of the analyte changes slightly. In order to obtain the RI sensitivity curves, the refractive index of the analyte is increased from 1 to 1.000001, and the resonance wavelength at each incident angle is calculated. Thus, the RI sensitivity at the resonance wavelength corresponding to each angle is obtained. Finally, third-order polynomials are used to obtain the RI sensitivity curves, which represent the RI sensitivities corresponding to the resonant wavelength, as shown in Fig. 8. With an increase in the resonance wavelength, the RI sensitivity increases accordingly. For the same resonance wavelength, the larger the slope of the spectral power distribution, the greater the RI sensitivity will be.

RI Resolution

The RI resolution can be defined as the minimum refractive index change of an analyte that can be detected. According to the definition of Homola, the RI resolution can be represented as

σ_n = σ_λ / S_n,   (5)

where σ_n is the RI resolution of the sensor, σ_λ is the standard deviation of the resonant wavelength, and S_n represents the RI sensitivity of the sensor. Previously, we obtained the measurement accuracy and RI sensitivity of the SPR system affected by three different spectral power distributions. By using (5), the RI resolution curves can be obtained, as shown in Fig. 9. We find the lowest point of each curve, which represents the best RI resolution and the corresponding resonance wavelength. It can be seen that the RI resolution of SPD1 shows a downward trend over the full wavelength band, which shows that the RI resolution becomes better with an increase in the resonance wavelength. When the resonance wavelength is 786.11 nm, the RI resolution of SPD2 is the best, with a value of 2.83 × 10⁻⁶. Similarly, the best resonance wavelength corresponding to SPD3 is 730.2 nm, where the resolution of the SPR system reaches 2.61 × 10⁻⁶. Compared with operation at other resonant wavelengths, the RI resolution is improved. These results show that, even with the same measurement system, the spectral power distribution affects the RI resolution of the SPR system and the position of the optimal resonant wavelength of the system.
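The resolution analysis above can be reproduced in miniature with the sketch below, which replaces the full transfer-matrix simulation with an assumed Lorentzian dip and an assumed linear wavelength-RI relation, recovers the resonance wavelength with a centroid estimator, and then applies Eq. (5). The dip shape, the 6000 nm/RIU sensitivity, the noise level, and the function names are placeholders for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.linspace(500.0, 900.0, 2001)              # wavelength grid, nm

def dip_center(n, sens=6000.0, lam0=650.0, n0=1.0):
    # Assumed linear relation between analyte RI and resonance wavelength;
    # 'sens' (nm per RI unit) is a placeholder sensitivity.
    return lam0 + sens * (n - n0)

def measured_curve(n, spd, width=40.0, noise=0.01):
    # Lorentzian SPR dip multiplied by the spectral power distribution, plus noise.
    dip = 1.0 - 0.85 / (1.0 + ((lam - dip_center(n)) / width) ** 2)
    return spd * dip + rng.normal(0.0, noise * spd.max(), lam.size)

def resonance(curve, spd):
    # Centroid of the normalized dip as a simple resonance-wavelength estimator.
    depth = np.clip(1.0 - curve / spd, 0.0, None)
    return np.sum(lam * depth) / np.sum(depth)

spd = np.exp(-((lam - 650.0) / 180.0) ** 2)        # assumed spectral power distribution

# Measurement accuracy: standard deviation of the recovered resonance wavelength.
sigma_lam = np.std([resonance(measured_curve(1.0, spd), spd) for _ in range(1000)])

# RI sensitivity: resonance-wavelength shift per unit RI change (noise-free).
dn = 1e-3
S_n = (dip_center(1.0 + dn) - dip_center(1.0)) / dn

# Eq. (5): RI resolution = sigma_lambda / S_n.
sigma_n = sigma_lam / S_n
print(f"sigma_lambda = {sigma_lam:.4f} nm, S_n = {S_n:.0f} nm/RIU, resolution = {sigma_n:.2e}")
```

Changing the assumed spectral power distribution (and hence the dip width and signal-to-noise ratio) changes sigma_lam and therefore the estimated resolution, which is the qualitative effect the simulations above demonstrate.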
Comparative experiments based on the self-designed wavelength SPR system

A self-designed wavelength SPR system with an adjustable incident angle is built, and its structure is shown in Fig. 10. A tungsten halogen lamp (A) is used as the light source. The light is collimated by an optical fiber collimator (B, ∅5 mm, SMA905). After being polarized by the polarizer (C, ∅25.4, extinction ratio 500:1), the light is incident on the surface of the SPR module (D). The SPR module is in the Kretschmann geometry: a right-angle prism (K9) coated with a 10 nm thick chromium film on the sensor surface and a 40 nm thick gold film on the surface of the chromium film. After the SPR phenomenon occurs on the surface of the gold film, the reflected light enters the fiber collimator (E, ∅5 mm, SMA905). Finally, it is received by a CCD-based spectrometer (F, G). The spectrometers we select are the USB2000 spectrometer (F, Ocean Optics) and the USB4000 spectrometer (G, Ocean Optics). The USB2000 spectrometer is based on the Sony ILX511B linear CCD chip, which contains 2048 pixels, and the USB4000 spectrometer is based on the Toshiba TCD1304AP linear CCD chip, which contains 3648 pixels. Both are equipped with a 25 μm entrance slit, and the groove spacing is 1.667 μm/line. The spectra of the tungsten halogen lamp collected by the USB2000 and USB4000 spectrometers are shown in Fig. 11.

By changing the incident angle from θ1 to θ10, the resonance wavelength changes, and the SPR measurement curves acquired with the USB2000 and USB4000 spectrometers are shown in Figs. 12 and 13, respectively (in each figure, the reflectivity curves from left to right correspond to incident angles θ1 through θ10). To ensure the stability of the experimental environment, the maximum value of the measured tungsten halogen lamp spectrum is kept at approximately 50,000 counts throughout the whole process. We measure 1000 times and calculate the standard deviation of each resonance wavelength, from which the measurement accuracy of the resonant wavelength at each incident angle is obtained. The refractive index of the analyte is changed slightly several times, and the changes in the resonance wavelength are recorded. By linear fitting, the RI sensitivity at each incident angle is calculated. Then, the RI resolution at the resonant wavelength corresponding to each incident angle is obtained. The RI resolution curves corresponding to the USB2000 and USB4000 spectrometers, obtained by third-order polynomial fitting, are shown in Fig. 14, which shows that the RI resolutions in these two situations are different and that the optimal resonance wavelengths of the USB2000 and USB4000 spectrometers for this SPR system are 747.25 nm and 714.86 nm, respectively.

Results

We have explored the effect of the spectral power distribution on the RI resolution. The RI resolution differs when the spectral power distribution of the system differs. It can be seen from Fig. 14 that the RI resolution of the USB2000 spectrometer is better than that of the USB4000 spectrometer. The best RI resolution of the USB4000 spectrometer is 1.97 × 10⁻⁶ at a resonance wavelength of 714.86 nm, and the best RI resolution of the USB2000 spectrometer is 1.85 × 10⁻⁶ at a resonance wavelength of 747.25 nm. This is because the spectral power distribution of the USB2000 spectrometer is much flatter than that of the USB4000 spectrometer and has a higher spectral response. In the decreasing range, the behaviour is similar to the situations of SPD2 and SPD3 in Section 4. The main reason for this phenomenon is that the two spectrometers use different CCDs.
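As a small illustration of how the optimal resonance wavelength is read off a fitted resolution curve such as Fig. 14, the sketch below fits a third-order polynomial to hypothetical (resonance wavelength, RI resolution) pairs and locates the minimum. The data points are invented for illustration and are not the measured USB2000/USB4000 values.

```python
import numpy as np

# Hypothetical per-angle results: resonance wavelength (nm) and RI resolution.
lam_res = np.array([560.0, 600.0, 640.0, 680.0, 720.0, 760.0, 800.0, 840.0])
resolution = np.array([4.1e-6, 3.3e-6, 2.8e-6, 2.3e-6, 2.0e-6, 1.9e-6, 2.1e-6, 2.6e-6])

# Third-order polynomial fit of RI resolution versus resonance wavelength.
coeffs = np.polyfit(lam_res, resolution, 3)
grid = np.linspace(lam_res.min(), lam_res.max(), 2001)
fit = np.polyval(coeffs, grid)

# The optimal resonance wavelength is where the fitted resolution is smallest.
best = grid[np.argmin(fit)]
print(f"optimal resonance wavelength ≈ {best:.2f} nm, best resolution ≈ {fit.min():.2e}")
```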
From the 1000 measurements and error theory, the confidence interval of the optimum resonance wavelength of the USB2000 spectrometer is 746.08 nm - 748.41 nm, and that of the USB4000 spectrometer is 713.74 nm - 715.98 nm. It is the difference in the spectral power distribution that makes the RI resolution different. This approach is suitable for improving the RI resolution of a wavelength interrogation SPR system based on a micro spectrometer. It is important to note that the replacement of any component in the system may make the measurement inaccurate, so it is necessary to re-measure the system whenever any component is replaced.

Discussion

The agreement between the experiment and the theory proves that, for the wavelength interrogation SPR system, the spectral power distribution of the system has a significant influence on the RI resolution of the system. It provides a reference for the precise measurement of wavelength interrogation SPR systems. However, this method does not apply to phase, angle, or intensity interrogation SPR systems. Moreover, the optimum resonant wavelength positions of different wavelength modulation systems are different.

Conclusions

In conclusion, we have explored the effect of the spectral power distribution on the RI resolution. We simulate the SPR measurement curves (with the same reflectivity curves) affected by three different spectral power distributions, calculate the measurement accuracy and RI sensitivity at each resonance wavelength, and then obtain the RI resolution curves. According to the simulated results and our analysis, the spectral power distribution affects the measurement accuracy and RI sensitivity of the SPR system and therefore its RI resolution. The agreement between the experiment and the theory confirms this conclusion. On the whole, the flatter the spectral power distribution, the better the RI resolution. When different spectrometers are used as detectors, the RI resolution and the optimal resonant wavelength can first be measured, and the experiment can then be set up under this condition. In this way, more accurate results can be obtained. This paper mainly proposes a method to improve the resolution of the wavelength modulation SPR system and provides a reference for the precise measurement of wavelength interrogation SPR systems.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Standard Reference Materials for Cement Paste, Part I: Suggestion of Constituent Materials Based on Rheological Analysis

The purpose of this study was to develop a standard reference material that can simulate the flow characteristics of cement paste. For this purpose, it is important to determine the constituent materials of the standard material for cement paste. Generally, cement paste is a mixture of cement and water. To determine the constituent materials, cement paste was divided into a powder that can replace cement and a matrix fluid. Using the concept of rheology, which allows the flow properties of the selected materials to be evaluated quantitatively under defined mixing conditions, experiments were carried out step-by-step according to material composition combination, stage of aging, and material type. As a result, limestone powder was determined to be the cement substitute, and glycerol and water were determined to be the matrix fluid substitute. After an analysis of the compatibility with the required properties of particulate standard materials, the finally selected standard reference material was found to satisfy the required performance.

Introduction

Most metropolitan areas face various challenges, such as rising land prices, a lack of available land, and limits on the horizontal use of land. Accordingly, the demand for super concrete structures has been increasing constantly, giving rise to several high-rise buildings and large-scale structures [1,2]. On the other hand, although the construction industry has been recognized for its outstanding construction technology and has made remarkable achievements through its experience of constructing large concrete structures, higher-quality construction technology is still needed for the construction of such super concrete structures. Such high-standard construction technology requires control of the material properties during construction, and therefore a technology for the quantitative evaluation of construction materials. In short, to enable quantitative construction technology, it is essential to analyze materials quantitatively. For this purpose, it is important to develop a reference material that exhibits consistent characteristics that can be quantified under any condition. This makes it possible to secure more economical and effective construction technology and to evaluate construction performance objectively. Therefore, there is a need to develop a consistent quality-control material based on quantification of the concrete flow performance, which is called a standard reference material [3][4][5][6][7]. The development of a standard reference material enables a stable construction performance evaluation whose quality can be controlled consistently regardless of the manufacturing process in the construction stage or the contractor. In addition, it becomes possible to calibrate the various rheometers that have already been developed so that the flow condition and flow performance can be measured quantitatively with absolute values, rather than relying only on an initial flow performance evaluation based on relative comparison. Ultimately, the performance in the initial stages of concrete construction can be evaluated quantitatively, making a scientific pre-construction performance evaluation and utilization of the standard reference material possible.
In addition, a standard reference material can be used in various other fields, including as a recycling sample for pipe circuit pumping tests that examine pumping performance, as a standard reference sample for evaluating the replacement cycle in pump equipment degradation tests, and as a standard sample for quality control in cutting-edge fields such as 3D digital printing. On the other hand, concrete is a multicomponent material that contains particles with a wide variety of sizes, ranging from minute particles such as cement to coarse particles tens of millimeters in diameter [5,6]. Therefore, the development of standard reference materials for the quantitative evaluation of cementitious materials and for substituting the initial flow characteristics of cement paste has been actively carried out, but there is still no clear definition of a standard reference material [7][8][9][10].

The primary objective of this study was to derive the constituent materials of the standard reference material for cement paste, which is the most essential component of concrete, based on the concept of rheology, which enables an evaluation of the flow properties, toward a definition of the standard reference material. To develop standard reference materials for cement paste, the properties required of multicomponent standard reference materials that include particles should be examined. The currently suggested requirements for a particle-phase standard reference material are as follows: (1) no particle separation during the experiment; (2) a linear Bingham response over a wide shear strain range; (3) no change in the rheological or chemical properties of the fluid and particles over a long period of time; (4) sufficient yield stress to prevent material separation of the aggregate; and (5) almost no hysteresis (difference between the ascending and descending response curves) [7]. This study examined several tentative materials for particle-phase standard reference materials for cement paste and analyzed their rheological properties under defined mixing conditions. This paper describes the results of an evaluation of materials that are suitable for the properties required of a particle-phase standard reference material. Based on the results of this study, along with calibration materials for various rheometer measurement systems, research on standard reference materials should be expanded step-by-step from minute particles to fine aggregates and then to coarse aggregates, including the standard reference material for mortar all the way to the material for concrete [11][12][13].

Experimental Plan

Generally, for cement paste, cement powder and matrix fluid can be considered the two representative components. The flow performance of a powder that can replace cement when made into a mixture, as well as of a matrix fluid replacing the mixture of cement and water, should be investigated carefully. Table 1 lists the constituent materials used in this study. Limestone powder, blast furnace slag, silica powder, and meta kaolin, which show little reaction under moist conditions and whose mean grain diameters are similar to that of cement, were selected as tentative substitutes for cement powder. Table 2 lists the constituents of each material [14][15][16][17].
In addition, corn syrup and glycerol, which have flow properties similar to those of the cement paste matrix as well as chemical stability with a consistent viscosity over time, were selected as tentative substitutes for the matrix fluid [18,19]. Table 3 lists the chemical composition of glycerol. A corn syrup product made from 100% natural pure corn starch was used.

As listed in Table 4, the experiment was carried out stage by stage. After reviewing the required properties of the standard reference materials, tentative materials for the standard materials were selected, and rheological analyses incorporating composition combinations, initial mixture conditions, and material types were then carried out. Finally, the materials selected for the standard reference material were evaluated against the requirements for particulate standards. The mixtures were prepared by blending glycerol or corn syrup, selected as matrix fluids, with each of limestone powder, blast furnace slag, silica powder, and meta kaolin.

Table 4. Steps for SRM development.
Step 1: Review of particulate standard requirements.
Step 2: Selection of tentative materials.
Step 3: Rheological analysis (composition combination; initial mixture analysis; analysis by time; analysis by type, i.e., particle sizes and grades).
Step 4: Final review of SRM components.

Experiment Method

For this study, a rheology experiment that can evaluate the initial flow performance was conducted primarily to examine the properties required of a particle-phase standard reference material. The ingredients were mixed in four steps spanning 120 s (15 s, 15 s, 30 s, and 60 s) using a high-speed mixer. At the end of each step, the ingredients were kneaded to ensure even mixing. The rheological properties were tested at a constant temperature (20 °C) and time using an Anton Paar rheometer (Figure 1). Generally, the rheology is determined by the relationship between the shear stress and the shear rate acting on the material. This study used the Bingham model given in Equation (1) to determine the plastic viscosity and yield stress. In this model, the plastic viscosity is the slope of the shear stress versus shear rate relation and the yield stress is the y-intercept, both determined by regression analysis:

τ = τ₀ + η·γ̇,   (1)

where τ, η, γ̇, and τ₀ are the shear stress, plastic viscosity, shear rate, and yield stress, respectively.

Before beginning the experiment, the samples were rotated for 60 s at a shear rate of 50 s⁻¹ to homogenize all the ingredients and were then given a 10 s rest to reach equilibrium. The shear rate was increased from 0.1 s⁻¹ to 40 s⁻¹ and then decreased back to 0.1 s⁻¹, and on the upward and downward curves the shear resistance exerted on the spindle was measured at rotation velocities divided into 10 steps. A serrated spindle, 50 mm in diameter, was used to prevent separation and slip of the ingredients [20][21][22].
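As a small illustration of how the Bingham parameters in Equation (1) are extracted from a measured flow curve, the sketch below fits a straight line to hypothetical shear-rate/shear-stress data by linear regression; the numerical values and the linearity check are illustrative assumptions, not measurements from this study.

```python
import numpy as np

# Hypothetical flow-curve data from the descending ramp (shear rate in 1/s,
# shear stress in Pa); illustrative values only.
shear_rate = np.array([0.1, 4.5, 8.9, 13.3, 17.8, 22.2, 26.6, 31.1, 35.5, 40.0])
shear_stress = np.array([12.3, 21.0, 29.6, 38.4, 47.1, 55.8, 64.7, 73.2, 82.0, 90.8])

# Bingham model (Eq. (1)): tau = tau0 + eta * gamma_dot.
# Linear regression gives the plastic viscosity (slope) and yield stress (intercept).
eta, tau0 = np.polyfit(shear_rate, shear_stress, 1)

# Coefficient of determination as a simple check of Bingham linearity (requirement 2).
pred = tau0 + eta * shear_rate
r2 = 1.0 - np.sum((shear_stress - pred) ** 2) / np.sum((shear_stress - shear_stress.mean()) ** 2)
print(f"plastic viscosity = {eta:.2f} Pa·s, yield stress = {tau0:.2f} Pa, R² = {r2:.4f}")
```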
Rheology Analysis According to the Constituent Composition System

First, the role of each substance of the cement substitute and the matrix fluid substitute was analyzed to determine the constituent mixture composition of the particle-phase standard reference material to be developed. The first question to examine for the constituent composition is whether the ingredient composition of the standard reference material should be a two-component or a three-component composition system. For this review, three combinations were investigated, as listed in Table 5. Here, limestone powder was used as the cement substitute because it was believed to be the most stable throughout the experiment and has a low level of reaction to moisture. In addition, as corn syrup contains water, glycerol, which does not contain water and enables relatively easier control of the composition proportion, was used to improve the accuracy of the experiment. Figure 2 presents the results of the rheology analysis for each combination. The two-component combination of limestone powder and glycerol exceeded the torque capacity of the rheometer and was therefore excluded from the experimental results.

Based on the rheology experiments on each composition system, their compatibility with the properties required of the particle-phase standard reference material was analyzed. As shown in Figure 2a, hysteresis between the ascending and descending curves was detected in the two-component combination of limestone powder and water. On the other hand, the three-component combination of limestone powder, glycerol, and water satisfied all of the requirements for the particle-phase standard reference material, without any material separation or hysteresis. This indicates that the use of a matrix fluid (glycerol) plays an important role in preventing material separation and hysteresis. Therefore, the three-component composition system is essential for the particle-phase standard reference material to be developed.
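Hysteresis of the kind seen in Figure 2a can be quantified from the ascending and descending flow curves; the sketch below computes the area enclosed between the two ramps for hypothetical data. The stress values and the acceptance criterion are assumptions for illustration only, not measurements from this study.

```python
import numpy as np

# Hypothetical ascending and descending flow curves recorded over the same
# shear-rate steps (0.1 -> 40 -> 0.1 1/s); the stress values are illustrative only.
shear_rate = np.linspace(0.1, 40.0, 10)
stress_up = 15.0 + 2.1 * shear_rate + 3.0 * np.exp(-shear_rate / 5.0)   # ascending ramp
stress_down = 15.0 + 2.1 * shear_rate                                   # descending ramp

def trapezoid(y, x):
    """Simple trapezoidal integration (kept explicit to avoid version-specific APIs)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hysteresis: area enclosed between the ascending and descending curves,
# reported relative to the area under the descending curve. A value near zero
# indicates that the "almost no hysteresis" requirement is satisfied.
gap_area = trapezoid(stress_up - stress_down, shear_rate)
relative_hysteresis = gap_area / trapezoid(stress_down, shear_rate)
print(f"hysteresis area = {gap_area:.2f}, relative = {relative_hysteresis:.2%}")
```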
Rheology Analysis in Early Stage of Aging

The flow performance in the early stage of aging was analyzed by mixing each of the selected materials in specific proportions. The objective of this analysis was to evaluate compatibility with four of the required properties: (1) no particle separation during the experiment; (2) a linear Bingham response over a wide shear strain range; (3) sufficient yield stress to prevent material separation of the aggregate; and (4) almost no hysteresis. Figure 3 shows the results of the experiment for each composition. Shear thinning, which refers to a decrease in plastic viscosity with increasing shear rate, occurred in all glycerol and corn syrup compositions in the case of meta kaolin [23][24][25]. In contrast, for silica powder, a shear thickening phenomenon, which refers to a rise in plastic viscosity with increasing shear rate, was observed in all compositions, together with a very low yield stress [26][27][28]. For limestone powder and blast furnace slag, a linear Bingham response was observed over the entire range of shear strains for all of the corn syrup and glycerol compositions. The compatibility of each composition in the early stage of aging could thus be assessed against the four requirements for a particle-phase standard reference material, as shown in Table 6.
From the results, limestone powder and blast furnace slag were found to be tentative cement substitutes that fulfill all of the requirements.

Evaluation of the Cement Substitute

Time elapse rheology analysis was conducted to evaluate rheological and chemical property changes over a longer period, beyond the properties required for a particle-phase standard reference material. This is an important factor that enables an evaluation of whether a standard reference material shows constant flow performance regardless of time. To conduct the time elapse rheology analysis and observe changes in the chemical properties, samples were produced by mixing each material and were then sealed and stored at room temperature.
First, rheology analysis was conducted for the limestone powder and blast furnace slag combinations selected from the initial aging flow performance analysis, immediately after mixing, on the third day, and on the fifth day, as shown in Figures 4 and 5. For all experiments, the samples were analyzed after remixing with a high-speed mixer, and multiple samples were produced to minimize changes in the mixing proportion. Generally, blast furnace slag is known as a material with latent hydraulic properties, which are exhibited in an alkaline environment and in coexistence with cement. Based on these characteristics, blast furnace slag had been selected as a tentative substitute for cement powder. As a result of the rheological analysis over time, however, the plastic viscosity of the combination with corn syrup on the third day was three times higher than that on the first day. Also, on the fifth day, the blast furnace slag mixture had hardened and measurement was impossible. The combination with glycerol was also found to have a higher plastic viscosity over time. In other words, blast furnace slag was judged to have undergone a chemical reaction due to its latent hydraulic characteristics over time and was excluded from the candidates for the cement powder substitute [29][30][31][32]. On the other hand, limestone powder demonstrated consistent rheological properties regardless of the passage of time, both in its mixture with corn syrup and in its mixture with glycerol. Limestone powder was eventually selected as the cement substitute based on the results of the rheological and chemical property analysis over a long period of time.

Evaluation on Matrix Fluid

The changes in samples produced to analyze the chemical properties according to the type of matrix fluid (corn syrup or glycerol) were observed using limestone powder, the selected cement substitute that showed consistent rheological properties over time. As shown in Figures 6 and 7, the mixture with the corn syrup matrix fluid began to show a chemical reaction from approximately two weeks after mixing. On the 30th day, discoloration from mold growth and other factors was observed on the surface of the sample.
On the other hand, the sample mixed with glycerol showed no change in chemical properties over the 30 days, and the initial flow performance condition was reproduced by re-mixing. Based on these results, the compatibility with the properties required of a particle-phase standard reference material was evaluated, as listed in Table 7. In the case of limestone powder, a chemical reaction occurred in the combination with corn syrup, but all the required characteristics were satisfied in the combination with glycerol. Blast furnace slag reacted chemically in all formulations, with both corn syrup and glycerol. Silica powder did not show a linear Bingham response in any formulation, with either corn syrup or glycerol, and exhibited a low yield stress and a chemical reaction.
Meta kaolin likewise showed no linear Bingham response in either formulation, with corn syrup or with glycerol, and a chemical reaction occurred in combination with corn syrup, as listed in Table 8. Based on the results of the rheological and chemical property analyses, glycerol was eventually selected as the matrix fluid substitute.

Analysis of Selected Constituent Materials by Different Type

The combination of limestone powder, glycerol, and water was found to be the most suitable for the properties required of a particle-phase standard reference material for cement paste. The standard reference material to be developed should show consistent flow performance within the error range, and for that purpose the particle size of the limestone powder and the grade of the glycerol need to be considered. In all of the experiments described above, Extra Pure (EP) grade glycerol and limestone powder with a 20 µm grain diameter were used. Limestone powder is currently manufactured in three particle sizes (1 µm, 10 µm, and 20 µm), while glycerol is produced in two grades: Extra Pure (EP) and Guaranteed Reagent (GR). Rheology analysis was therefore conducted according to the particle size of the limestone powder (1 µm, 10 µm, and 20 µm) and the glycerol grade (EP and GR), as shown in Table 8, to determine the changes in the flow performance of each type. For comparison with the other experiments, EP-grade glycerol was used in the analysis by limestone powder particle size, and limestone powder with a 20 µm particle diameter was used in the experiment by glycerol grade. The results are shown in Figures 8 and 9.
In the experiment by limestone powder particle size, the 1 µm powder showed shear thickening, i.e., a plastic viscosity that increased with increasing shear rate. These results suggest that the smaller the particle size, and hence the higher the particle concentration, the more pronounced the shear thickening; the smaller particle size is also believed to raise the plastic viscosity through the increased interfacial friction area [10]. The 10 µm and 20 µm limestone powders satisfied all of the requirements for a particle-phase standard reference material, although the plastic viscosity increased with decreasing particle size, which was again attributed to the larger friction area among particles [7]. In the experiment by glycerol grade, the plastic viscosity was slightly higher for GR than for EP, but both grades fulfilled the requirements for a particle-phase standard reference material. Overall, considering the average particle size of cement, the results recommend the use of limestone powder with a 20 µm particle size and either glycerol grade.
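Throughout these evaluations the acceptance criterion is a linear Bingham response, and the quantity compared across materials, ages, and particle sizes is the plastic viscosity. As a minimal sketch of how such flow-curve data can be reduced, the following Python snippet fits a Bingham line and, as a cross-check for shear thinning or thickening, a Herschel-Bulkley curve; the shear-rate and shear-stress values are hypothetical placeholders, not the measurements reported in the figures.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical flow-curve data (shear rate in 1/s, shear stress in Pa);
# in practice these would come from the rheometer sweeps shown in the figures.
shear_rate = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
shear_stress = np.array([12.1, 16.8, 26.4, 45.9, 65.2, 84.7, 104.1])

def bingham(gamma_dot, tau_0, mu_p):
    # tau = tau_0 + mu_p * gamma_dot : tau_0 = yield stress, mu_p = plastic viscosity
    return tau_0 + mu_p * gamma_dot

def herschel_bulkley(gamma_dot, tau_0, K, n):
    # n ~ 1 reproduces Bingham behavior; n > 1 signals shear thickening, n < 1 shear thinning
    return tau_0 + K * gamma_dot**n

(tau_0, mu_p), _ = curve_fit(bingham, shear_rate, shear_stress)
(hb_tau_0, K, n), _ = curve_fit(herschel_bulkley, shear_rate, shear_stress,
                                p0=[tau_0, mu_p, 1.0])

print(f"Bingham fit: yield stress = {tau_0:.2f} Pa, plastic viscosity = {mu_p:.3f} Pa*s")
print(f"Herschel-Bulkley exponent n = {n:.2f} (n close to 1 indicates a linear Bingham response)")
```

A fit of this kind, repeated on sweeps taken immediately after mixing and on later days, is one way the reported changes in plastic viscosity with elapsed time and with particle size could be quantified.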
Conclusions

This study aimed to develop a particle-phase standard reference material that can simulate the flow performance of cement paste, the primary component of concrete. Three constituents, namely a cement substitute, a matrix fluid substitute, and water, were selected for the composition of the standard reference material to be developed. Considering their physical and chemical properties, limestone powder, blast furnace slag, meta kaolin, and silica powder, which undergo almost no reaction with water, were selected as candidate cement substitutes. Corn syrup and glycerol, which have a consistent viscosity over time and are chemically stable, were selected as candidate matrix fluids.
In the experiments and analyses, compatibility with the properties required for a particle-phase standard reference material was assessed considering the mixture composition system (two-component or three-component), the stage of aging, the elapsed time, and the material type (particle size and grade), in order to obtain the final composition of the standard reference material for cement paste. The results can be summarized as follows:

(1) In the rheology analysis of the two-component composition system with limestone powder and water, double-sided non-linear response behavior (a deviation from the linear Bingham model) was observed. On the other hand, the three-component composition system with limestone powder, glycerol, and water satisfied all of the requirements for a particle-phase standard reference material. Therefore, a three-component composition was found to be appropriate as a standard reference material for cement paste.

(2) In the rheology analysis of the initial aging stage of each cement-substitute mixture, shear thinning occurred for meta kaolin and shear thickening occurred for silica powder. Both limestone powder and blast furnace slag fulfilled the requirements for the initial aging stage of the particle-phase standard reference material.

(3) Changes in the flow performance and chemical properties over time were analyzed. The plastic viscosity of the blast furnace slag mixtures increased with elapsed time, and strength developed in all of the blast furnace slag samples. In contrast, limestone powder showed consistent flow performance and chemical properties over time and satisfied all of the properties required for a particle-phase standard reference material.

(4) In the samples mixed with the corn syrup matrix fluid, a chemical reaction occurred, including mold growth and discoloration. The samples mixed with glycerol remained chemically stable over time.

(5) Rheology analysis was conducted to examine the changes in flow characteristics with the particle size of the limestone powder and the grade of the glycerol. The plastic viscosity increased with decreasing limestone particle size, and all samples except the one with a 1 µm particle size fulfilled the properties required for a particle-phase standard reference material. Both grades of glycerol satisfied the requirements.

(6) The composition of a standard reference material for cement paste, the primary component of concrete, was examined, and the combination of limestone powder as the cement substitute, glycerol as the matrix fluid substitute, and water was found to be the most suitable. Further research will be needed to propose constituent compositions of standard reference materials for mortar and concrete based on the mixture combination suggested by this study.
8,712.2
2018-04-01T00:00:00.000
[ "Materials Science" ]
Theta dependence in holographic QCD

We study the effects of the CP-breaking topological θ-term in the large Nc QCD model by Witten, Sakai and Sugimoto with Nf degenerate light flavors. We first compute the ground state energy density, the topological susceptibility and the masses of the lowest lying mesons, finding agreement with expectations from the QCD chiral effective action. Then, focusing on the Nf = 2 case, we consider the baryonic sector and determine, to leading order in the small θ regime, the related holographic instantonic soliton solutions. We find that while the baryon spectrum does not receive $\mathcal{O}(\theta)$ corrections, this is not the case for observables like the electromagnetic form factor of the nucleons. In particular, it exhibits a dipole term, which turns out to be vector-meson dominated. The resulting neutron electric dipole moment, which is exactly opposite to that of the proton, is of the same order of magnitude as previous estimates in the literature. Finally, we compute the CP-violating pion-nucleon coupling constant $\bar{g}_{\pi NN}$, finding that it is zero to leading order in the large Nc limit.

In the electroweak sector of the Standard Model, parity (P), time reversal (T) and charge conjugation (C) can be separately broken, while their combination (CPT) is preserved. Whether some of these discrete symmetries are separately broken also in QCD remains to be experimentally verified. Instantons in the model naturally induce a P- and T-violating topological term proportional to θ Tr F ∧ F, where F is the SU(3) field strength and θ is a parameter. In principle, nothing forbids θ from taking a generic value. However, experiments tell us that it should be extremely small. The strongest bound on its value comes from measurements of the neutron electric dipole moment (NEDM) $d_n$. Recent experiments [1,2] give $|d_n| \le 2.9 \times 10^{-26}\, e\cdot\mathrm{cm}$ (90% CL). The topological θ angle in QCD could provide the main contribution to the NEDM, since CP-violating effects from the electroweak sector give rise to a dipole moment which is orders of magnitude smaller than the above-mentioned experimental bound. A tentative order-of-magnitude theoretical estimate [3,4] gives $|d_n| \approx |\theta|\, e\, m_\pi^2 M_N^{-3} \approx 10^{-16}\, |\theta|\, e\cdot\mathrm{cm}$, where $m_\pi$ (resp. $M_N$) is the pion (resp. nucleon) mass. Put together with the above-mentioned experimental bound, this gives an unnaturally small value $|\theta| \le 10^{-10}$ for the topological parameter. This is the so-called strong CP problem, a possible theoretical resolution of which (a θ angle relaxing to zero dynamically) is provided by the Peccei-Quinn mechanism [5], which would imply the existence of axions [6,7]. From a theoretical perspective, studying how the θ parameter affects the physics of QCD requires going beyond perturbation theory. Lattice techniques find some limitations in this case, since the topological term is imaginary in the Euclidean Lagrangian and a sign problem arises.
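Before turning to the lattice and model approaches discussed below, the order-of-magnitude estimate quoted above can be checked with a few lines of arithmetic; the sketch below uses round values for the pion and nucleon masses and converts $m_\pi^2 M_N^{-3}$ to a length with ħc.

```python
# Order-of-magnitude check of |d_n| ~ |theta| e m_pi^2 / M_N^3 and of the bound on theta.
m_pi = 135.0       # pion mass, MeV
M_N = 939.0        # nucleon mass, MeV
hbar_c = 197.327   # MeV * fm

# m_pi^2 / M_N^3 has dimension 1/energy; multiplying by hbar*c gives a length.
length_fm = (m_pi**2 / M_N**3) * hbar_c   # ~ 4e-3 fm
length_cm = length_fm * 1.0e-13           # 1 fm = 1e-13 cm

print(f"|d_n| ~ {length_cm:.1e} * |theta|  e cm")   # ~ 4e-16, i.e. O(1e-16)

theta_bound = 2.9e-26 / length_cm          # impose the experimental NEDM bound
print(f"|theta| <~ {theta_bound:.0e}")     # ~ 1e-10 in order of magnitude
```

This is only the naive dimensional estimate; the holographic computation reviewed later in the paper refines the coefficient while staying in the same ballpark.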
While relevant results have been obtained by expanding, up to a few terms, around θ = 0 in the pure Yang-Mills case (see e.g. [8] for a detailed review on the subject), lattice estimates of CP-breaking observables in full QCD, notably estimates of the NEDM (see e.g. [9][10][11][12]), are still plagued by quite large systematic and statistical errors. In this perspective it is important to compare lattice results with model calculations. Famous results arise within chiral perturbation theory, where both the θ-dependent ground state energy density [13] and the NEDM, which turns out to be proportional to the non-derivative CP-violating pion-nucleon coupling $\bar{g}_{\pi NN}$ [14], have been computed. Within this approach only the pion cloud contributes to the NEDM, since massive (axial) vector mesons have been integrated out. Another model approach, complementary to the one above, consists in taking 't Hooft's large Nc limit, where Nc is the number of colors. This limit is known not to commute in general with the small quark mass limit in which chiral perturbation theory is organized. In the unflavored Yang-Mills case, relevant features of the θ-dependent ground state energy density have been first discussed in [13] and then explicitly realized, to leading order in θ/Nc, in a holographic Yang-Mills model in [15]. When Nc = ∞, mesons (and glueballs) are non-interacting and stable. At large, finite Nc, meson-meson couplings are found to be of order 1/√Nc, while baryon masses scale as Nc. This suggests that baryons can be seen as solitons in the effective large Nc mesonic Lagrangian [17]. This picture is actually realized within the chiral effective theory (the Skyrme model [18]), whose solitons are identified with the baryons. Static properties of nucleons with Nf = 2 massless (resp. massive) flavors have been studied in the seminal paper [19] (resp. [20]). In this context the NEDM has been computed both with Nf = 2 + 1 massive flavors [21] and in the Nf = 2 mass-degenerate case [22]. Differently from the chiral Lagrangian approach, in the Skyrme model virtual pion contributions to the NEDM are subleading in 1/Nc. This could be related to the large Nc scaling of the CP-breaking pion-nucleon coupling. A first estimate gave $\bar{g}_{\pi NN} \sim N_c^{1/2}$ [23], but a more careful analysis [24] suggested a markedly different scaling, $\bar{g}_{\pi NN} \sim N_c^{x}$ with x ≤ −1/2. A complementary check of the latter suggestion is clearly an interesting issue. Both the chiral Lagrangian and the Skyrme approach miss the effects induced by the whole massive (axial) vector meson tower. To overcome this and other limitations of the effective approach, we consider the large Nc QCD model by Witten, Sakai and Sugimoto (WSS) [25,26], where the θ-dependence can be studied from first-principles computations using the holographic correspondence. The WSS model is a non-supersymmetric SU(Nc) gauge theory in 3+1 dimensions, coupled to Nf quarks and a tower of massive (Kaluza-Klein) matter fields transforming in the adjoint representation of SU(Nc). In the regime where a classical dual gravity description is available, these massive fields cannot be decoupled and the UV behavior of the model departs markedly from that of real QCD. Despite this limitation, the WSS model exhibits, at low energy, all the crucial features, like confinement, chiral symmetry breaking and the formation of a mass gap, which appear in QCD.
The WSS model provides analytic control on, as well as simple geometrical descriptions of, these highly non-trivial non-perturbative effects. It remarkably contains, within a unique framework, different effective QCD models which have been built to describe specific sectors of the theory. This unifying perspective allows one, at least qualitatively, to go far beyond the limits of the various effective descriptions. In the original version of the model the quarks are massless. In this case, as expected from field theory, any θ-dependence is washed out by a chiral rotation of the quarks. We will discuss in the following how this is realized in the holographic model. A (small) mass term for the quarks can be introduced using a prescription suggested in [27,28]. We adopt that prescription and compute the ground state energy density of the model as well as the topological susceptibility and the pion and η masses as a function of θ, finding agreement with the chiral Lagrangian results (for a recent holographic derivation of these observables in a bottom-up model, see [29]). Then we focus on the baryonic sector. Just like baryons in the large Nc limit can be seen as solitons of the chiral Lagrangian, in the WSS model they are identified with instantons of the holographic Lagrangian describing the mesonic sector [30,31]. We compute the θ-corrected holographic instanton solutions, focusing on the Nf = 2 case, finding that the baryon spectrum does not get corrections to first order in θ. Currents are instead sensitive to CP-breaking effects. In particular, the dipole term in the nucleon electromagnetic form factor turns out to be different from zero and, as already pointed out in [32], exhibits complete vector meson dominance. We present a review and a detailed analysis of the computation of the NEDM reported in [32], complementing it with a novel study of the full momentum dependence of the dipole form factor. Finally, we compute the CP-breaking pion-nucleon coupling $\bar{g}_{\pi NN}$, finding that it is zero to leading order in the large Nc limit. The paper is organized as follows. In section 2 we review the main features of the original WSS model with massless flavors. In section 3 we recall how the U(1) axial anomaly and the chiral effects on the θ term are realized in the model. We also discuss a Horava-Witten-like solution of the anomalous Bianchi identities involved in the gravitational description of these effects. In section 4 we review the inclusion of the flavor mass term and in section 5 we discuss how it affects the θ-dependent vacuum. In section 6 we focus on the holographic description of baryons in the WSS model. After reviewing the soliton solution describing baryons and its quantization, we compute the shift in the baryonic Hamiltonian due to the θ angle and the flavor mass term, discovering that while the shift is of leading order in $m_\pi$, it is subleading ($\mathcal{O}(\theta^2)$) in θ. In section 7 we compute the leading order corrections (in θ and in the quark masses) to the instantonic solutions describing baryons. In section 8 we review and discuss how, focusing on the nucleon electromagnetic form factors, these novel solutions can be used to compute the neutron electric dipole moment. Moreover, we present a novel analysis of the full electromagnetic dipole form factor. Finally, in section 9 we focus on the CP-violating pion-nucleon coupling. We collect some further technical comments in the appendices.

Conventions.
Throughout this paper we use the conventions in [26] for the RR forms, scaling them with respect to the standard notation as where 2k 2 0 = (2π) 7 l 8 s gives the ten dimensional Newton's constant, τ p = (2π) −p l −(p+1) s is proportional to the Dp-brane tension and l s ≡ √ α is the string length. Witten-Sakai-Sugimoto model The WSS model is based on a D-brane setup in type IIA string theory. It consists of N c 1 D4-branes wrapped on a circle S x 4 [25] and N f D8−D8-branes placed at fixed antipodal points on the circle [26]. Along the circle, of length 2πM −1 KK , fermions obey anti-periodic boundary conditions. In such a way, at energies E M KK the original (4+1)-dimensional theory on the D4-D8 brane intersection, reduces to pure non-supersymmetric SU(N c ) Yang-Mills in 3 + 1 dimensions coupled to N f massless quarks. Other matter fields, transforming in the adjoint representation, get masses of the order of M KK . The holographic dual description of the above large N c QCD model simplifies if the quarks are treated in the quenched approximation and (unfortunately) if the spurious adjoint matter fields are not decoupled. In this case, the dual picture is provided by a classical gravity background sourced by the wrapped D4-branes and probed (without backreaction) by the D8-branes. The background The relevant type IIA gravity action, in string frame, reads Here F 4 = dC 3 is the RR four-form which is magnetically sourced by the N c D4-branes, φ is the dilaton and F 2 = dC 1 is the RR two-form which, as we will review in a moment, accounts for the topological θ term in the dual field theory. Neglecting its backreaction on the background amounts on working at small θ/N c and getting only the leading order corrections in this parameter [15]. In this paper we will work in this approximation. 2 Treating the F 2 form as a probe, the background has the following features [25]. The string frame metric reads 3) The dilaton and the four-form field strength are given by with the flux quantization condition fixing the value of R as In the formulae above, µ = 0, 1, 2, 3 are the 1+3 Minkowski directions where the Yang-Mills theory is defined, dΩ 2 4 is the metric of a four-sphere S 4 of radius one, U is the transverse radial coordinate U ∈ [U KK , ∞), x 4 is the compact coordinate of length 2πM −1 KK and R is a curvature radius. Moreover, g s is the string coupling and ω 4 is the volume form of the transverse S 4 , of volume V S 4 = 8π 2 /3. The isometry group of S 4 is mapped into a global SO(5) symmetry group in the dual field theory, which acts non-trivially on the adjoint Kaluza-Klein massive modes (signaling that these are, in fact, not decoupled in the limit we are considering). The S x 4 circle shrinks to zero size when U = U KK . Absence of conical singularities at U = U KK is guaranteed if the coordinate x 4 has period The resulting (U, x 4 ) subspace has a cigar-like shape. Most of the relevant physics in the model is captured by this geometry. Regularity and the property g 00 (U KK ) = 0 imply confinement and the formation of a mass gap in the dual field theory [25]. JHEP02(2017)029 The Yang-Mills theory dual to the above background has two distinct mass scales: the Kaluza-Klein scale M KK (which is also the glueball mass scale) and the string tension T s . Their ratio is determined by the parameter λ ∼ T s /M 2 KK . 
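Before moving on with the flavor sector, the multi-branched vacuum energy recalled above lends itself to a short numerical illustration: minimizing the quadratic branch energies over the integer k produces a function of θ that is periodic with period 2π and quadratic near the origin. The sketch below uses an arbitrary value for the topological susceptibility χ_g, since only the shape is of interest here.

```python
import numpy as np

chi_g = 1.0  # topological susceptibility of the unflavored model, arbitrary units

def vacuum_energy(theta, k_range=range(-5, 6)):
    # f(theta) = min_k (1/2) chi_g (theta + 2 pi k)^2, leading order in small Theta
    return min(0.5 * chi_g * (theta + 2.0 * np.pi * k) ** 2 for k in k_range)

for theta in np.linspace(-3.0 * np.pi, 3.0 * np.pi, 13):
    print(f"theta = {theta:+6.2f}   f(theta) = {vacuum_energy(theta):.4f}")

# The output repeats with period 2*pi; for theta in (-pi, pi) the k = 0 branch is selected
# and f(theta) ~ chi_g * theta^2 / 2, reproducing the small-theta expression above.
```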
Reliability of the background requires λ 1: this is a further indication that the spurious KK modes cannot be decoupled when the dual description sticks in the classical gravity regime. Reliability of the background also requires e φ to be small: when this condition is violated (namely, at large U ) we should better make use of the eleven dimensional ("M-theory") completion of the model, which is an asymptotically AdS 7 × S 4 solution of eleven dimensional supergravity [25]. The UV 't Hooft coupling and the Yang-Mills θ angle can be related to the gravity parameters by considering the low energy limit of the D4-brane action where F αβ ≡ 2πα F αβ is proportional to the gauge field strength, C 5 is the electric fiveform sourced by the branes (its field strength F 6 is the Hodge dual to F 4 ) and G αβ is the induced metric on the world-volume. Expanding the action to second order in derivatives, considering the UV asymptotics U → ∞ and integrating over the compact x 4 direction one gets the Yang-Mills Lagrangian and k is an integer. The second relation in (2.9) defines θ mod 2π integer shifts (since the integral of C 1 is gauge invariant only modulo 2πZ). Solving the equation of motion for C 1 , treated as an external field on the type IIA background given above, one finds, imposing (2.9), that Since this parameter depends on k, what we actually get on the gravity side is an infinite family of solutions corresponding to possible field theory vacua. This behavior precisely reflects the expected multi-branched structure [13] of the θ-dependent vacuum of the theory. Actually, following standard holographic rules [15], the field theory ground-state energy density f (θ) (related to the on-shell renormalized gravity action) reads, to leading order in Θ 1, Since Θ is proportional to θ +2kπ, for a given value of θ the true vacuum energy is obtained by minimizing the previous expression over k JHEP02(2017)029 As a result, the ground state energy density turns out to be a periodic function of θ, as expected [13]. To any given interval, of length 2π, of possible values of θ, it corresponds a precise value of k. For example, k = 0 when θ ∈ (−π, π) and so on. Notice that the probe approximation for C 1 requires that, in the k = 0 branch, This is actually one of the limits we will work with. When θ 1 the energy density reads with the topological susceptibility given by [15] See [16] for an exact-in-Θ analysis of the ground state energy density and many other physical observables. To simplify the formulae it is sometimes convenient to set M KK = 1 working in the following units . (2.16) Adding probe flavor branes Treating the N f flavor D8-branes as probes on the background requires taking (see e.g. [33]) and neglecting O( f ) corrections on the background fields. This is another limit in which we will work. 4 In the probe approximation, the background metric, dilaton and four-form RR field strength will be kept fixed as in (2.2), (2.4) so that the equations of motion to be solved for, arise from a string frame action of the form The F p+1 are the RR field strengths of the bulk C p forms while the F are the U(N f ) field strengths of the gauge fields living on the D8-branes, F = dA+iA∧A. Powers of differential forms are done by means of the wedge product. The symbol P[g] denotes the pullback of the metric on the D8 worldvolume and the symbol "STr " denotes the symmetrized trace on the gauge group indexes. 
JHEP02(2017)029 The energy density for the D8-branes (corresponding to the antipodal embedding on the S x 4 circle) is minimized by the u-shaped embedding x 4 (U ) = const. Its physical meaning is remarkable. The U(N f ) × U(N f ) symmetry on the antipodal D8−D8 branes, which in turn corresponds to the classical chiral symmetry group in the dual field theory, is broken to the diagonal subgroup since the two different branches actually join at the tip of the cigar in the background. This is how the holographic model realizes the spontaneous chiral symmetry breaking of the dual QCD-like theory. It is often more convenient to redefine the cigar coordinates in the following way [26] parameterizing the (ũ, ϕ) plane in Cartesian coordinates (y, z) y =ũ cos ϕ , z =ũ sin ϕ . (2.20) The cigar metric then reads with U given as a function of z and y and q(ũ) defined by q(ũ) = 1 Using these coordinates, the antipodal embedding just reads y = 0. Correspondingly, putting the S 4 components of the F field to zero, assuming that the other components do not depend on the S 4 angular coordinates, integrating over S 4 and expanding to second order in derivatives, the relevant action, from (2.18), reduces to and Among all the RR forms F p+1 in (2.18) we are keeping only F 8 (in S C 7 ), dual to F 2 . For a moment let us neglect the S C 7 term (we will discuss in detail its implications in section 3) and focus on the physical meaning of the remaining part of the action (2.22). It provides the holographic description to the mesonic sector of the model. Holographic mesons Let us consider inserting into (2.22) the following expansions for the gauge field JHEP02(2017)029 If we choose the functions φ n (z), ψ n (z) to form complete, suitably normalized sets, the fields ϕ (n) and B (n) µ get canonical mass and kinetic terms in four dimensions. In particular, we set From these conditions, as we review in appendix A, it follows that the B (n) µ modes correspond to massive vectors (resp. axial vectors), for odd (resp. even) n, with masses m 2 n = λ n M 2 KK . For example, B µ is identified with the ρ meson and B (2) µ with the a 1 meson. The scalar modes ϕ (n) for n ≥ 1 get eaten by the B Thus, a remarkable feature of the effective action (2.22) is the fact that it includes automatically, into a unified picture, the low lying modes and the whole tower of massive mesons. All the parameters in the meson action are fixed in terms of the few parameters of the model, i.e. N c , N f , M KK and λ. As we review in appendix A, the effective action for the pion precisely reduces to the chiral Lagrangian and the Skyrme model, with the pion decay constant f π and the coupling e defined as , (2.27) and the pion matrix given by The U(1) A anomaly and flavor effects on θ In this section we describe how the presence of massless quarks in the WSS model erases any physical effect of the θ parameter. Before the reduction on S 4 , the S C 7 term in the action (2.22) reads where we have introduced the one-form ω y = δ(y)dy, in order to extend the D8 integral to the whole spacetime. The equation of motion for C 7 reads By using the Hodge relation 5 we see that the equation of motion above is translated into an anomalous Bianchi identity dF 2 = Tr F ∧ δ(y)dy . JHEP02(2017)029 Notice that the "anomaly" is only driven by the Abelian component of the U(N f ) gauge field, i.e. the hatted field in the decomposition where T a are the SU(N f ) generators. 
We can formally solve (3.4) by writing Now, as it was already observed in [26], following the results in [34], this form is gauge invariant if we allow for the following combined gauge shifts This actually implies that when D8-branes are present, dC 1 is not a gauge invariant form. The correct gauge invariant combination isF 2 . Moreover, with a gauge shift on the Abelian component of the gauge field on the brane, the components of dC 1 along the cigar directions can be gauged away. Since the integral of dC 1 along the cigar gives the bare θ parameter of the theory, this implies, consistently with field theory expectations, that the bare θ parameter can be rotated away by a chiral U(1) A phase shift of the fermionic fields. This is explicitly realized by considering Integrating along the cigar, eq. (3.7) gives which corresponds to the shift θ → θ + 2N f α , (3.10) after recalling that a gauge transformation with Λ| z=±∞ = ± 2N f α is holographically on the fundamental fermionic fields [26]. Since with a chiral rotation the θ parameter can be rotated away, it is clear that when the model contains (even just one) massless flavors its topological susceptibility as well as any θ-dependence of its observables vanishes. A non-zero θ-dependence can be obtained when the quarks are massive, as in the real world. As we recall in appendix B, the action S C 7 is equivalent to (3.12) JHEP02(2017)029 Considering a zero mode for A z such that we see that using the integrated Bianchi identity forF 2 and its equation of motion d F 2 = 0, the on-shell value of the action above reduces to where χ g is the topological susceptibility of the unflavored model (2.15). As it has been observed in [26] this precisely gives the large N c estimate of the η mass predicted by the Witten-Veneziano formula Being explicit this gives, in our model, Hence, working in the probe approximation requires taking This is then another limit in which we are forced to work. What we just did can be understood in terms of a Stueckelberg mechanism, in which a massless vector field A M "eats" a scalar (from the D8 point of view) field C y . Acquiring a new degree of freedom A M becomes massive, hence explaining the mass of the η arising from the U(1) A anomaly. Horava-Witten solution of the anomalous Bianchi identity In general, the formal solution (3.6) of the anomalous Bianchi identity (3.4) does not solve the equation of motion d F 2 = 0. The main problem is the presence of the delta function. The present setup shares many common points with the Horava-Witten one [35]. As in that case, we can solve the Bianchi identity in a way which is compatible with the equations of motion by writing where Θ(y) is the step function, Θ(y) = |y|/2y, f M N are regular terms vanishing at y = 0 and f zy will be discussed in a moment. The extra terms f M N are necessary to satisfy the equation of motion d F (2) = 0. The Bianchi identity dF 2 = Tr F ∧ δ(y)dy is satisfied provided df = 0, hence one can always put JHEP02(2017)029 A solution f AB of the Bianchi-Maxwell system (i.e. the Bianchi identity and the equation of motion forF AB ), provided it exists, is not unique. In fact it is always possible to add to a given solution the zero mode satisfying df (0) = 0 and d f (0) = 0, for any value of the constant C. Let us thus write which is a consistent boundary condition. 
Now, the constant C acquires a physical meaning (in terms of the θ parameter) after imposing the boundary condition which gives Let us now add that lim | x|→∞ F M N = 0: from this it follows that the limit | x| → ∞ of the Bianchi-Maxwell system is linear in f AB , hence we can find solutions that vanish at spatial infinity. 6 Moreover, whatever the explicit form of f Recalling that the mixing between the equations is schematically given by 7 we see that y has to be zero, hence only the zero mode f AB , even though we will not need it. According to the observations above, the solution is antisymmetric in y and thus we can first solve the Bianchi-Maxwell system for y > 0 and then continue the solution for negative values. At this point the existence of the solution is obtained JHEP02(2017)029 by a counting: there are three independent equations, while the unknowns are the g A , (3.27) analogously to (3.19). The independent components are three because the Lorentz symmetry relates the µ indexes. The system is solvable having the same number of components and unknowns. WSS model with massive fermions In view of the relation (2.28), defining the pion matrix as a path ordered holonomy matrix and in analogy with the chiral Lagrangian approach, a natural term to add to the effective action (2.22) in order to describe massive quarks is where c is a constant and M is the mass matrix. This term has actually a very precise meaning in string theory [27,28]: it is the deformation due to open string worldsheet instantons stretching between the D8-branes. A basic observation in [27] is that the U(N f ) holonomy matrix U which is the order parameter for chiral symmetry breaking, is not gauge invariant, when embedded in the full string theory model, under gauge transformations of the NSNS B-field. A gauge invariant object can be obtained by multiplying U by e i B where the integral is done over the cigar directions of the background. A way to construct an operator carrying such a phase is to insert an open fundamental string (actually a worldsheet instanton) stretching between the branes. The string worldsheet will be extended along the cigar directions U, x 4 from U = U KK up to a cutoff U = U m which will set the quark bare mass parameter. Introducing such a worldsheet instanton corresponds to deforming the dual gauge theory by a non-local mass term for the fermions. The Nambu-Goto part of the open string action is put on-shell and its exponentiation contributes to the constant c and the mass terms. What remains is just the boundary interaction of the open string with the gauge fields on the D8-branes. The constant, up to an irrelevant normalization factor, reads 8 When the mass term (4.1) is added to the original WSS model, in such a way that all flavor fields get masses, we should expect that the θ dependence emerges again. This is actually what happens. As reviewed in sections 2 and 3, the θ term can be introduced as an integral of C 1 and then removed (in absence of flavor mass terms) via a gauge shift (3.8) JHEP02(2017)029 After this shift, however, S mass becomes The θ-dependence is thus not erased anymore. Moreover, as expected in QCD, the physical θ parameter is not just the coefficient of F ∧ F but the combination In the following we will mostly focus on the mass-degenerate case M ij = m q δ ij , choosing m q to be real. θ dependence of the vacuum energy Let us now see how the mass deformation introduced above modifies the vacuum solution. 
Let us first notice that the chiral condensate satisfies the Gell-Mann-Oakes-Renner (GMOR) relation [27,28] where c is defined in (4.2). In the particular mass-degenerate case M ij = m q δ ij the GMOR relation implies that In this case the minimum of the energy is found by setting the non Abelian component of the gauge field A to zero modulo gauge transformations. The vacuum will then be described by a pure gauge solution F = 0. The only relevant part of the effective action determining the vacua is that for the A z Abelian component. 9 Together with (2.22), (3.14), the action (4.4) gives Notice that at y = 0, where the gauge fields are defined, from the Horava-Witten-like solution (3.18) it follows that the equation of motion for Aµ does not receive any contribution from SC 7 . This is true for the two following reasons: a) the metric on the cigar directions (y, z) is diagonal at y = 0; b) we are setting Fµy = 0. As a consequence, only the equation of motion for Az receives a contribution from SC 7 via the zero mode components ofFyz. All this will apply also to the instanton solutions we will look for in the following. JHEP02(2017)029 The vacuum solution F µz = 0 can be given in terms of From the equation of motion of A z we actually get the following condition in the massdegenerate case where, also recalling (2.27), we have used κm 2 π = πcm q and eq. (3.16). Equation (5.6) is precisely the same which follows from the chiral Lagrangian approach discussed in [13] (see also [36] for a review). The on-shell four dimensional Lagrangian density on the vacuum solution we have found is We can extract the vacuum solution analytically by considering the following two extreme cases: This is the limiting case which arises if we take the large N c limit before the chiral one. In a sense, this limit is analytically connected with the limit in which the quark mass is so large that the flavors can be integrated out. Correspondingly, the vacuum energy density around θ = 0 goes, to leading order, like which is the same behavior (2.14) as for the unflavored theory. ii) m 2 W V m 2 π : in this limit the solution is unique This limit is actually closer to the phenomenologically acceptable case because m π 135 MeV while m W V ∼ m η 958 MeV. In this case the vacuum energy density f (θ) reads The topological susceptibility of the theory is thus as expected from chiral perturbation theory. JHEP02(2017)029 In any case, expanding the effective Lagrangian (5.4) around the vacuum solution (5.6), we can obtain the following θ-dependent mass spectrum which implies that the masses of the low lying mesons decrease quadratically with θ for small θ. This behavior reflects the general trend already observed in [16] for other mass scales in the unflavored theory. Holographic baryons In this section we first review how baryons are described in the WSS model, recalling the quantization of the moduli space Hamiltonian. Then, we show that the correction to the baryon spectrum due to massive quarks and the θ term is quadratic in θ. In the WSS model, following [37], a baryon vertex is identified with a D4-brane wrapped on S 4 and the baryon number is defined as the charge of that brane. Adding a D4-brane source to the WSS setup implies including a term into the action. This in turn implies that a baryon corresponds to a soliton solution F with non trivial instanton number where B is the space spanned by x 1,2,3 , z. The instanton number n B is then interpreted as the baryon number [30]. 
To show that this is indeed the case, let us write down the original WSS action (eq. (2.22) without the S C 7 term) separating the Abelian and the non Abelian components (see (3.5)) Here ω is defined as in (2.24), written in terms of just the non Abelian components. It is worth noticing that it is identically zero for N f = 2. Defining a(t) = A/ 2N f and treating it as a time dependent perturbation over the soliton solution with instanton number n B , we obtain in the action a term JHEP02(2017)029 This describes a point-like particle with U(1) V charge equal to N c n B : precisely that of a baryon (a bound state of N c quarks) with baryon number n B . The above holographic picture resembles the Skyrme one, where baryons at large N c are seen as solitons in the chiral Lagrangian [18,19]. The similarity becomes more evident at low energies, since, integrating out the massive vector modes, the effective WSS action reduces to the Skyrme model with the WZW term. The equations of motion following from (6.3) are not easy to solve analytically. A simple static instanton solution, for N f = 2, can be given focusing in a tiny region around z = 0 where one can neglect the curvature of the background setting k(z) ≈ h(z) ≈ 1. In this case the solution is given by a charged BPST instanton [30,38] A cl (6.6) τ a are the Pauli matrices and the index M runs over the four directions x 1,2,3 , z. The instanton solution written above depends on eight parameters: the instanton size ρ, the instanton center of mass position X M = ( X, Z) in the four dimensional Euclidean space, and three SU(2) "angles" related to the fact that the solution can be rotated by means of a global gauge transformation. Substituting the solution (6.5) into the action (6.3) on finds with M 0 giving the baryon mass in the λ → ∞, N c → ∞ limit. This implies that, while X and the gauge group orientations are genuine moduli of the instanton solution, ρ and Z are not; in fact they are classically fixed by minimizing M B as These relations imply that the center of the instanton is classically localized at Z = 0 and its size ρ ∼ 1/ √ λ is very small in the λ 1 regime. This is perfectly consistent with the approximation we have taken to get the above instanton solution. In particular, the latter is obtained by a systematic expansion of the equations of motion in 1/λ, considering a scaling x, z ∼ O(λ −1/2 ), x 0 ∼ O(1) for the space-time variables, and the following scalings for the gauge fields JHEP02(2017)029 In the following discussion we will treat ρ and Z as approximate moduli, allowing them to fluctuate quantistically around their classical values. This is not completely correct because they modify the potential energy, but it remains a good approximate description if the fluctuations are small. Quantization The quantization of the WSS soliton proceeds following the moduli space approximation method as described in [30] and takes inspiration from the Skyrmion quantization [19]. Since M 0 = 8π 2 κ ∝ λN c 1, the baryon is very heavy and the system reduces to a quantum mechanical model for the instanton (pseudo) moduli. In the SU(2) case the oneinstanton moduli space, topologically equivalent to R 4 × R 4 /Z 2 , is parameterized by X M and y I , (I = 1, 2, 3, 4) with the Z 2 action y I → −y I . The instanton size ρ is given by ρ 2 = y I y I and a I = y I ρ −1 are the SU(2) directions. 
Technically, the above parameters are promoted to time dependent variables and the SU(2) field describing the slowly moving soliton is defined through a "wrong" gauge transformation (6.10) The SU(2) matrix V (t, x, z) is necessary for ensuring that the new time-dependent soliton still solves the equations of motion following from the action (6.3). The only non trivial condition comes from the Gauss's law constraint which actually reduces to D M F 0M = 0 on the solution. This equation can be solved by where the dot is a time derivative, f (ξ) and g are defined in (6.6), a(t) = a 4 (t) + ia a (t)τ a contains the gauge group orientation moduli and the boundary condition has been imposed. Inserting the slowly moving soliton solution into the action (6.3) one gets the quantum mechanical Lagrangian and thus the Hamiltonian JHEP02(2017)029 where They respectively describe a free particle in three dimensions, a harmonic oscillator in one dimension and a harmonic oscillator in four dimensions with an extra centrifugal energy. The eigenfunctions and eigenvalues for the first two pieces are [30] where H (n) are Hermite polynomials. Concerning the third one, switching to spherical coordinates in R 4 , the Laplacian decomposes as 19) and the obvious ansatz for Ψ 3 (y I ) is where Y ( ) are the scalar spherical harmonics on S 3 with eigenvalue ( + 2). Such a wave function has spin and isospin equal to /2 where the spin and isospin operators are identified with the generators of the SO(4) symmetry group acting on the y I These relations imply that only states with I = J appear in the spectrum. A crucial observation is that a I and −a I are identified on the instanton moduli space. If we want to quantize the solitons as fermions we have to require the wave function to be antiperiodic ψ(a I ) = −ψ(−a I ). This selects = 1, 3, 5, · · · to be positive odd integers. The related states have I = J = /2. The solution for R(ρ) can be found by noticing that the centrifugal term in H y modifies the angular momentum as JHEP02(2017)029 Thus, upon substituting →˜ we end up with a regular harmonic oscillator in four dimensions in spherical coordinates. The solution is is the Confluent Hypergeometric Function. The corresponding eigenvalues are A baryon is a state |B, s in the Hilbert space defined by the Hamiltonian H, where s is the (iso)spin of the baryon. The quantum numbers n ρ and n Z describe excited baryons and/or resonances; the case = 1, n ρ = n Z = 0 corresponds to the neutron (with isospin component I 3 = −1/2) and the proton (I 3 = 1/2) and the corresponding wavefunctions are (6.26) Baryon Hamiltonian with quark mass and θ Let us now consider adding to the action (6.3) the mass term for the flavors introduced in (4.1) at θ = 0. This term gives a novel contribution to the baryon Hamiltonian and modifies the WSS soliton solution. At leading order in the small m q limit (let us focus on the simpler case of degenerate quark masses), the contribution can be computed, along the same lines as in [39], from the on-shell value of on the WSS instanton soliton solution (6.5). Here the 1 subtraction corresponds to the subtraction of the vacuum energy (in the case of degenerate masses the minimum is for U = 1), while e iϕ comes from the vacuum θ-dependent contribution discussed in section 5. 
Let us work in singular gauge, where the A cl z field is given by which is obtained from (6.5) after implementing a gauge transformation A cl The pion matrix is easily computed (we also set X = 0 without loss of generality) as JHEP02(2017)029 The shift in the baryon mass δM B is given by Let us now focus on the N f = 2 degenerate case in the physical mass regime m π m W V , so that, as we found in section 5, we can set ϕ = θ/2 up to subleading corrections in the mass ratio. We define the integration variable y = | x|/ρ and get The integral is evaluated numerically and the final result is The quantum contribution to this mass splitting, that differentiates the various species of baryons, follows in the same way as in [39], so we will skip it. A relevant result of this section is that the baryon Hamiltonian, hence the spectrum, through the mass term piece δM B computed above, gets second order O(θ 2 ) corrections at small θ. The mass splitting δM B at θ = 0 will anyway perturb some of the baryonic properties. In the semiclassical limit it will in fact affect the size of the baryon ρ which will get an O(m q ) correction. When two different quark masses m u , m d are considered the result is modified. First of all we should impose that the pion matrixÛ = e i θ 2 U approachesÛ 0 when | x| → ∞ (the vacuum configuration). The matrixÛ 0 turns out to be: The classical action has to be modified as 35) and the solution A cl z must be computed after a global gauge rotation that satisfies lim |x|→∞ U = U 0 (we could take for instance g(∞) = U 0 and g(−∞) = 1). The result follows easily: An interesting feature of the non-degenerate mass case is that the SU(2) modulus a gets a potential term δM B ∝ Tr M aU 0 a −1 , (6.37) thus giving a mass splitting between states with different isospin. For the case of the proton and the neutron this splitting would be too small compared to the electromagnetic splitting (not included in this analysis), so we ignore this computation. JHEP02(2017)029 7 Mass and θ perturbations to holographic baryons Let us now show how the original WSS instanton solution holographically describing a baryon gets modified by the mass and the θ term. This is done at leading order both in m q and θ. For simplicity we mostly focus on the case of two degenerate masses m u = m d . The equations following from the action given by the sum of (2.22) (with S C 7 given in (3.1)) and (4.1) are The factors N f are displayed explicitly but will soon be substituted by "2". The solution will be decomposed in three different contributions: A vac , A inst and A mass . The first one is the vacuum solution found in section 5 The second one, in the ξ 1 region, is the WSS instantonic solution (6.5) in singular gauge The solution in the remaining range of ξ values will be presented in a moment. The last piece, A mass , is the perturbation due to the presence of the mass term that we wish to compute. Since we are looking for solutions with non trivial field strength F , the components of the RR two-form [F 2 ] AB with A, B = y are given by the Horava-Witten solution (3.18). The component zy, instead, will be kept to be the same as in the vacuum To determine the perturbation A mass , we expand the equations of motion to first order in m q (with A mass being of O(m q )). The resulting equations for the mass perturbation will JHEP02(2017)029 be mixed by the presence of the Chern-Simons terms, making it very difficult to find a solution. The following arguments will enable us to simplify the problem. 
There are three different regions in which we can divide the space: ξ 1, ρ ξ 1 and ρ ξ. 10 We will call them respectively the flat, the overlapping and the asymptotic region. The flat region is where the curvature of the metric can be neglected. This is where the WSS BPST-like instanton solution (6.5) has been obtained. This solution has the scaling with λ reported in (6.9). In the asymptotic region the original WSS instanton solution gets modified. Far from the origin the warp factors k(z) and h(z) cannot be neglected anymore and the asymptotic solution, 11 in singular gauge, that replaces A inst in (7.6) reads 4π r , (7.10) and the functions ψ n , φ n are the same that have been introduced in the meson sector in section 2.3 (see also appendix A) and λ 0 ≡ 0. Actually, since the asymptotic expansions above contribute to the currents in the WSS model [41], they account for the meson contributions to e.g. the form factors. From (7.9) we see that there is a suppression of an overall λ factor for each field; moreover the functions G( x, z, X, Z) and H( x, z, X, Z) are of order ∼ e −r in r, ∼ 1/z in z and ∼ 1/r in r, ∼ 1/z 2 in z respectively. In the overlapping region the solution is again (7.9) but with the functions G and H replaced by the flat Green's function G flat = −1/4π 2 ξ 2 ; the maximum value of the fields is reached when ξ approaches ρ, so the scaling is precisely (6.9), but here this behavior is reached as an upper limit (see table 1). 10 There is also another "large scale" region ξ > log λ/MKK , beyond the asymptotic one, where nonlinear effects become important, for example for the computation of the baryon charge form factor at large distance [40]. The existence of this large scale is ignored in the present computation and it does not affect the neutron electric dipole moment computation in the following section, at least for large λ. 11 This is a little bit different from the one in [41] because we have not considered the gauge group orientation moduli yet (this will be done in the following section); moreover here all the moduli of the solution are time independent. JHEP02(2017)029 Flat Overlapping Asymptotic Region ξ 1 ρ ξ 1 ρ ξ Solution BPST instanton function G flat functions G and H Scaling λ scaling λ scaling (limit) z and r scaling With this in mind let us look at the Chern-Simons terms in equations (7.1)-(7.4); in the asymptotic region all of them will be negligible as they are quadratic in the fields, in the other two regions however some of them have to be considered. If we look at (6.9) we conclude that, whenever an A 0 is present in a Chern-Simons term, its λ scaling is lowered, so the leading terms will be those with µ = 0. In fact in the equations for the µ = 0 components all terms are of the same order in λ, while in those for the µ = i or z components, the Chern-Simons terms happen to be suppressed as 1/λ with respect to the Yang-Mills terms, hence we will drop them in the following. Now we are ready to write down the equations for the mass perturbation (gauge fields without superscript are A inst or the ones in (7.9), our convention is ε 0123z = −ε 0123z = 1). Up to subleading terms they read where and · · · denote terms which do not contribute to the trace. The notation mass means "pick up the linear contribution in m q ". For now we work in the static gauge and we admit no time dependence for A mass (so the indexes "ν" in the equations above become "j"). 
The above system of equations can be divided into four parts: i) Abelian space component equations (7.12), (7.15). Abelian field: space components A consistent solution to the set i) can be found with the ansatz A mass i = 0. We will verify in the end this assumption. Let us first notice that (7.15) can be rederived starting from the effective action for the Abelian component A z , which, to first order in the mass deformation, reads (7.18) Focusing on the N f = 2 mass degenerate case and using the condition (5.6), we see that the equation of motion (7.15) reads Writing the equation as above, we have neglected the mass term for A mass z , which would arise from the effective Lagrangian (7.18). Recalling that A z dz is holographically related to the η field, we see that this term actually corresponds to the η mass. To leading order in the small quark mass limit, the latter is given by the Witten-Veneziano relation (3.16), which shows, in turn, that the squared η mass is a parameter of O( f ). Since we are working in the probe approximation, the η mass term is thus subleading. We will return to this point in section 8.5 where we will see that the η mass term can be used to regularize the integral which defines the full electromagnetic dipole form factor. (7.21) When r → ∞ the function α approaches a constant α → π, so the source term vanishes. The standard way to solve this equation is to use the Green's function The solution is given by the following integral u(r) = 2cm q κ sin ϕ JHEP02(2017)029 The above solution is sufficient to identically solve equation (7.12), hence we can put A mass i to zero: the ansatz claimed at the beginning was correct. It may be interesting to see the asymptotic solution for large λ. Changing variables r = ρy, since ρ tends to zero, from (7.23) we get that far away from r = 0 the solution can be approximated by In the following we will focus on the phenomenologically acceptable regime m π m η where (for N f = 2) ϕ ≈ θ/2. Non Abelian field: time component Let us now look at equation (7.13). To first order in m q the equation for the perturbation is the following r . (7.25) In static gauge the only field excited by this perturbation is A mass 0 . Let us consider the following ansatz When plugging this ansatz into the equations, the ( x − X) · τ piece factorizes and we are left with a partial differential equation for W 27) It is worth noting that this equation has been derived using as background the BPSTlike instanton solution (6.5), valid in the "flat" part of the geometry. Nevertheless, one can check that in the "asymptotic" region one would obtain precisely the expansion of equation (7.27) for large z. Thus, this equation is correct in the whole range of the radial variable. There are two possible approaches that can be used to solve equation (7.27): a) numerical PDE analysis; b) expansion in the eigenfunctions ψ n . The latter, which we are going to describe here, provides interesting insights about the physical content of our results [32]. The direct numerical analysis will be used later in the review of the calculation of the NEDM. The last term in the l.h.s of eq. (7.27), being essentially the l.h.s of the eigenvalue equation for the ψ n (2.26), suggests an expansion of the form W (r, z) = ∞ n=1 R n (r)ψ n (z) . 
(7.28) JHEP02(2017)029 Inserting the expansion into the equation, using the eigenvalue equation (2.26) and the orthonormality conditions on the ψ n we find [32] With the solution of (2.26) and (7.21) in hand, one can obtain an approximate solution of the above system by truncating it at some level m. The solution in the "flat region" In order to gain intuition on the physical meaning of the solution, let us consider the flat region around z = 0, where we can neglect the curvature effects driven by the functions h(z), k(z). In this limit the equation (7.27) reads (7.31) Let us also consider the r 0 limit, where the function u(r) is given by eq. (7.24). In this limit a solution of the above equation is simply As a result we can write (setting X = 0, which we can do without loss of generality) The above expression recalls that of an electric dipole term in the five dimensional space (at z = 0) induced by the θ parameter. As we will see in section 8, this is precisely what contributes to the electric dipole term in the dual four dimensional gauge theory. Non Abelian field: space components The solutions we have discussed above exhaust the list of leading O(θ) corrections to the original WSS instanton solution. At first order in m q , however, we have also to consider the corrections coming from solutions to the non Abelian equations (item ii) in the list given JHEP02(2017)029 above). Since in the present work we are mainly interested just in the O(θ) corrections, we present here the formal solutions to those equations discussing only their algebraic structure. Before expanding in m q , the equations we have to consider read Let us first rewrite the background instanton fields as where the η a M N are the 't Hooft symbols, which constitute a basis for the self dual tensors. The above solution represents an instanton with instanton number +1. The anti-instanton is given by the same expression with η replaced by η, where Our ansatz will be composed by two functions, one modifies the f 0 and the other will be an extra contribution to A z A a M = −η a M N ∂ N (log f 0 (ξ) + φ(r, z)) + δ M z ∂ a ψ(r) . (7.38) Notice the different arguments in φ(r, z) and ψ(r): we will see later that this is the correct assumption. These two functions have to be regarded as O(m q ), so the resulting equations will be linear in them (of course the zeroth order is already satisfied by f 0 ). The most lengthy part now consists in putting the ansatz above into equations (7.35) and write down the equations for φ and ψ. Let us first focus on the tensor structure With the ansatz φ(r, z) With the ansatz ψ(r) D M F a M i = −ε aij x j φ eq. , D M F a M i = ε aij x j ψ radial eq. , D M F a M z = x a φ eq. , D M F a M z = x a ψ zeta eq. . JHEP02(2017)029 The third derivative comes from the fact that in our definition of A M only the derivatives of φ and ψ enter. The actual variables thus are Φ ≡ φ (ξ) and Ψ ≡ ψ (r). The equations we were looking for finally read − φ eq. + ψ radial eq. = 0 , φ eq. + ψ zeta eq. = 2cm q κ cos θ 2 Combining these equations one gets Notice that in the first one the ξ dependence completely disappears. It is an ODE that can be easily integrated numerically. In the general case φ has to be regarded as a two-variable function φ(r, z). Remarkably, as stated above in (7.39), also in this case we have a very simple tensor structure and a dependence on only one parenthesis φ eq. , so all the manipulation made above are still valid. 
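As an illustration of the numerical strategy behind the mode expansion (7.28), the following sketch projects a generic source onto the eigenfunctions ψ_n and solves the resulting radial equations mode by mode, truncating the tower at a finite level. It is only a schematic template: the explicit radial operator of eq. (7.27) and the true eigenfunctions are not reproduced in this excerpt, so a placeholder operator, a toy source and a toy mode are used, all of which would have to be replaced by the actual WSS expressions.

```python
import numpy as np

# Assumed inputs: a z-grid, the weight h(z) entering the orthonormality
# condition kappa * int dz h(z) psi_n psi_m = delta_nm, and precomputed
# eigenfunctions psi_n(z) with eigenvalues lam_n (see the eigenvalue problem
# (2.26) discussed in appendix A).  Everything below is a placeholder sketch.
z = np.linspace(-40.0, 40.0, 4001)
h = (1.0 + z**2) ** (-1.0 / 3.0)      # WSS warp factor (assumed explicit form)
kappa = 1.0                            # overall normalization set to one here

def project_source(source_rz, psi_n, r_grid):
    """Mode amplitude s_n(r) = kappa * int dz h(z) psi_n(z) S(r, z)."""
    S = source_rz(r_grid[:, None], z[None, :])            # shape (Nr, Nz)
    return kappa * np.trapz(h * psi_n[None, :] * S, z, axis=1)

def solve_radial_mode(s_n, lam_n, r_grid):
    """Solve a *schematic* radial ODE
           R'' + (2/r) R' - 2 R / r^2 - lam_n R = s_n(r)
    with R(0) = R(r_max) = 0 by a simple finite-difference linear solve.
    The true operator following from eq. (7.27) should replace this one."""
    Nr, dr = len(r_grid), r_grid[1] - r_grid[0]
    A = np.zeros((Nr, Nr))
    for i in range(1, Nr - 1):
        r = r_grid[i]
        A[i, i - 1] = 1.0 / dr**2 - 1.0 / (r * dr)
        A[i, i]     = -2.0 / dr**2 - 2.0 / r**2 - lam_n
        A[i, i + 1] = 1.0 / dr**2 + 1.0 / (r * dr)
    A[0, 0] = A[-1, -1] = 1.0                              # Dirichlet boundaries
    rhs = s_n.copy()
    rhs[0] = rhs[-1] = 0.0
    return np.linalg.solve(A, rhs)

# Toy usage: truncate the tower at a single mode.
r_grid = np.linspace(1e-3, 30.0, 1500)
toy_source = lambda rr, zz: np.exp(-rr) * zz / (1.0 + zz**2) ** 2   # placeholder
toy_psi = z / (1.0 + z**2)                                          # NOT a true psi_n
R1 = solve_radial_mode(project_source(toy_source, toy_psi, r_grid), 1.0, r_grid)
W_truncated = R1[:, None] * toy_psi[None, :]    # W(r,z) ~ sum_n R_n(r) psi_n(z)
```

Truncating at level m then simply amounts to repeating the projection and the radial solve for the first m eigenfunctions and summing the products R_n(r) ψ_n(z), which is the procedure the text refers to.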
In this case however the equation is far more complicated where for φ (i,j) we mean ∂ i r ∂ j z φ(r, z). The final equation to be solved is where Ψ(r) is substituted by the solution found above. This equation can be integrated via numerical methods, even though now we are dealing with a PDE which is certainly more challenging. We will not show the numerical results here because the only purpose of this section is to show what is the correct tensor structure of the solution and how to get it. Abelian field: time component Let us finally consider equation (7.11). In the static case, on the A i mass = 0 solution, it reduces to an equation for A 0 The solution, of the form A 0 mass = f (r, z) , (7.46) JHEP02(2017)029 can be obtained after the equations for the spatial components of the non-Abelian field are solved, in the way we have described in the previous subsection. Precisely as those components, the field A 0 mass will be of O(θ 2 ) in the small θ regime. The neutron electric dipole moment In a theory with spin 1/2 particles where parity, time reversal and/or charge conjugation symmetries are not preserved, the form factors acquire novel contributions w.r.t. the cases with unbroken discrete symmetries. For example, the matrix element of the electromagnetic current between nucleon states of mass M N in the generic case reads (see e.g. [8] and references therein) where k = p − p, u s is a Dirac spinor with spin component s and Here, F 1 and F 2 are the standard (C,P,T even) Dirac and Pauli form factors: when k 2 → 0 F 1 (0) gives the electric charge of the fermion and F 2 (0) gives the anomalous part of the magnetic moment. The novel contributions are the dipole (F 3 ) and the anapole (F A ) form factors. When k 2 → 0, F A (0) gives the (T-and C-breaking) anapole moment and F 3 (0) gives the (T and P-breaking) electric dipole moment (EDM). In particular, the nucleon EDM reads The QCD Lagrangian with non zero θ parameter is invariant under charge conjugation and thus the corresponding anapole term vanishes (anapole moments can be induced by electroweak effects). The dipole form factor, instead, is expected to be proportional to θ, in the θ → 0 limit. As an example of application of (some of) the instantonic solutions found in section 7, in this section we review and discuss in details the holographic computation of the neutron electric dipole moment (NEDM) performed in [32]. Moreover, in section 8.5 we report the computation of the whole form factor F 3 (k). NEDM state of the art Permanent electric dipole moments of composite or fundamental particles with spin are sensitive observables of CP-violating effects in nature. The electric dipole couples to the electric field in the standard way E · d. For a neutral particle, like the neutron, the dipole has to be proportional to the spin, which is a pseudovector, so that E · d is odd under parity and time reversal. JHEP02(2017)029 Experimentally the electric dipole moment of a particle can be obtained by exposing it to an electro-magnetic field and measuring the Larmor frequency shift as the directions of the electric and magnetic fields are flipped. For neutral particles the measurement is much easier, since charged ones are accelerated by the electric field and would better require storage ring experiments. 
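Before turning to the history of the measurements, it may help to quantify the precision that the Larmor-frequency method requires. For a spin-1/2 particle in parallel electric and magnetic fields the resonance condition is hν = 2μB + 2dE, so reversing the electric field shifts the frequency by Δν = 4dE/h. The short estimate below (an illustration added here, not part of the original analysis) evaluates this shift at the experimental bound quoted in the next paragraph, |d_n| ≤ 2.9 × 10⁻²⁶ e·cm, for a representative field of 10 kV/cm (an assumed value).

```python
# Order-of-magnitude estimate of the Larmor frequency shift probed in NEDM
# experiments when the electric field is reversed: Delta_nu = 4 d E / h.
h_planck = 4.135667696e-15      # Planck constant in eV * s
d_n      = 2.9e-26              # EDM bound in units of e * cm
E_field  = 1.0e4                # assumed electric field: 10 kV/cm, in V/cm

# d (e*cm) times E (V/cm) gives an energy in eV, since e * V = 1 eV
delta_nu = 4.0 * d_n * E_field / h_planck
print(f"frequency shift at the current bound: {delta_nu:.1e} Hz")   # ~ 3e-7 Hz
```

A shift well below a microhertz has to be resolved, which is why modern experiments rely on stored ultracold neutrons and very long spin-coherence times.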
The history of the measurement of the neutron electric dipole moment finds its roots in the work by Purcell and Ramsey in 1950 [42]; since then many experiments followed, but no evidence for the NEDM has been found so far and the latest experimental upper bound is tiny, |d n | ≤ 2.9 × 10 −26 e · cm (90% CL) [1,2]. This bound on the NEDM is a relevant constraint to take into account when formulating theories beyond the Standard Model (bSM). This is because in most bSM scenarios many new CP-violating effects can arise providing possibly larger NEDM than the tiny Standard Model predictions. Hence any limit on the NEDM leads to bounds on the scales of new physics. In principle, the NEDM can be computed by where |n, s is neutron state with spin s and J em the electromagnetic current. In practice, computing the above matrix element requires using non perturbative tools. As we have recalled in the Introduction, the first order-of-magnitude theoretical estimate for the θ angle contribution to the NEDM, |d n | ≈ 10 −16 |θ|e·cm can be found in [3,4]. In order to refine this result, various strategies have been adopted. In lattice QCD there are essentially three possible ways for computing the NEDM (see e.g. [9][10][11][12] for a recent account). A first approach consists in computing the energy difference of neutrons with spin up and spin down in a constant external electric field (see e.g. [43]). Another one consists in taking the non-relativistic limit of the CP violating part of the matrix element of the electromagnetic current in the ground state of the neutron. Within this method, the NEDM is obtained from the electromagnetic form factor at zero momentum transfer. Finally, the NEDM can be computed by using an imaginary θ angle -to overcome the sign problem arising from the fact that the topological term is imaginary in the Euclidean Lagrangian -and then continuing back to real values. Lattice studies require a careful analysis of the quark mass dependence of the NEDM. Despite the fact that statistical errors are being reduced in recent lattice QCD computations (with N f = 2 or N f = 2 + 1 flavors) with unphysical (e.g. m π ≥ 0.5 GeV) pion masses, quite large systematic and statistical errors arise when pushing the pion mass to the smaller physical value. Most of the recent lattice results (see e.g. [9][10][11][12]) for both N f = 2 + 1 and N f = 2 point towards a negative value of d n , modulo the proviso above. In chiral perturbation theory [14] the strength of the NEDM, to which just the pion cloud contributes, turns out to be proportional to the non-derivative CP-violating pionnucleon couplingḡ π N N . To leading order in the chiral m π → 0 limit, JHEP02(2017)029 where g πN N is the CP-preserving pseudoscalar pion-nucleon coupling. Recent computations with N f = 3 at next to leading chiral order, actually give d n = −(2.9 ± 0.9) × 10 −16 θ e · cm [44] at the physical pion mass, after second order low energy parameters have been fitted with lattice data. In the large N c limit, the NEDM has been computed using the Skyrme model, both with N f = 2 + 1 massive flavors [21], yielding d n = 2 × 10 −16 θ e · cm and in the N f = 2 mass degenerate case [22], where a slightly smaller value d n = 1.4 × 10 −16 θ e · cm has been obtained. Notice that in both cases the sign of the NEDM is found to be positive. As it was pointed out in [21], the large N c Skyrme approach gives a scaling d n ∼ N c m 2 π θ when the m π → 0 limit is taken (after the large N c one). 
Comparing this with the expression found in chiral perturbation theory (8.5) we see explicitly how the noncommutativity of the large N c and the chiral limit show up. In particular no logarithmic terms are found in the Skyrme approach. The reason, as it was pointed out in [21], has to be found in the different mechanisms which give rise to the NEDM in the two cases. In the chiral limit the dominant term comes from a diagram where a neutron first dissociates in a proton and a π − . In the Skyrme approach, instead, virtual pion contributions are subleading in 1/N c . Actually, at large N c , g πN N ∼ N 3/2 c whileḡ πN N ∼ m 2 π N x c θ where the precise scaling factor x is not known. Although a first estimate gave x = 1/2 [23], a more careful analysis pointed out that x ≤ −1/2 [24]. The latter result would imply that at large N c the virtual pion contribution to the dipole moment (from (8.5)) would scale at most like d n ∼ m 2 π log(m π )θ and would thus be subdominant w.r.t. the "direct" Skyrme contribution d n ∼ N c m 2 π θ. The Skyrme computation is actually similar to the one we are going to perform for the WSS model: we can almost make a "dictionary" to translate our quantities with the ones in the Skyrme model. For instance the Skyrmion solution corresponding to a baryon here is the instanton A inst . The holographic model naturally extends the Skyrme one by including the contribution of the whole tower of vector mesons. In table 2 we summarize the estimates of the NEDM coming from different approaches, including the one in the WSS model which has already been presented in [32]. In the following we are going to review that result in detail, adding further comments. Notice that in the list a previous holographic estimate [45] appears too. That result has been obtained in a simpler and less controllable bottom-up model (hard-wall) with no string theory embedding. The currents In order to compute the NEDM using (8.4), we need to recall, from [41], how currents are holographically defined in the WSS model. Let us first introduce an external field in the theory by switching on non-normalizable modes for the gauge field A µ , so that These modes can be seen as perturbations over the background (that approach zero at infinity), whose boundary values are kept fixed. The theory is now modified and we expect an additional term in the action which is a source-current coupling. This term defines the chiral currents J µ L(R) which turn out to be given by The axial and vector currents, associated to the vector (+) and axial (−) fields are thus given by where ψ 0 = 2 π arctan(z). Working in the θ = 0 case, in [41] it has been noticed that the above expressions are consistent with the source-current term in the four-dimensional action for the mesonic JHEP02(2017)029 g a n a n µ + f π ∂ µ Π , (8.11) where a n µ and v n µ are, respectively, the axial-vector and vector mesons while Π contains the pion (non Abelian part) and the η singlet (Abelian part). The decay constants g v n and g a n are given in terms of boundary values of the eigenfunctions ψ n The fact that the vector current J V µ , as it can be read from (8.11), is expressed as a sum over the vector meson modes v n µ , reflects the complete vector meson dominance of the model. Splitting the Abelian and non Abelian parts of the currents as in (3.5) we get the isoscalar and isovector contributions. 
In particular, in the case with N f = 2 flavors, the electromagnetic current is given by Notice that the O(θ) term A mass z (7.20), modifies only the axial current J A and leaves untouched the vector current J V . We will return to the axial form factor in section 9, focusing for the moment on just the electric dipole term. Quantization reloaded The classical soliton solution we have found in section 7 has to be quantized. Both the mass term and the θ parameter could in principle give corrections to the moduli space Hamiltonian. If this is so, the eigenstates found in section 6.1 have to be modified accordingly. Crucially, however, we have found that the corrections to the Hamiltonian (i.e. those to the baryon mass formula (6.36)) are of order θ 2 for small θ: thus, at first order in θ we can forget about this issue and keep using the baryon eigenstates already found at θ = 0. Moreover, the mass term just gives rise to a O(m q ) correction to the instanton size ρ. We will neglect this correction since it will give rise to a subleading (in m q ) contribution to the NEDM. In order to compute the electromagnetic current we need to switch on the moduli of the gauge group orientations. We would also have to consider the time dependence of X I = { X, Z, ρ}, but this gives a subleading (1/N c ) effect and we neglect it for the moment. Using translational invariance, we also put X = 0. 12 12˙ X ∼ P , the momentum of the baryon, is classically zero since we work in the baryon rest frame. Clearly for P = 0 we have a non zero electric dipole moment, but it would be just a magnetic moment observed from a boosted frame. JHEP02(2017)029 Since we now want to maintain A 0 = 0, we work out a moduli space quantization in a different gauge w.r.t. the one used in (6.10). In particular, we use the following transformations 13 with V → a as z → ±∞. After these transformations the M components of the equations of motion for the gauge field remain untouched, while equation (7.13) gives the "modified Gauss law constraint" where Φ = −iV −1V and the time dependence of the moduli ρ, Z and X has been neglected. The first row is automatically zero on the solution for A mass 0 given in (7.20). Since, to compute the currents, we just need the asymptotic behavior for z → ∞, we can just linearize the remaining term as Neglecting ∂ 2 0 terms (as we are interested in slowly moving instantons), the asymptotic solution, at any time, can be given as a series expansion in the ψ n where we have implemented the boundary condition Φ → −ia −1ȧ as z → ∞. Actually, the whole sum must be independent on r when z → ∞, but it is not necessary for the present discussion to impose this requirement explicitly. The functions c n (r) contain all the information about the near core behavior of the instanton and of course they depend on the mass. At m q = 0 the solution can be found explicitly and reads (reintroducing the Z modulus dependence only for now) This just implies that Φ ∝ G( x, z) as defined in (7.10). 13 Another possible choice, which is gauge equivalent to ours, is [46] is necessary to solve the equations of motion also in the non stationary case. Defining Y so that −iY −1Ẏ = ∆(x, t) and making the gauge transformation with parameter Y allows us to find exactly (8.14) with V (x, t) = W (t)Y (x, t). Of course many other choices are possible, not necessarily related by gauge transformations; the only important requirement is that the equations of motion remain satisfied. 
The holographic computation of the NEDM The electric dipole moment is evaluated using the definition (8.4) where the operator in parenthesis is the quantum version of the time component of the electromagnetic charge (8.13). Let us first notice that the Abelian J 0 V piece actually does not contribute to the NEDM since: 1) ∂ 0 A mass z = 0; 2) [k(z)∂ z A 0 mass ] z→∞ z→−∞ is a function of r, from (7.46), and thus d 3 x x[k(z)∂ z A 0 mass ] z→∞ z→−∞ = 0 by parity. Let us thus concentrate on the contribution from the non-Abelian field. After the transformation (8.14) the non Abelian field strength F 0z becomes where, again, we have neglectedẊ I term. At first sight both A mass 0 and Φ may contribute to the NEDM. The current is easily computed from the definition (8.10) where the covariant derivatives have been replaced by ordinary derivatives because when z → ∞ the fields A inst and A mass are suppressed by powers of z −1 , so the commutators disappear when the limit is taken. The gauge structure is very simple: we have for DzΦ : At this point it is rather obvious that Φ cannot contribute to the NEDM: the form (8.17) depends only on r, so the integral is odd in x and hence it is vanishing. The matrix element is evaluated using the identity (see e.g. [47]) where σ and τ are Pauli matrices for spin and isospin respectively and the subscripts indicate the matrix elements in the standard representation. Using the above expression, we get the following formula for the "semi-classical" part of the NEDM (i.e. the result before including the ρ, Z-dependent parts of the neutron wave function) where the relation with the proton dipole moment comes from the fact that the neutron has isospin −1/2 which is the opposite for the proton. As we can see, the dipole moment is proportional to the spin of the particle, as one would expect, and the dipole moment of the neutron has an opposite sign w.r.t. the dipole moment of the proton. Factorizing the tensorial structure, we define the "semi-classical" NEDM d s.c. n , i.e. the leading order contribution in the 1/N c expansion to the NEDM, as In the following we present the numerical analysis for this quantity as a function of λ for N c = 3. The equation for W (r, z) (7.27) can be solved via standard methods of numerical integration, using for example Mathematica. The dipole is then computed using formula (8.25). The result for the NEDM as a function of λ is plotted in figure 1. This is a log-log plot of the dimensionless quantity d s.c. The NEDM can also be written as a dipole moment of a certain charge distribution whereŝ is the spin direction. One advantage of the holographic computation is the possibility to compute also the full charge distribution and not only its dipole moment. The radial charge distribution, factoring out the angular and the θ dependence and rescaling by a factor λ 2 , is plotted in figure 2 for various values of λ. We see that in the large λ limit it converges to a certain distribution. The factor λ −2 of the dipole (8.26) is thus due to an overall scaling of the charge distribution by the same factor; the charge remains always distributed over a length scale of order ∼ 1/M KK . This interesting feature is shared by other static properties of the WSS baryons, like the size of the baryon number distribution [41], which is governed by the vector meson inverse mass rather than by the instanton radius ρ cl ∼ O(λ −1/2 ). 
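The text refers to eq. (7.27) being solved "via standard methods of numerical integration" and to the dipole then being computed from formula (8.25), neither of which is written out in this excerpt. The sketch below shows the generic shape of such a pipeline: a Jacobi relaxation of an elliptic equation on an (r, z) grid followed by a radial dipole integral. The differential operator, the source, the warp factor and the final normalization are all stand-ins (clearly marked as such) and would have to be replaced by the actual WSS expressions before any number could be trusted.

```python
import numpy as np

# Stand-in elliptic problem on the (r, z) strip, relaxed with a Jacobi sweep:
#     W_rr + (2/r) W_r + k(z) W_zz + k'(z) W_z = S(r, z),  W = 0 on the boundary.
# The true equation (7.27) and its source are not reproduced here.
Nr, Nz = 160, 160
r = np.linspace(1e-3, 20.0, Nr)
z = np.linspace(-20.0, 20.0, Nz)
dr, dz = r[1] - r[0], z[1] - z[0]
k = 1.0 + z**2                                   # assumed WSS warp factor
R, Z = np.meshgrid(r, z, indexing="ij")
S = -np.exp(-R) * Z / (1.0 + Z**2) ** 2          # toy source, illustration only

W = np.zeros((Nr, Nz))
cr = 1.0 / dr**2
cz = k[None, 1:-1] / dz**2
for _ in range(20000):
    num = (cr * (W[2:, 1:-1] + W[:-2, 1:-1])
           + (W[2:, 1:-1] - W[:-2, 1:-1]) / (R[1:-1, 1:-1] * dr)
           + cz * (W[1:-1, 2:] + W[1:-1, :-2])
           + (k[None, 2:] - k[None, :-2]) * (W[1:-1, 2:] - W[1:-1, :-2]) / (4 * dz**2)
           - S[1:-1, 1:-1])
    W_new = num / (2.0 * cr + 2.0 * cz)
    delta = np.max(np.abs(W_new - W[1:-1, 1:-1]))
    W[1:-1, 1:-1] = W_new
    if delta < 1e-10:
        break

# Schematic extraction of the boundary flux q(r) ~ [k(z) dW/dz] between the two
# ends of the z range, followed by a schematic dipole integral (the prefactors
# of eq. (8.25) are omitted).
q = (k[-1] * (W[:, -1] - W[:, -2]) - k[0] * (W[:, 1] - W[:, 0])) / dz
d_sc = np.trapz(r**3 * q, r)
print("schematic dipole integral:", d_sc)
```

In the actual computation the analogue of q(r) is the dipole charge distribution discussed above, whose λ-rescaled profile the paper plots for several values of λ.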
We then perform the numerical analysis with the parameters that are most commonly used in the literature to compare the WSS model with real QCD: Using the solution found in section 7.2 the dipole moment can be also expressed as an infinite sum over vector meson modes. Taking into account the mode expansion (7.28) and the relations (8.12), the CP violating part of the non Abelian vector current reads The functions R 1,3,5,7 (r) for the numerical solution obtained above are given in figure 3. The mode expansion method neatly indicates how all the meson tower is actually contributing to the NEDM. Calculating the latter including the first one, two and three modes gives 1.09 · 10 −16 θ e · cm with one mode , 0.68 · 10 −16 θ e · cm with two modes , 0.76 · 10 −16 θ e · cm with three modes , to be compared to the full result (8.31). 14 The first mode approximates the full result with an error of about 40%. This highlights the advantage of the holographic model, which allows to include the contribution of the whole tower of vector mesons. The inclusion of the second and third modes give significant corrections. The fourth mode is already essentially irrelevant (less than 1% correction) for λ ∼ O (10). Finally, as λ increases the higher massive vector mesons become more and more important in their contribution to the NEDM. The value of the NEDM above is extracted from the model at leading order in N c . The model actually allows to calculate the 1/N c corrections coming from the quantization of the baryonic spectrum, providing their wave functions [30], as reviewed in section 6.1. Clearly, these do not constitute all the possible 1/N c corrections. Nevertheless, they represent important corrections to the result when extrapolating the model formulae, valid at large N c , large λ, to the values N c = 3, λ ∼ O (10). We use the neutron wave function defined in (6.25). The electric current has an explicit dependence on the moduli ρ, Z, as can be JHEP02(2017)029 seen from equations (7.21), (7.27). 15 So, considering the full wave function rather than the classical approximation has a non-trivial effect. Noticing that the "semi-classical" dipole moment, as given in eq. (8.25), is a function of ρ, Z, the NEDM is calculated as 16 Using the standard value of parameters (8.30), we obtain our best estimate for the NEDM value [32] d n = 1.8 · 10 −16 θ e · cm . (8.35) The quantum 1/N c correction to the semiclassical value (8.31) is thus substantial for these phenomenological values of the parameters. It is also known that the standard values for the parameters λ, M KK used above do not perform extremely well for baryonic observables, see e.g. [30]. So, it is interesting to consider a different choice obtained by fitting against data such as the form factors calculated in [41]. In appendix C we give the details on how this fit is performed. The best fit gives λ = 12. The electric dipole form factor As we have recalled at the beginning of the present section, the nucleon electric dipole moment is related to the dipole form factor at zero momentum F 3 (0). Remarkably, the WSS holographic model allows to extract the full momentum dependence of the dipole form factor. Working in Breit frame, where k µ = (0, k), we can see, from the defining expression in eq. (8.2) and following similar steps as in [41], that the electric dipole form factor of the neutron is given by 16 Technically, we solve the differential equations numerically for a suitable grid of values of ρ, Z, interpolating the obtained results. 
JHEP02(2017)029 with y ≡ x − X, k ≡ | k|. 17 This formula can be also deduced from the Fourier tranform of the dipole charge distribution (8.29). Since [k(z)∂ z W ] z→∞ z→−∞ ρ,Z is a function of r ≡ | y|, the expression above reads Thus F 3 (k 2 ), as expected, can be expanded in even powers of k, around k = 0. At k = 0, F 3 (0)/2M N precisely reproduces the NEDM as given in eq. (8.34) (see (8.25)). Notice that in our setup with N f = 2 degenerate quarks, only the isovector part of the electric dipole form factor is turned on. The complete vector meson dominance of the dipole form factor is manifest once we implement the mode expansion for [k(z) In order to extract the explicit functional dependence of F 3 (k 2 ) on the momentum, we need to compute the integral in eq. (8.39). Focusing on the k → 0 behavior, it is easy to realize that if the function is power-like suppressed at large r, the integral in (8.39) gives generically divergent coefficients for the series expansion of F 3 (k 2 ). Actually, using the instanton solution found in section 7, we have that q(r) ∼ r −7 at large r. That solution has been found neglecting subleading corrections in the small parameters θ, m q /M KK and f (see eq. (2.17)). In particular, working to leading order in the latter parameter, which weighs the flavor backreaction, is what justifies the fact that we have neglected the η mass contribution (recall that the squared Witten-Veneziano mass (3.16) scales like f ) to the equation for A mass z in section 7.1. At subleading order that contribution is generically present as it can be easily deduced starting from the effective action (7.18). In order to consistently account for that, one should also include, to this order, at least also the flavor backreaction on the background (see [33]). This would produce f -corrected functions k(z) and h(z). The equation of motion for A mass z could still possibly be solved by the ansatz A mass z = u(r)/k(z) with u(r) now being solution of the equation 18 1 17 Not to be confused with the function k(z). 18 We are considering the N f = 2, ϕ ∼ θ/2 1 case. to be compared with eq. (7.22) which is obtained in the m → 0 limit. The solution to (8.42) is thus given by This function closely resembles, in form, the expression for the η VEV obtained within the Skyrme model [21]. Crucially, u(r), whose derivative enters the source term for the function W (r, z) (see equation (7.27)), is now exponentially suppressed for large r. This in turn provides an exponential suppression to the function q(r) at large r and gives a way to regularize the computation of the form factor. We perform this computation numerically, setting Z = Z cl = 0 for simplicity (wave function corrections related to the Z modulus only give small corrections to the whole result) and adopting the standard "mesonic" choice of paramenters N c = 3, λ = 16.63. The final outcome is the plot shown in figure 4. Numerically, for small k we find (reinserting the dependence on the scale M KK ) Actually, the dipole form factor at small momenta (i.e. for k < M KK ) is fitted quite well by a dipole behavior just as it happens, both in QCD and in the WSS model [41], for the standard electric and magnetic Sachs form factors of the nucleons. The dipole behavior is quite naturally induced in models with complete vector meson dominance, thus its occurrence in the present case is not totally surprising. For k M KK , the form factor F 3 (k 2 ) neatly deviates from the dipole behavior. 
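Once the radial profile q(r) entering eq. (8.39) is known, the momentum dependence of F₃ follows from a one-dimensional integral. The sketch below illustrates this step with a placeholder, exponentially suppressed profile and an assumed spherically symmetric kernel sin(kr)/(kr) (the precise integrand of (8.39) is not reproduced in this excerpt); the assumed kernel automatically produces an expansion in even powers of k, and the small-k region is then fitted with the dipole form mentioned in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def F3_of_k(k_vals, r, q):
    """F3(k) ~ int dr 4 pi r^2 q(r) sin(kr)/(kr); placeholder kernel standing in
    for the exact integrand of eq. (8.39)."""
    kr = np.outer(k_vals, r)
    kernel = np.where(kr < 1e-8, 1.0, np.sin(kr) / np.maximum(kr, 1e-12))
    return np.trapz(4.0 * np.pi * r**2 * q * kernel, r, axis=1)

# Exponentially suppressed toy profile, mimicking the regularization provided
# by the Witten-Veneziano mass term for the singlet eta discussed in the text.
r = np.linspace(1e-4, 40.0, 4000)
q = r * np.exp(-r)                       # illustration only
k_vals = np.linspace(0.0, 3.0, 61)       # momenta in units of M_KK
F3 = F3_of_k(k_vals, r, q)

# Dipole fit at small momenta, k < M_KK, as reported in the text
dipole = lambda k, F0, Lam: F0 / (1.0 + (k / Lam) ** 2) ** 2
mask = k_vals < 1.0
(F0, Lam), _ = curve_fit(dipole, k_vals[mask], F3[mask], p0=[F3[0], 1.0])
print(f"F3(0) = {F0:.3f}, dipole scale Lambda = {Lam:.3f} (toy numbers)")
```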
Numerically, we find that it is actually exponentially suppressed with k. This feature, which turns out to show up also from a numerical analysis of the nucleon Sachs form JHEP02(2017)029 factors studied in [41], could be related to the very peculiar UV completion of the WSS model which, by construction, is expected to depart from perturbative QCD. The plot in figure 4 indicates that the scale of momentum variation of the dipole form factor is set by M KK . This observation can be complemented by defining, in analogy with the electric charge radius, the (isovector) electric dipole radius for the neutron With the parameters chosen as above we numerically get Finally, we notice that the NEDM, modified by the contribution of the Witten-Veneziano mass is now given by d n ≡ F 3 (0)/2M N ≈ 2.6 · 10 −17 θ e cm, which is smaller than the value reported in eq. (8.35). It is interesting to compare our findings with those obtained in chiral perturbation theory [48,49]. There, the pion cloud dominates the physics and the scale of momentum variation of the electric dipole form factor is set by m π . Correspondingly the dipole square radius scales like m −2 π . These results are in line with the already noticed differences between the large N c approach and the chiral one. 9 The CP-breaking pion-nucleon coupling As we have previously discussed, there are essentially two different approaches to compute the NEDM in phenomenological models: one is based on the Skyrme model [21], the other one on chiral perturbation theory [14]. This last method involves the computation of the CP breaking cubic coupling g πN N between baryons and pions. As we will show in the following, within the limiting regimes where the holographic computations have been performed, this coupling turns out to be zero, at leading order in the 1/N c expansion, in the Witten-Sakai-Sugimoto model. This statement actually allows for a CP breaking coupling which is subleading in the 1/N c expansion. We will give two different proofs of this claim; the first one, based on the form factor formalism, is given below. The axial form factors In the θ = 0 case, the matrix element for the axial current between nucleon states where C = 0, 1, 2, 3 and τ 0 = 1 2 , is given in terms of the following expansion A,P (k 2 ) are not independent in the massless theory because current conservation imposes However when the quark masses are non zero ∂ µ J µ A = 0 and this relation no longer holds. When we allow for a strong CP violation also other terms may arise. These look as the previous ones, without the γ 5 insertion. The matrix element (9.1) describes a cubic interaction between two nucleon states and the external source coupled to the field (V (−) µ in this case), so it actually computes diagrams of the type in figure 5(a). However we can imagine that the mesons are mediating this interaction and we already know their coupling (8.11) with the external field. Hence we find something of the form shown in figure 5(b). Diagrams such as that in figure 5(b) arise from effective interactions between mesons and nucleons described by L eff = n≥1 g a n N N a n µ N iγ 5 γ µ 1 2 2 N + g a n N N a n c µ N iγ 5 γ µ τ c 2 N + + 2i g πN N πN γ 5 1 2 2 N + g πN N π c N γ 5 τ c 2 N . (9.4) This is only the CP conserving part: the CP breaking one is the same but without iγ 5 . For example the coupling g πN N appears as Since the η is very massive we expect that the low energy physics is dominated by the isovector coupling g πN N . 
Let us proceed to write down the amplitude of figure 5(b) retaining only the CP conserving terms of L eff plus the CP breaking g πN N . The propagators can be read from the kinetic terms for the mesons (A.8), namely a Proca propagator and a scalar propagator (not massless in this case because the pions acquire a mass) p , s |J µ C A |p, s = 2p 0 2p 0 u p , s A µ C u( p, s) , (9.6) It is worth noticing the following feature: the relation (9.3), that holds only when m 2 π = 0, implies that the residue at the pole of g P in k 2 = 0 is proportional to g A , more precisely This is known as the Goldberg-Treitman relation. However when the pion is massive the pole of g P is displaced and the conservation of the axial current is broken also at the classical level, so this relation no longer holds. In order to have a non zero g πN N in the theory, we would need a term in the form factor proportional to (τ a ) I 3 I 3 δ s s k µ , (9.10) which means in the current a term like J µ a V = I a ∂ ∂X µ f Z, x − X , n z , n ρ |f Z, x − X |n z , n ρ = 0 . (9.11) The derivative with respect to X can be traded for a derivative with respect to x, which in Fourier transform yields k µ . The isospin operator defined in (6.21) is explicitly given by I a = −4iπ 2 κρ 2 Tr τ a aȧ −1 . (9.12) Clearly this term, which contains anȧ, can only appear in the field F a 0z , as a result of the modified Gauss Law constraint (8.15). Indeed we have The relevant term is the first one, indeed c n (r)∂ z ψ n (z) , (9.14) as we argued in (8.17). In the axial current, as it is easy to see from the definition, only the terms with even n contribute. Clearly c n for even n has to vanish for θ = 0, as a JHEP02 (2017)029 consequence of CP conservation and as it can also be inferred by (8.18) computed at Z = 0. Now the argument is simple: since c n solves an equation D 2 M Φ = 0, it gets contribution only from the non Abelian fields, the CP breaking part of those is proportional to cos θ 2 . The only way for c n to vanish at θ → 0 is to be identically zero. A more direct argument Let us notice, as in [20], that a possible way to define the g πN N coupling is to take the large r behavior of the pion expectation value in a nucleon state N |π a |N ≈ − g πN N 8πM N m π x i r 2 e −mπr σ i τ a . where the moduli dependence has been explicitly indicated. There are essentially two reasons why this does not give a CP breaking contribution to the πN N coupling. The first one is analogous to the one above: A z , being a non Abelian field, contains contributions proportional to cos θ 2 , which cannot automatically vanish in the limit θ → 0 unless g πN N is identically zero. Secondly, we would expect a precise moduli dependence from A z , namely A z,CP ∼ aȧ −1 . On the contrary, we have a dependence A z,CP ∼ a( x · τ )a −1 . (9.18) This can be explicitly checked by the solution given in section 7.3, but there is no need to do it since the one in (9.18) is the only combination compatible with the spin-isospin symmetry with no time derivatives. This dependence gives precisely the CP conserving behavior σ i τ a . Conclusions In this paper we have studied effects of the θ parameter in the Witten-Sakai-Sugimoto model [25,26], the top-down holographic theory closest to QCD. The (small) quark mass needed to make the θ parameter physical has been introduced by means of world-sheet instantons [27,28]. Let us recapitulate our main results. 
To begin with, we have studied the vacuum structure at finite θ, showing that it is identical to that of QCD, as derived from the chiral Lagrangian [13]. Then, we have analyzed the baryon spectrum, arguing that the θ parameter affects it only at subleading order (O(θ 2 )). Moreover, the existing solitonic solutions corresponding to baryons have been extended to include the leading quark mass and θ parameter corrections. We have reviewed and discussed in detail the results in [32] JHEP02 (2017)029 for the neutron electric dipole moment: we have extracted a value of the NEDM which is of the same order of magnitude as existing results in the literature based on effective models; we have discussed the dependence of the NEDM and the associated charge distribution on the theory parameters; exploiting the advantage of the holographic model on the effective theories for QCD, we have analyzed the dependence of the NEDM on higher vector mesons, showing that the first few modes are important to obtain the result at percent accuracy level. Moreover, we have presented a novel study of the full electromagnetic dipole form factor. Finally, we have argued that the CP-violating pion-nucleon coupling constant is subleading in the 1/N c expansion. Along the way, we have also pointed out a Horava-Witten-like solution to the anomalous Bianchi identity in the WSS model, which as far as we know was not present in the literature. Given the qualitative and quantitative success of the WSS model in comparing with phenomenology, it is certainly worth extending the results of this paper on the θ dependence of QCD physics. An obvious generalization concerns the calculation of the NEDM with three quarks of different masses. But it would also be worth studying nuclear observables in the same setting. A Meson sector In this appendix we give a brief review of the holographic description of mesons in the WSS model [26]. Let us consider the Yang-Mills part of the D8-brane effective action (2.22) setting N f = 1 for the moment: The expansions (2.25) for the fields A µ and A z imply that JHEP02(2017)029 The functions ψ n and φ n will be discussed in a moment and ψ means ∂ z ψ. Let us first set the ϕ (n) to zero. The action (A.1) then becomes Imposing the conditions κ dz h(z)ψ n (z)ψ m (z) = δ mn , κ dz k(z)ψ n (z)ψ m (z) = λ n δ mn , (A.4) and integrating by parts (the ψ n approach zero for z → ±∞ because of the normalization) we get the eigenvalue equations (2.26). When the λ n are ordered such that λ 1 < λ 2 < · · · it can be shown that ψ n has positive (negative) parity for n odd (even) under the transformation z → −z. The transformation (x µ , z) → (−x µ , −z) is interpreted as the holographic equivalent of the parity transformation in the boundary theory. If we use the above relations we find a Proca action for the fields B Now it is easy to include scalar fields ϕ (n) as well. As before, let us require κ dz k(z)φ n φ m = δ mn . (A.5) We can take φ n to be just φ n = ψ n / √ λ n . However there is a zero mode which is orthogonal to all the ψ n . In fact the ψ 0 mode whose derivative would be φ 0 is proportional to arctan(z): this is not normalizable by means of the integral (A.4). The field φ 0 , instead, has the correct normalization with respect to (A.5). The F µz field strength is rewritten as The gauge transformation B (n) µ → B (n) µ + m −1 n ∂ µ ϕ (n) can be used to eliminate all the ϕ (n) with n ≥ 1 from the theory; the ϕ (0) mode survives instead. 
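The eigenvalue problem (2.26), together with the normalization conditions (A.4), is a standard Sturm-Liouville problem and is straightforward to solve numerically. The sketch below does so by finite differences; it assumes the usual explicit WSS warp factors k(z) = 1 + z² and h(z) = (1 + z²)^(-1/3) (the excerpt quotes the conditions but not these functions) and works in units M_KK = κ = 1. The lowest modes come out with alternating parity under z → −z, in line with the statement above.

```python
import numpy as np
from scipy.linalg import eigh

# Sturm-Liouville form of the mode equation (2.26):
#   - d/dz [ k(z) dpsi_n/dz ] = lambda_n h(z) psi_n ,
# with the normalization  kappa * int dz h(z) psi_n psi_m = delta_nm  of (A.4).
# Explicit warp factors k(z) = 1 + z^2, h(z) = (1 + z^2)^(-1/3) are assumed here.
N = 2001
z = np.linspace(-60.0, 60.0, N)
dz = z[1] - z[0]
k = 1.0 + z**2
h = (1.0 + z**2) ** (-1.0 / 3.0)
k_mid = 0.5 * (k[:-1] + k[1:])              # k at half-integer grid points

# Finite differences on interior nodes; Dirichlet psi(+-z_max) = 0 approximates
# the decay of the normalizable modes at large |z|.
idx = np.arange(1, N - 1)
diag = (k_mid[idx - 1] + k_mid[idx]) / dz**2
off = -k_mid[1:-1] / dz**2
K = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
H = np.diag(h[idx])

lam, v = eigh(K, H)                          # generalized symmetric eigenproblem
psi = np.zeros((N, 6))
psi[1:-1, :] = v[:, :6]
psi /= np.sqrt(np.trapz(h[:, None] * psi**2, z, axis=0))   # kappa int h psi^2 = 1

print("lowest eigenvalues lambda_1..lambda_4:", np.round(lam[:4], 3))
# psi[:, 0] is even and psi[:, 1] odd under z -> -z, matching the parity
# assignment quoted in the text; the decay constants g_{v_n}, g_{a_n} of (8.12)
# are then read off from the boundary behavior of these modes.
```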
All in all we get the following four dimensional action The massless field ϕ (0) is associated to the mode ψ 0 ∝ arctan z which is an odd function: it is thus a pseudoscalar field and we interpret it as the pion field, which is the Goldstone boson of the spontaneous chiral symmetry breaking. A similar analysis can be performed to include also the massive scalar mesons: they arise as fluctuations of the embedding of the D8-branes in the background. JHEP02(2017)029 We can finally substitute these fields in the DBI action. The field strengths read Using the normalization conditions given at the beginning of this section we find where a and b are constants given by We see that we have obtained the Skyrme model (see [50] for a review) with parameters (2.27). B The C 7 andF 2 action Let us consider the following off-shell action 20 where dy ω y = 1. The three fieldsF 2 , C 7 , F are all independent. The equation of motion forF 2 gives the usual duality relation 21 which gives the on-shell action used in [26] S (2) = − 1 4π(2πl s ) 6 F 2 ∧ F 2 , (B.5) supplemented by the modified Bianchi forF 2 (B.4). 20 We are grateful to Luca Martucci for a relevant discussion about this section. 21 Note that we could start with a plus sign for the term dF2 in the action; this just amounts to a different convention of the sign of the Hodge dual ofF2.
22,517.6
2017-02-01T00:00:00.000
[ "Physics" ]
NIACIN AMELIORATES HYPERCALCIURIA AND HYPERPHOSPHATURIA DUE TO GLUCOCORTICOID ADMINISTRATION IN RATS Hypercalciuria and hyperphosphaturia are present in long-term and high-dose regimens of glucocorticoid therapy. This study aims to evaluate the effect of niacin at its pharmacological dose on calcium and phosphate disturbances due to methylprednisolone administration in growing rats. Twenty-one rats were randomly divided into three equal groups and treated as follows for 4 weeks: 1- Normal saline (Control); 2- Methylprednisolone (MP) acetate, 3.5 mg kg−1 five days a week, SC; and 3- MP acetate, 3.5 mg kg−1 five days a week, SC + niacin 200 mg kg−1 daily by oral gavage. At the end of the experiment, serum and urinary calcium and phosphate assays were performed and the calcium content of the fourth lumbar vertebra and tibia-fibula bone was determined by the atomic absorption method. No significant difference was observed in serum calcium or phosphate levels among the groups (p>0.05); however, an obvious hypercalciuria associated with hyperphosphaturia was present in the MP group compared to control (p<0.001). Niacin significantly decreased urinary calcium (p<0.001) and phosphate (p = 0.005) concentrations compared to the MP group. The calcium level was still significantly higher than control (p<0.001), while phosphate decreased even to a lower level than control (p = 0.005). The calcium content of the fourth lumbar vertebra or tibia-fibula bone remained statistically the same among the groups (p>0.05). Niacin at its pharmacological dose can ameliorate hypercalciuria and hyperphosphaturia due to long-term and high-dose glucocorticoid administration in growing rats without affecting bone calcium content. The possible clinical importance of this effect needs to be clarified in future studies. INTRODUCTION Glucocorticoids (GCs) are widely prescribed to treat immune and inflammatory conditions of different organs including the eye, skin, joints, blood, gastrointestinal and respiratory tracts, in veterinary as well as human patients. However, using systemic GCs, especially in long-term and high-dose regimens, may be associated with multiple side effects, among them changes in calcium and phosphate homeostasis and bone metabolism. Hypercalciuria is a known adverse effect of treatment with systemic GCs (Bentur et al., 2003). Duzen et al. (2012) demonstrated that GC treatment induces hypercalciuria from the start of treatment until its end, which promptly improves upon cessation of therapy. Hypercalciuria is present in 85.7% of people with Cushing's disease; dogs with Cushing's syndrome are 10 times more likely to have calcium-containing uroliths than control dogs (Faggiano et al., 2003; Hess et al., 1998). Despite hypercalciuria, plasma ionized calcium was normal in people and dogs with hypercortisolism compared with control subjects (Faggiano et al., 1982; Ramsey et al., 2005). On the other hand, hyperphosphaturia is a common observation following GC therapy or in Cushing's disease (Vrtovsnik et al., 1994). Human Cushing's patients have hypophosphatemia whereas canine patients have elevated serum phosphorus (Smets et al., 2010). Niacin (nicotinic acid or vitamin B3), which strongly increases HDL cholesterol levels and has a well-documented anti-atherosclerotic effect, has attracted new interest.
The discovery of the nicotinic acid receptor GPR109A, which has recently been renamed Hydroxy-Carboxylic Acid (HCA) receptor 2 (HCA 2 ) (Offermanns et al., 2011) has led to new research activities into the mechanisms through which nicotinic acid exerts its pharmacological effects (Gille et al., 2008;Kamanna et al., 2009a). Recent studies have shown that the nicotinic acid receptor is expressed in various cells including adipocytes, several types of immune cells and keratinocytes. Evidence suggests that nicotinic acid has lipid-independent anti-inflammatory effects (Wu et al., 2010;Lukasova et al., 2011a). Although it has been demonstrated that niacin lowers serum phosphate in dialysis patients (Muller et al., 2007), the effects of this agent on calcium and phosphate imbalance due to GC administration has not been clarified yet. The present study aims to evaluate the effect of niacin on calcium and phosphate disturbances in growing rats treated with Methyl Prednisolone (MP). Animals and Experimental Design Twenty one female Sprague-Dawley rats with about three weeks of age and a mean body weight of 220 g were purchased from animal house of Shiraz Medical University, Shiraz, Iran. Rats were acclimatized for one week before the beginning of the experiment to the ambient conditions (temperature about 23°C and a 12h/12h, light/dark cycle). Animals had free access to tap water and standard rat chow diet prepared by Razi Vaccine and Serum Research Institute, Shiraz, Iran. After adaptation, rats were randomly divided into three equal groups (n = 7 each) and treated as follows for 4 weeks: • Normal saline (Control) • MP acetate (Aburaihan pharmaceutical Co., Tehran, Iran), 3.5 mg kg −1 five days a week, SC • MP acetate, 3.5 mg kg −1 five days a week, SC + niacin (Novin Kavosh Mamtir Co., Tehran, Iran) 200 mg kg −1 daily by oral gavages Procedures used in the present study are in accordance with institutional ethical guidelines of School of Veterinary Medicine, Shiraz University, for care and use of laboratory animals in experiments. Determination of Calcium and Phosphate Levels in Serum and Urine At the end of the experiment, rats were fasted over night and voiding urine samples collected in the morning and noon. Blood samples were collected under chloroform anesthesia by cardiac puncture. After centrifugation at 2000 rpm for 20 min, harvested sera were stored in -70°C until use. Calcium and phosphate assays in pooled urinary samples (morning and noon for each animal) and sera were performed by commercial colorimetric kits prepared by Ziest Chem ® Diagnostics, Tehran, Iran. Determination of Calcium Content of Forth Lumbar Vertebrate and Tibia-Fibula Bone After blood collection, animals were euthanized by deepening anesthesia. Forth lumbar vertebrate and right tibia-fibula bone were removed and soft tissues were completely dissected. Bone samples were dried for two weeks at room temperature. Dry-Ashing was performed at 600°C for 8 h and samples were oxidized for 16 h at 100°C bath with a mixture of nitric acid 65% and perchloric acid 70% with 7/3 ratio, there after. Bone calcium content was determined by using an AA670 Shimadzu flame atomic absorption spectrophotometer. Statistical Analysis Data were presented as mean±SD. Data analysis was carried out by using one-way ANOVA and Tukey's multiple comparison tests as the post hoc (SPSS 11.5 for windows software). Differences were considered significant at p<0.05. 
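The statistical analysis described above (one-way ANOVA followed by Tukey's multiple comparison test at p < 0.05, performed in SPSS 11.5) can be reproduced with open-source tools. The snippet below is a generic illustration: the three group labels mirror the treatment arms of the study, but the numerical values are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical urinary calcium values for the three arms; n = 7 rats per group.
rng = np.random.default_rng(0)
control   = rng.normal(5.0, 1.0, 7)
mp        = rng.normal(12.0, 1.5, 7)
mp_niacin = rng.normal(8.0, 1.2, 7)

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(control, mp, mp_niacin)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's multiple comparison post hoc test, significance threshold 0.05
values = np.concatenate([control, mp, mp_niacin])
groups = ["Control"] * 7 + ["MP"] * 7 + ["MP+niacin"] * 7
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```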
RESULTS Calcium and Phosphate Levels in Serum and Urine No significant difference was observed in serum calcium or phosphate levels among the groups (p>0.05); however, an obvious hypercalciuria associated with hyperphosphaturia was present in the MP group compared to control (p<0.001 for both comparisons). Niacin significantly decreased urinary calcium (p<0.001) and phosphate (p = 0.005) concentrations compared to the MP group. The calcium level was still significantly higher than control (p<0.001), while phosphate decreased even to a lower level than control (p = 0.005). Data are summarized in Table 1. Calcium Content of Fourth Lumbar Vertebra and Tibia-Fibula Bone No significant difference was observed in the calcium content of the fourth lumbar vertebra or tibia-fibula bone of rats in the different groups (p>0.05). Data are presented in Table 2. DISCUSSION Prolonged GC use induces osteoporosis; the pathogenesis of this condition is multifactorial and includes GC-induced hypercalciuria (Duzen et al., 2012). Kruse et al. (1988) observed that GCs induce hyperphosphaturia, due to decreased renal phosphate reabsorption not mediated by secondary hyperparathyroidism, as well as marked hypercalciuria in children. These researchers recommended administration of hydrochlorothiazide to correct the hypercalciuria, and oral phosphate to correct the hypophosphatemia and to replace the phosphate over-excreted by the kidneys. Rats have been widely used as a model to study the efficacy of various treatments in the prevention of GC-induced bone loss (Li et al., 2007). In growing rats, the prevailing activity on the bone surfaces is modeling, with a linear growth which is rapid until 6 months (Erben, 1966). Our study shows that growing rats clearly exhibit hypercalciuria and hyperphosphaturia under long-term and high-dose GC treatment and may be used as a model for the evaluation of potential agents with effects on calcium or phosphate balance in this situation. As noted above, we did not observe significant changes in the serum calcium and phosphate levels of MP-treated rats. This is consistent with the findings of Wang et al. (2002), who observed that administration of MP to rats at dosages of 2.5, 5, 10 and 20 mg kg−1 day−1 for 4 weeks does not affect serum calcium and phosphorus concentrations. Those researchers did not evaluate the urinary levels of these ions. It seems that the derangement in calcium and phosphate metabolism due to MP administration is not reflected in their serum concentrations in rats. Niacin is required at doses of 15-20 mg day−1 as a vitamin. However, when given in supraphysiological doses, it exerts a variety of pharmacological effects (Lukasova et al., 2011b). Niacin administration in pharmacological doses seems to be relatively safe. Flushing and gastrointestinal symptoms such as dyspepsia, diarrhea or nausea are the most common unwanted effects of oral niacin therapy. These effects are harmless; nevertheless, flushing can affect patients' compliance. This effect has been reduced by using extended-release products (Kamanna et al., 2009b). Niacin, which strongly increases HDL cholesterol levels and has a well-documented clinical efficacy, has attracted new interest. Few studies are available which have addressed the effect of niacin on phosphorus metabolism. Recently, Bostom et al. (2011) demonstrated that extended-release niacin/laropiprant (laropiprant being a PGD2 receptor antagonist used to inhibit niacin-induced flushing) lowers serum phosphorus concentrations in diabetic patients with renal disorders.
Moreover, Maccubbin et al. (2010) observed a reduction in the serum phosphorus concentration of patients who have dyslipidemia and are free of advanced renal disease. As far as we know, no study has addressed the effect of niacin on calcium and phosphate disturbances due to GC administration, which establishes the rationale for our research. We observed that niacin significantly reduces urinary calcium and phosphate levels as compared to rats treated with GC alone. The effect of niacin on the urinary phosphate level was so strong that phosphate was reduced to levels even lower than control, without any significant change in the serum phosphate level as compared to control. Moreover, GC administration did not result in a significant reduction in the calcium content of the fourth lumbar vertebra (as a cancellous bone) or tibia-fibula (as a cortical bone) as compared to the control group, although a slight reduction was present in both bones. A possible explanation may be the relatively short period of GC treatment, such that the loss of calcium from bones to compensate serum calcium was not yet detectable by flame atomic absorption. On the other hand, the effect of niacin on the urinary calcium level was not associated with appreciable changes in the calcium content of bones as compared to the GC group, although this parameter was slightly higher in the niacin-treated group than in the GC group, especially for the lumbar vertebra. Regardless of the relatively short term of the experiment, this may indicate that the effect of niacin in ameliorating hypercalciuria due to GC administration is at least partly due to its possible effects on other organs involved in calcium homeostasis, especially the kidneys, although this is highly speculative and needs to be further investigated in future studies. CONCLUSION Niacin at its pharmacological dose can ameliorate hypercalciuria and hyperphosphaturia due to long-term and high-dose MP administration in growing rats without affecting bone calcium content. The possible mechanisms involved in this effect and its clinical importance need to be clarified in further studies. ACKNOWLEDGEMENT Funding for this study was provided by the School of Veterinary Medicine, Shiraz University, Shiraz, Iran.
2,693.6
2013-07-13T00:00:00.000
[ "Medicine", "Biology" ]
Project Performance Indicators for Measuring Construction Performance in Mumbai The aim of the study is to evaluate and rank a range of performance indicators that industrial experts regard as important, with the key identified indicators being those associated with the overall project characteristics. This paper presents the result of survey of indicators for measuring the performance of construction projects in Mumbai. A list of performance indicators is prepared based on a comprehensive literature review. These indicators grouped under 11 categories denoted as Key Performance Indicators are used to develop a survey questionnaire and RII is subsequently used to analyze the survey results and determine the relative importance and rankings of various PIs. The results reveal that the top Key Performance Indicators to evaluate the success of construction projects (in descending order) arecost, time, safety, productivity, satisfaction, quality, knowledge and service. Keywords— Performance Indicators; Project monitoring; Key performance indicators; cost; Quality; Relative importance index. INTRODUCTION Performance measurement is integral to any project and provides a basis for continuous improvement in performance. Highly competitive nature of the construction industry and profound technological changes are forcing construction executives to continuously improve the performance of their projects. It is commonly accepted that project success is measured by the performance of a project in terms of cost, time and quality [1]. The construction sector is labour-intensive, including indirect jobs, provides employment to millions of people. Considering the variety of construction projects across various sectors of economy like energy, housing, transport etc., it is necessary to identify a set of common indicators and develop a measurement scale to standardize the measures of construction project performance. OBJECTIVE OF RESEARCH The aim of the research presented is to assess the project performance process for its efficiency. This study will forward references for improvement of process based on conclusions of the study. Key Performance Indicators (KPI) are identified from the research work considering the working of Indian construction industry. The study also provides indications to effect improvements in the existing work patterns. According to [2], "performance measurement is the heart of ceaseless improvement. As a general rule, benchmarking is the next step to improve efficiency and effectiveness of products and processes." Previous studies by [3], [4], [5], [6] describes project success and associated key performance indicators. However, a pertinent question is how success/ performance can be measured to effectively test the validity of proposed performance measurement system. This is because of the long timescales involved in real-life projects and possible influence of control actions taken by project management between the various processes [7]. A Key Performance Indicator is the measure of performance of an activity that is crucial to the success of an organization. They are compilation of data measures used to assess the performance of a construction process [8]. The purpose of KPI is to deliver projects: on time, on budget, free from defects, efficiently and safely by profitable companies. [2] has identified seven indicators of performancecapital cost, construction time, predictability, defects, accidents, productivity and turnover & profits. 
[9] developed a KPI framework for the UK construction industry with seven groups: time, cost, quality, client satisfaction, client changes, business performance, and health & safety. [10] identifies eight KPIs for all construction as follows: (1) client satisfaction (product, service and value for money); (2) defects; (3) predictability (cost and time); (4) profitability; (5) productivity; (6) safety; (7) construction cost; and (8) construction time. [11] investigated project management (PM) practices adopted by Singaporean construction firms. The study finds that certain practices do affect project performance. The most important of these are the practices relating to scope management, such as controlling the quality of the contract document, the quality of responses, variations, and the extent of changes to the contract. Performance measurement is integral to performance management and provides a basis for performance improvement programs. To improve performance, organizations should both measure their performance and compare it with benchmarks [12]. Performance measurement, however, does not automatically result in improved performance. These are approaches to determine whether a process has obtained the desired result. Performance measurement enables organizations to identify areas in their operations where improvements are needed. METHODOLOGY For the current study, performance indicators (PIs) were pooled together from the literature review. Subsequently, they were rationalized: some were merged, some were deleted because they described the same measure in different terms, and some were split to improve the accuracy of measurement. The 59 PIs were thus reduced to 40 for the purpose of the current study. These performance indicators were classified under 11 performance perspectives (KPIs), namely cost, time, satisfaction, quality, people, legal, knowledge, safety, productivity, service and risk, through a preliminary survey of five construction industry experts, including project managers, engineers and academicians. The classified 40 PIs (as shown in Table 1.1) formed the basis of the questionnaire survey. The questionnaire is divided into four major parts. The first part contains questions about the details of the construction firm and the respondent. The second part consists of questions pertaining to the extent, importance and mechanism of applying PIs in construction projects, and the respondents were asked to rate each PI on a five-point Likert scale based on its influence on project performance. The third part contains additional comments, and in the fourth part respondents rank the KPIs for benchmarking construction projects in Mumbai. A total of 110 questionnaires were delivered to building construction contractors in Mumbai for the purpose of the survey, of which 22 responses were received. The mean, standard deviation, variance, Relative Importance Index (RII) and ranking of the 40 performance indicators are shown in Table 1.2. The variance of each indicator was small enough to conclude that the respondents agreed on its importance. RESULTS AND DISCUSSIONS All PIs met the requirement of reliability based on Cronbach's alpha. Cronbach's alpha values ranging from 0.944 to 0.948 and small variances indicate that the opinions in the survey are highly consistent. In order to identify the order of KPIs for project performance measurement, the mean of the PIs grouped under each KPI was calculated and the KPIs were arranged in descending order, as shown in Table 1.3.
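As a concrete illustration of the RII-based ranking described above, the following sketch computes RII = ΣW / (A × N) from five-point Likert responses and ranks the indicators. The indicator names and ratings are hypothetical placeholders, not the survey data collected in Mumbai.

```python
# Minimal sketch of the Relative Importance Index (RII) ranking described above.
# Assumes a five-point Likert scale (1 = least important, 5 = most important).
# The responses below are hypothetical; they are not the study's survey data.

def rii(ratings, highest_weight=5):
    """RII = sum(W) / (A * N), where W are the ratings, A is the highest
    possible rating and N is the number of respondents."""
    return sum(ratings) / (highest_weight * len(ratings))

# Hypothetical responses from N respondents for a few performance indicators.
responses = {
    "Cost variance":      [5, 4, 5, 5, 4, 5, 3, 4],
    "Schedule adherence": [4, 5, 4, 4, 5, 4, 4, 3],
    "Accident rate":      [5, 5, 4, 5, 4, 4, 5, 5],
    "Defect rework":      [3, 4, 3, 4, 3, 4, 3, 3],
}

# Rank indicators by RII in descending order, as done for Table 1.2.
ranking = sorted(((name, rii(r)) for name, r in responses.items()),
                 key=lambda item: item[1], reverse=True)

for rank, (name, value) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: RII = {value:.3f}")
```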
The top eight KPIs were selected on the basis of the cumulative percentage of their weightage. It is commonly accepted that project success is measured by the performance of a project in terms of cost, time and quality [13]. In the present study, quality is ranked as the sixth most important KPI. The study indicates that performance measurement, though essential, is not an easy task for construction projects, considering the number of indicators involved and the data that need to be collected on a continuous basis to achieve reasonable and acceptable levels of accuracy. Based on the ranking of KPIs, only a few of the top-ranking indicators can be used to assess performance, making the task easier. Considering the importance of each of these indicators, a weighted indicator can also be developed to express performance as a single number, as sketched below. Measurements at the individual-indicator level will help in taking corrective actions to keep the project on track.
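The single weighted indicator suggested in the conclusion can be sketched as a weighted sum of KPI-level scores. The weights and scores below are hypothetical placeholders (the weights could, for instance, be normalized RII values); they are not figures reported in this study.

```python
# Sketch of a single weighted performance score combining KPI-level measurements.
# Weights and scores below are hypothetical placeholders, not values from the study.

kpi_weights = {"cost": 0.18, "time": 0.16, "safety": 0.15, "productivity": 0.14,
               "satisfaction": 0.13, "quality": 0.12, "knowledge": 0.06, "service": 0.06}

# Project performance on each KPI, scaled to 0-1 (hypothetical assessment).
kpi_scores = {"cost": 0.80, "time": 0.70, "safety": 0.90, "productivity": 0.75,
              "satisfaction": 0.85, "quality": 0.70, "knowledge": 0.60, "service": 0.65}

# Weighted sum gives one composite number; weights are normalized to sum to 1.
total_weight = sum(kpi_weights.values())
composite = sum(kpi_weights[k] * kpi_scores[k] for k in kpi_weights) / total_weight
print(f"Composite project performance score: {composite:.3f}")
```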
1,628.2
2020-06-27T00:00:00.000
[ "Engineering" ]
MODELING AND OPTIMIZATION OF FLANK WEAR AND SURFACE ROUGHNESS OF MONEL-400 DURING HOT TURNING USING ARTIFICIAL INTELLIGENCE TECHNIQUES This work aims to model and investigate the effect of cutting speed, feed rate, depth of cut and workpiece temperature on the surface roughness and flank wear (responses) of Monel-400 during turning, and to optimize the machining parameters of this operation. A power-law model is developed for this purpose and is corroborated by comparing its results with those of an artificial neural network (ANN) model. Based on the coefficient of determination (R²), mean square error (MSE), and mean absolute percentage error (MAPE), the results of the power-law model are found to be in close agreement with those of the ANN. Also, the proposed power-law and ANN models for surface roughness and flank wear are in close agreement with the experimental results. For the power-law model, R², MSE, and MAPE were found to be 99.83%, 9.9×10⁻⁴, and 3.32×10⁻², and those of the ANN were found to be 99.91%, 5.4×10⁻⁴, and 5.96×10⁻², respectively. Errors of 0.0642% (minimum) and 8.7346% (maximum) for surface roughness, and 0.0261% (minimum) and 4.6073% (maximum) for flank wear, were recorded between the predicted and experimental results. In order to optimize the objective functions obtained from the power-law models of surface roughness and flank wear, a genetic algorithm (GA) was used to determine the optimal values of the operating parameters and of the objective functions. Optimal values of 2.1973 μm and 0.256 mm were found for surface roughness and flank wear, respectively. Introduction Hard turning is the turning of materials with hardness in the range of 45 to 68 HRC (Fig. 1). Compared with grinding, hard turning has many advantages in addition to lower operating cost, such as a faster metal removal rate, reduced cycle time, good surface finish and environmental friendliness [1]. In machining, the material is strain-hardened due to the presence of retained austenite. Modern machining industries aim to produce components of good quality at low cost in minimum time. To achieve good cutting performance in turning, the selection of optimum cutting parameters is important. Several researchers have evaluated the machinability of hardened materials in terms of cutting force, surface roughness and tool wear. However, turning hard material to obtain minimum surface roughness with minimum tool wear is difficult. Katuku et al. [2] conducted experimental work in dry cutting conditions on austempered ductile iron (ASTM Grade 2). The cutting forces, chip characteristics and tool wear were analyzed with PCBN cutting tools. The results revealed that the optimum cutting speed range for better tool life and flank wear is 150 to 500 m/min. In another work, Marcelo Vasconcelos de Carvalho et al. [3] investigated the machinability of ADI (ASTM grades 2 and 3). It was reported that minimum surface roughness and higher tool wear were observed when turning ADI grade 3 with a higher tool nose radius. In another work, Tuğrul Özel and Yiğit Karpat [4] developed prediction models for surface roughness and tool wear in hard turning with CBN inserts using regression and neural networks. Minimum surface roughness was obtained at high workpiece hardness with high cutting speed. Higher tool wear was obtained at higher cutting speeds with lower feed rates. A lower feed rate gives a good surface finish. Zahia Hessainia et al.
conducted experimental work on hard turning in which the surface roughness was predicted from the cutting parameters and tool vibrations. A mixed ceramic Al2O3/TiC cutting tool was used. They found that the feed rate was a more dominant factor than tool vibration in affecting the surface roughness [5]. Mustafa Gunay and Emre Yuce applied the Taguchi method to optimize cutting conditions for surface roughness in turning of high-alloy white cast iron. Mandal et al. [6] investigated the optimization of cutting parameters for tool flank wear using a newly developed Zirconia Toughened Alumina (ZTA) cutting tool. The Taguchi method and regression analysis were used to optimize the cutting parameters. It was observed that tool wear was highly affected by the depth of cut. Nickel-based alloys have found wide application ranging from the automobile to the aircraft sector owing to properties such as excellent tensile strength, corrosion resistance and the ability to withstand elevated temperatures [7]. Monel-400, a solid solution of Ni and Cu, is one of the nickel-based alloys in this category. Machining of such materials by conventional methods encounters several problems, including rapid tool wear, excessive cutting forces and more pronounced surface roughness [8]. Nickel-based alloys have been machined by different operations such as hard turning and electro-discharge machining; these operations have limitations due to high cutting tool cost and low metal removal rate (MRR). Hot machining offers a good opportunity to machine these alloys, and several investigations of hot machining have been conducted. Parida and Maity [9] investigated the machinability of several nickel-based alloys at elevated temperatures. The machinability in the hot condition was improved compared with that at room temperature. Tool life was investigated by Ozler et al. [10] during the hot machining of high-manganese steel using flame heating. It was found that cutting speed has a greater effect on tool life than feed rate and depth of cut. Ginta et al. [11] found that the machinability of titanium alloys at elevated temperatures is better than that at room temperature. Similar results have been reported by other researchers regarding induction-heat machining, laser-assisted machining and plasma-assisted machining [12]. Optimization of machining parameters is necessary as it directly influences the cost, time and reliability of machining operations. Ranganathan and Senthilvelan [13] used a multi-objective optimization method in hot machining of AISI 316 using the grey Taguchi method. They took surface roughness, material removal rate and tool life as system responses. Optimization of machining parameters using flame heating has been studied in turning of Monel-400 [14], Inconel 625 [13] and Ni-hard material [16] for improving machinability, using grey Taguchi, desirability and data envelopment analysis methods. Zhang et al. [17] implemented a combined method of RSM and a non-dominated sorting genetic algorithm to optimize wire electro-discharge machining parameters. Aouici et al. [18] applied response surface methodology to optimize the effect of the cutting parameters on surface roughness, cutting force, specific cutting force, and power consumption in hard turning of AISI D3 steel.
Feed rate was found to be the most influential parameter affecting cutting force and surface roughness compared to the other parameters. Gupta et al. [19] studied the mathematical modeling of surface roughness, tool wear and power consumption in turning using response surface methodology combined with an artificial neural network and support vector regression. Aouici et al. [20] applied RSM to investigate cutting force and surface roughness for different hardness levels of AISI H11 steel. Koyee et al. [21] optimized the machining parameters for flank wear, chip volume ratio, cutting force and cutting power using response surface methodology combined with a cuckoo search algorithm in turning of duplex steel. Parida [22] discussed the chip geometry in the hot machining of Inconel 718 and concluded that chip geometry measures such as the degree of segmentation, serration frequency, and equivalent chip thickness decreased with increasing heating temperature. Venkatesh and Chandrakar [23] analyzed the heat-assisted turning of a nickel-base alloy and concluded that heating the surface of the workpiece reduces cutting force, surface roughness and tool wear compared to room-temperature machining. Nickel-based alloys have been studied through experimental investigations and modeling by several researchers, but only one article could be found in the published literature on modeling of hot machining of Monel-400 using response surface methodology. In order to obtain a better and more accurate model, power-law and ANN models have been used here for modeling the hot machining of Monel-400. The optimization of the machining operation has been carried out through GA, where the power-law model has been used as the objective function. Palani et al. [24] developed a mathematical model for Ra, tool wear ratio and MRR in terms of the machining parameters, and the model developed was used as a desirability function for carrying out the optimization of the machining parameters. Durairaj and Gowri [25] investigated Ra and tool wear during the machining of Inconel-600, using a genetic algorithm for parametric optimization to improve tool life and surface finish. A multi-pass turning parameter optimization was performed by Rao and Kalyankar [26] using a teaching-learning-based optimization algorithm; the results were compared with GA and particle swarm optimization techniques. Asiltürk et al. [27] performed optimization of the parameters that influence Ra in Co28Cr6Mo material using Taguchi's method, and concluded that tool tip radius is the dominant factor affecting surface quality. Selvakumar and Ravikumar [28] conducted optimization for minimum tool wear and surface roughness during machining of a titanium alloy. Power-law model The relationship between the outputs, i.e. surface roughness and flank wear, and the machining parameters can be expressed as

y_i = C_i · v^(p_i) · f^(q_i) · d^(r_i) · T^(s_i),  i = 1, 2   (1)

where subscripts 1 and 2 correspond to surface roughness and flank wear, i.e. y_1 and y_2 are surface roughness and flank wear, v is cutting speed, f is feed rate, d is the depth of cut, and T is temperature. The unknown constants C_i, p_i, q_i, r_i and s_i are determined from the experimental data. In order to find these constants, Eq. (1) is linearized by logarithmic transformation, giving

ln(y_i) = ln(C_i) + p_i·ln(v) + q_i·ln(f) + r_i·ln(d) + s_i·ln(T)   (2)

Eq. (2) can be rewritten as a linear mathematical model whose coefficients are estimated from the experimental data. ANN model The capacity of ANN to solve nonlinear problems has attracted the attention of researchers to machining problems, so it has been used in this work too. An ANN has a number of layers that depends on the complexity and type of the problem.
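To make the linearization of the power-law model concrete, the sketch below fits the exponents by ordinary least squares in log space, as implied by Eqs. (1) and (2). The symbol names, synthetic data, and use of NumPy are illustrative assumptions; the paper's actual coefficients come from its own Monel-400 experiments.

```python
import numpy as np

# Sketch: fit y = C * v**p * f**q * d**r * T**s by taking logs and solving a
# linear least-squares problem (cf. the logarithmic transformation of Eq. (1)).
# The data below are synthetic placeholders, not the Monel-400 measurements.

rng = np.random.default_rng(0)
n = 30
v = rng.uniform(50, 150, n)     # cutting speed
f = rng.uniform(0.05, 0.2, n)   # feed rate
d = rng.uniform(0.3, 0.8, n)    # depth of cut
T = rng.uniform(30, 300, n)     # workpiece temperature

# Synthetic "measured" response following an assumed power law plus noise.
y = 1.8 * v**-0.2 * f**0.4 * d**0.15 * T**0.05 * rng.lognormal(0, 0.02, n)

# Design matrix in log space: ln y = ln C + p ln v + q ln f + r ln d + s ln T
X = np.column_stack([np.ones(n), np.log(v), np.log(f), np.log(d), np.log(T)])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
lnC, p, q, r, s = coef
print("C =", np.exp(lnC), "exponents:", p, q, r, s)

# Goodness of fit (R^2) of the back-transformed predictions.
y_hat = np.exp(X @ coef)
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot)
```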
In general, it has an input layer, a hidden layer and an output layer. The input data are processed in the hidden layer. The hidden layer computes its output, which is further processed in the output layer to produce the final results. The hidden layer and the output layer compute results based on transfer functions. In this work, tansig and purelin functions were used as the transfer functions in the hidden layer and output layer, respectively, as given in Eqs. (8a) and (8b). The schematic representation of the ANN model is shown in Fig. 2. The ANN is initiated by training, where the input, along with the output, is introduced to the network, and the weights are set randomly. To achieve a satisfactory level of performance, the weights are altered by the backpropagation algorithm to minimize the mean square error (MSE). In the backpropagation technique, the weights are adjusted by propagating weight changes back from the output neurons to the input neurons [29]. The training process is stopped when a satisfactory level of performance is attained. The trained network then uses these weights to make predictions. The MATLAB toolbox was used for ANN modeling in this paper. The parameters used for the network are tabulated in Table 1. Several independent runs with different initial random weights were performed to achieve the best possible solution. The MSE during the learning process of the network was evaluated, and the weights were updated according to the standard backpropagation rule with momentum,

Δw(t) = η·δ(t)·y(t) + α·Δw(t−1)

where Δw(t) is the change in weights, α is the momentum coefficient, δ(t) is the error, η is the learning rate and y(t) is the output. After successful training, the network was tested with experimental data that had not been presented during training. The results were again compared using R² and MSE. R² is defined as the proportion of the variance in the dependent variable that is predictable from the independent variables and is given by R² = 1 − SS_res/SS_tot. Experimental setup and case study The experiments conducted by Parida et al. [30] on Monel-400 to measure the surface roughness and flank wear were used for the present work. The tests were performed on an HMT centre lathe with 1200 rpm maximum speed and 6 kW spindle power. A round bar of Monel-400 with a diameter of 40 mm and a length of 300 mm was used in the experiments. TiN-coated inserts, fitted to a PSBNR 2525 M12 tool holder, were utilized for the machining operation. To avoid error in the measurements, each experimental run was carried out three times and a new cutting edge of the tool was used for each run. The flank wear of the cutting tool and the roughness of the machined surface were measured using an optical microscope and a Taylor Hobson Surtronic S-100 Series surface roughness tester with a cut-off value of 0.8 mm, respectively. Results and discussion Power-law and ANN models have been used for modeling surface roughness and flank wear during hot turning of Monel-400. The model parameters of the power-law equation were determined from experimental data. The same data set was used for training and validation of the ANN model to allow comparison between the results of the two models. The fitted models are given in Eqs. (12) and (13); the flank wear model implies that flank wear increases with increasing cutting speed, feed rate and depth of cut, but decreases with increasing temperature. Temperature and cutting speed were found to be the most influential parameters affecting flank wear and surface roughness, respectively.
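The ANN just described (a tansig hidden layer with a purelin output, trained by backpropagation in MATLAB) can be approximated in Python with scikit-learn's MLPRegressor, using a tanh hidden layer and its default linear output. This is a rough, assumed stand-in rather than the authors' MATLAB network; the 20/5 train/validation split mirrors the paper, but the data and hidden-layer size are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error, r2_score

# Rough Python stand-in for the MATLAB ANN described above: a single
# tanh ("tansig"-like) hidden layer with a linear ("purelin"-like) output.
# X holds [cutting speed, feed rate, depth of cut, temperature] and y the
# measured response; synthetic placeholders are used here instead of data.

rng = np.random.default_rng(1)
X = rng.uniform([50, 0.05, 0.3, 30], [150, 0.2, 0.8, 300], size=(25, 4))
y = 1.8 * X[:, 0]**-0.2 * X[:, 1]**0.4 * X[:, 2]**0.15 * X[:, 3]**0.05

X_train, y_train = X[:20], y[:20]   # 20 rows for training, as in the paper
X_val, y_val = X[20:], y[20:]       # 5 rows held out for validation

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                 solver="lbfgs", max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)

pred = model.predict(X_val)
print("validation MSE:", mean_squared_error(y_val, pred))
print("validation R^2:", r2_score(y_val, pred))
```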
The ANN was trained on 20 input values and validated on 5 of the 30 experimental data points for both surface roughness and flank wear. The correlation between predicted and experimental values of surface roughness and flank wear for the ANN model is given in Fig. 3 (correlation of parameters for surface roughness and flank wear). Further, the comparison of the surface roughness and flank wear obtained from the power-law and ANN models with the experimental results is shown in Fig. 4, Fig. 5 and Table 2. It can be seen from Fig. 4 and Fig. 5 that the maximum error for surface roughness is about 8.5% and 4% for the power-law and ANN models, respectively. Similarly, the maximum error for flank wear is about 20% and 17% for the power-law and ANN models, respectively. Optimization of machining parameters Multi-objective optimization using the genetic algorithm (GA) is an efficient method for solving nonlinear and constrained problems. GA originated from the principles of natural genetics and has been widely used for engineering problems (Zain et al. [31]). GA creates a Pareto front over multiple outputs for optimal selection of parameters. The output of a GA depends on the population size, selection type and GA operators, i.e. mutation and crossover. In the present work, tournament selection was used to select individuals from the population at random. Crossover combines two individuals to form offspring for the next generation, while mutation causes random changes in an individual to widen the search space and maintain genetic diversity. An adaptive-feasible-type function is used to select the search direction based on the last successful generation. The aim of multi-objective optimization in this work is to establish optimum conditions for the chosen responses, surface roughness and flank wear. Eqs. (13) and (14) were used as the objective functions. The Pareto front is shown in Fig. 6 and presented in Table 3; it consists of a set of possible solutions. Based on the priority given to each response variable, a particular combination is selected. In the present work, equal priority is given to surface roughness and flank wear, and the corresponding operating parameters are chosen as the optimum parameters. Optimum values of surface roughness and flank wear were found to be 2.1973 μm and 0.2565 mm, respectively. The corresponding parameters are cutting speed = 99.2758 mm/min, feed rate = 0.1014 mm/rev, depth of cut = 0.5003 mm, and temperature = 92.9177 °C. Conclusion Models for surface roughness and flank wear during the hot turning of Monel-400 are obtained using power law and ANN in this paper. The influences of the machining parameters on surface roughness and flank wear have been analyzed based on the proposed models. The optimal values of the machining parameters were determined by multi-objective optimization using a genetic algorithm, with the power-law model used as the objective function. The following conclusions were drawn from this work: The R², MSE and MAPE of the power-law model were found to be 99.83%, 9.9 × 10⁻⁴ and 3.32 × 10⁻², and those of the ANN model were found to be 99.91%, 5.4 × 10⁻⁴ and 5.96 × 10⁻², respectively. It was concluded from these statistical parameters that the proposed models are competent to predict the surface roughness and flank wear. The surface roughness decreased with increasing cutting speed and feed rate, whereas an increase in temperature and depth of cut caused an increase in surface roughness. An increase in cutting speed, feed rate and depth of cut led to an increase in flank wear.
However, an increase in temperature up to a specific limit decreased the tool wear; beyond that limit, the flank wear increased with further increase in temperature. Temperature was the most influential factor affecting flank wear, whereas cutting speed was the most influential factor affecting surface roughness. Using a genetic algorithm, the optimal values of the machining parameters (cutting speed, feed rate, depth of cut and temperature) were found to be 99.2758 mm/min, 0.1014 mm/rev, 0.5003 mm and 92.9177 °C, respectively. The corresponding values of surface roughness and flank wear were found to be 2.1973 μm and 0.2565 mm, respectively.
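To illustrate the multi-objective selection described in the optimization section, the sketch below samples the parameter space, evaluates the two responses, and keeps the non-dominated (Pareto-optimal) points. It is a simplified random-search stand-in for the genetic algorithm, and the power-law coefficients in it are placeholders, not the fitted Eqs. (13) and (14).

```python
import numpy as np

# Simplified stand-in for the GA-based multi-objective optimization: sample the
# parameter space, evaluate both responses, and keep the non-dominated points.
# The coefficients below are placeholders, not the paper's fitted equations.

def surface_roughness(v, f, d, T):
    return 1.8 * v**-0.2 * f**0.4 * d**0.15 * T**0.05

def flank_wear(v, f, d, T):
    return 0.05 * v**0.3 * f**0.2 * d**0.25 * T**-0.1

rng = np.random.default_rng(2)
params = rng.uniform([50, 0.05, 0.3, 30], [150, 0.2, 0.8, 300], size=(2000, 4))
objs = np.column_stack([surface_roughness(*params.T), flank_wear(*params.T)])

# A point is dominated if some other point is at least as good in both
# objectives and strictly better in one (both objectives are minimized).
def is_dominated(i):
    better_eq = np.all(objs <= objs[i], axis=1)
    strictly = np.any(objs < objs[i], axis=1)
    return np.any(better_eq & strictly)

pareto = [i for i in range(len(objs)) if not is_dominated(i)]
print(f"{len(pareto)} non-dominated points out of {len(objs)} samples")
print("example trade-off (roughness, wear):", objs[pareto[0]])
```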
4,026.4
2020-04-16T00:00:00.000
[ "Engineering", "Materials Science", "Computer Science" ]
Inhibition of Oxidative Stress and Inflammation by Fisetin Ameliorates Heat Stress-Induced Intestinal Injury in Rats Intestinal injury and dysfunction play an important role in the pathophysiology of heat stress. The objective of this study was to determine whether fisetin could ameliorate heat stress-induced intestinal oxidative stress and inflammation, and to explore the possible mechanisms at the transcriptional level. Twenty-four male Sprague-Dawley rats aged 8 weeks were randomized to 3 groups, namely control, heat stress, and heat stress + fisetin (HS-FIS). The experiment lasted for 3 days with daily 1.5 h of heat treatment (40°C) for the heat stress and HS-FIS groups. Rats of the HS-FIS group were orally given 100 mg fisetin/kg body weight/day before the heat treatment. The results showed that fisetin alleviated the heat stress-induced jejunal morphological damage and the increase in intestinal permeability, which may be attributed to the improved redox status, the decreased myeloperoxidase activity, the suppressed toll-like receptor 4 signaling-mediated expression of the pro-inflammatory cytokine tumor necrosis factor alpha at the translational and transcriptional levels, and the increased gene expression of interleukin 10 in the jejunum. In conclusion, fisetin alleviated the intestinal injury caused by heat stress in rats through inhibition of oxidative stress and inflammation. This may offer a useful nutritional strategy for improving the health status of individuals exposed to heat stress. INTRODUCTION With climate change and global warming, the adverse effects of heat stress (HS) on human and animal health are becoming serious. HS causes damage to multiple organs and a high rate of mortality. Individuals exposed to HS are vulnerable to intestinal injury and dysfunction, as indicated by morphological alteration (He et al., 2015), reduced absorption and digestion of nutrients (Wu et al., 2021), enhanced oxidative stress, overactivation of inflammation and increased paracellular permeability. Importantly, intestinal injury and dysfunction play a pivotal role in the pathophysiology of HS, as observed in clinical studies (Snipe, 2019) and animal models (Ye et al., 2019). Therefore, the search for preventive and/or therapeutic strategies that could alleviate the adverse effects of HS on the intestine is a major concern. In a previous study, oral fisetin (FIS) administration elevated the glutathione (GSH) level and suppressed the infiltration of inflammatory cells and the production of pro-inflammatory cytokines (e.g., tumor necrosis factor alpha (TNF-α), interleukin (IL)-6, and IL-1β), reactive oxygen species (ROS) and reactive nitrogen species in the colon tissues of colitis mice exposed to dextran sulphate sodium. However, data related to FIS modulation of intestinal health are limited. We hypothesized that FIS could attenuate the HS-induced intestinal damage in rats owing to its excellent antioxidant and anti-inflammatory properties. In the present study, the beneficial effects of FIS on intestinal morphology and oxidative and immune status in heat-stressed rats, as well as the possible mechanisms at the transcriptional level, were explored. Animals and treatments Male Sprague-Dawley rats, aged 8 weeks and weighing 200±20 g, were acclimated to the environment (temperature, 20-24 °C; humidity, 40-60%; 12 h light/dark cycle) for 1 week. During the entire experimental period, including acclimation, rats were provided with tap water and a standard chow diet ad libitum under normal conditions.
Rats were then allocated into 3 groups (n=8): (1) the control (CON) group: rats were fed 0.5% carboxymethylcellulose sodium (CMC-Na, diluted in 0.86% normal saline; Sinopharm Chemical Reagent Co., Ltd., Shanghai, China) by oral gavage for 3 days; (2) the HS group: rats were fed 0.5% CMC-Na by oral gavage for 3 days under the HS environment (1.5 h per day at 40 °C from 11:30 am to 1:00 pm for 3 consecutive days); and (3) the HS-FIS group: rats were fed 100 mg FIS/kg body weight/day (purity 98%; diluted in 0.5% CMC-Na; Yuanye Biotechnology Co. Ltd, Shanghai, China) by oral gavage for 3 days under the HS environment. The CMC-Na or FIS was provided for 3 consecutive days at 2 h before the HS treatment. The FIS dose in the present study was selected according to Lee et al. (2015). Sample collection After the heat treatment on the third day, all rats were anesthetized and quickly sacrificed. Blood was collected from the eyeball of each rat and centrifuged at 2000 g (15 min, 4 °C) to harvest serum. The serum was stored at -80 °C until subsequent analysis. The procedure for jejunum sample collection was performed according to the method of Lu et al. (2011). Part of the jejunum was fixed in 4% buffered paraformaldehyde for histological analysis, and another part was immediately snap-frozen in liquid nitrogen for further analysis. Diamine oxidase (DAO) activity The activity of DAO (catalog No. A088-1) in the serum was determined using a commercial kit purchased from Nanjing Jiancheng Bioengineering Institute (Nanjing, China). Histology analysis The fixed jejunum sample was dehydrated and embedded in paraffin. Five-µm sections were cut and then stained with hematoxylin and eosin (HE). Ten well-oriented, intact villi and their associated crypts per rat were selected, and images were recorded using an optical binocular microscope (Olympus BX5; Olympus Optical Co. Ltd, Tokyo, Japan) equipped with a digital camera (Nikon H550L; Nikon, Tokyo, Japan). Measurements of the villus length, crypt depth, and villus width of the jejunum were made using the Image-Pro Plus software (version 6.0, Media Cybernetics, Inc., Rockville, MD, USA). According to the method described previously (Dong et al., 2014), the villus:crypt ratio and villus surface area were calculated. Oxidative status assay As described in a previous study, the jejunal malondialdehyde (MDA, catalog No. A003-1) concentration, total superoxide dismutase (T-SOD, catalog No. A001-1) and glutathione peroxidase (GPX, catalog No. A005) activities, as well as total antioxidant capacity (T-AOC, catalog No. A015-1) and GSH (catalog No. A006-2) levels, were determined using assay kits according to the guidelines of the manufacturer (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). All results were normalized to the total protein concentration in each sample for inter-sample comparison. The jejunal total protein concentration was determined according to the guidelines of the manufacturer (catalog No. A045-3, Nanjing Jiancheng Bioengineering Institute, Nanjing, China). Cytokine assays by ELISA A commercial ELISA kit (catalog No. EK382/3-96) purchased from Multisciences Biotech Co., Ltd (Hangzhou, China) was employed to analyze the jejunal TNF-α concentration following the manufacturer's instructions. The detection limit was 0.43 pg/mL; the inter- and intra-assay coefficients of variation were less than 7% and 9%, respectively.
All results were normalized to the total protein concentration in each sample for inter-sample comparison. The jejunal total protein concentration was determined according to the guidelines of the manufacturer (catalog No. A045-3, Nanjing Jiancheng Bioengineering Institute, Nanjing, China). Myeloperoxidase (MPO) activity assay The MPO activity (catalog No. A044) and total protein concentration (catalog No. A045-3) in the jejunum were analyzed using a commercial kit following the instructions of the manufacturer (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). All results were normalized to the total protein concentration in each sample for inter-sample comparison. Fig. 2. Effects of fisetin on the serum diamine oxidase (DAO) activity in heat-stressed rats. Results are expressed as mean and standard error (n=8); * P<0.05 compared with the CON group; # P<0.05 compared with the HS group. Quantitative RT-PCR analysis Total RNA from the samples was extracted using TRIzol Reagent (TaKaRa, Dalian, China) according to the guidelines of the manufacturer. The integrity, concentration, and purity of RNA, the reverse transcription, and the qRT-PCR were assessed and performed according to previous studies (Cheng et al., 2016). The primer sequences of the genes used in this study are presented in Table I. The target gene expression levels were normalized to the housekeeping gene β-actin and calculated via the 2^(−ΔΔCt) method (Livak and Schmittgen, 2001). The values of the CON group were used as the calibrator. Statistical analysis Results are expressed as mean and standard error and were analyzed with SPSS 17.0. The individual rat was used as the experimental unit. Statistical differences between groups were determined via one-way analysis of variance (ANOVA) and Tukey's post hoc test for multiple comparisons. Significance was accepted at P < 0.05. FIS reduces the severity of intestinal injury in rats subjected to HS As shown in Figure 2, the DAO activity in the serum was significantly higher in rats exposed to HS than in the CON group (P<0.05). Also, the jejunal matrix metalloproteinase 3 (MMP3, P=0.079) and heat shock protein 70 (HSP70, P<0.05) gene expression levels were increased in the HS group compared with the CON group (Fig. 3A).
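The relative expression calculation used above (the 2^(−ΔΔCt) method, normalized to β-actin with the CON group as calibrator) can be illustrated with a short script. The Ct values below are hypothetical placeholders, not measurements from this study.

```python
import numpy as np

# Illustration of the 2^-(delta-delta-Ct) method (Livak and Schmittgen, 2001)
# with the housekeeping gene beta-actin and the CON group as calibrator.
# All Ct values below are hypothetical placeholders, not the study's data.

def relative_expression(ct_target, ct_ref, calib_delta_ct):
    delta_ct = np.asarray(ct_target) - np.asarray(ct_ref)   # normalize to beta-actin
    delta_delta_ct = delta_ct - calib_delta_ct               # relative to calibrator
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values for a target gene (e.g., TNF-alpha) and beta-actin.
con_target, con_actin = np.array([26.1, 26.4, 26.0]), np.array([17.2, 17.3, 17.1])
hs_target,  hs_actin  = np.array([24.8, 24.9, 25.1]), np.array([17.2, 17.1, 17.3])

calib = np.mean(con_target - con_actin)  # mean delta-Ct of the CON (calibrator) group
print("CON fold change:", relative_expression(con_target, con_actin, calib))
print("HS  fold change:", relative_expression(hs_target, hs_actin, calib))
```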
Rats in the HS group exhibited jejunal villus atrophy and shedding (Fig. 4). The jejunal villus height and villus height:crypt depth ratio were significantly lower (Table II, P<0.05) in rats exposed to HS than in the CON group. Administration of FIS to rats exposed to HS significantly decreased the serum DAO activity and the jejunal HSP70 gene expression, improved jejunum morphology, and increased the jejunal villus height, villus surface area, villus width and villus height:crypt depth ratio in the HS-FIS group compared with the HS group (P<0.05). However, crypt depth and the I-FABP and villin mRNA expression in the jejunum were not affected among the 3 groups (P>0.05). FIS attenuates the jejunal oxidative stress in rats subjected to HS The higher MDA content and T-SOD and GPX activities in the jejunum of heat-stressed rats were reduced by FIS administration (Table III, P<0.05). However, the T-AOC and GSH levels in the jejunum were comparable among the 3 groups (P>0.05). At the transcriptional level, administration of FIS to rats exposed to HS alleviated the increased nuclear factor, erythroid 2-like 2 (Nrf2) and GPX1 expression compared with the HS group (Fig. 3B, P<0.05). The gene expression of SOD1 and Kelch-like ECH-associated protein 1 (Keap1) in the jejunum was not influenced by HS or FIS treatment (P>0.05). FIS relieves the jejunal inflammation in rats subjected to HS The TNF-α concentration was higher in the jejunum of the HS group than in the CON group (Fig. 5A, P<0.05). FIS treatment of heat-stressed rats decreased the jejunal TNF-α concentration and MPO activity (Fig. 5B) compared with the HS group (P<0.05). At the transcriptional level, rats in the HS group exhibited higher jejunal TNF-α, IL10 and toll-like receptor 4 (TLR4) mRNA expression than the CON group (Fig. 3C, P<0.05). As expected, increased IL10 and decreased TNF-α and TLR4 gene expression were observed in the jejunum of the HS-FIS group compared with the HS group (P < 0.05). The jejunal IL6 and interferon γ (IFN-γ) gene expression was not affected (P > 0.05) among the 3 groups. Fig. 3. Effects of fisetin on the gene expression related to jejunal injury markers (A), redox status (B) and inflammation (C) in heat-stressed rats. HSP70, heat shock protein 70; I-FABP, intestinal fatty acid-binding protein; MMP3, matrix metalloproteinase 3; GPX1, glutathione peroxidase 1; Nrf2, nuclear factor, erythroid 2-like 2; SOD1, superoxide dismutase 1; Keap1, Kelch-like ECH-associated protein 1; IL6, interleukin 6; IL10, interleukin 10; TLR4, toll-like receptor 4; TNF-α, tumor necrosis factor alpha; IFN-γ, interferon γ. Results are expressed as mean and standard error (n=8); * P<0.05 compared with the CON group; # P<0.05 compared with the HS group. Fig. 4. The jejunal histological appearance (hematoxylin and eosin): (A) CON; (B) HS; (C) HS-FIS. Original magnification 100×; scale bars = 100 μm.
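The group comparisons reported above follow the statistical analysis described in the Methods (one-way ANOVA with Tukey's post hoc test, performed in SPSS). The sketch below reproduces the same procedure in Python with SciPy and statsmodels as an assumed stand-in; the measurements are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Sketch of the group comparison described in the statistical analysis section:
# one-way ANOVA followed by Tukey's post hoc test across CON, HS and HS-FIS.
# The measurements below are hypothetical placeholders, not the study's data.

con    = np.array([410, 425, 398, 440, 415, 430])   # e.g., villus height, um
hs     = np.array([320, 300, 335, 310, 325, 315])
hs_fis = np.array([390, 375, 400, 385, 410, 395])

f_stat, p_value = f_oneway(con, hs, hs_fis)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

values = np.concatenate([con, hs, hs_fis])
groups = ["CON"] * len(con) + ["HS"] * len(hs) + ["HS-FIS"] * len(hs_fis)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```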
Fig. 5. Effects of fisetin on the jejunal tumor necrosis factor alpha (TNF-α) level (A) and myeloperoxidase (MPO) activity (B) in heat-stressed rats. Results are expressed as mean and standard error (n=6); * P<0.05 compared with the CON group; # P<0.05 compared with the HS group. DISCUSSION In the present study, the effects of oral FIS administration on the jejunum of rats subjected to HS were investigated for the first time. As expected, FIS relieved the heat stress-induced intestinal damage, as demonstrated by the decreased serum DAO activity, improved histologic structure, and inhibited oxidative stress and inflammation. The beneficial effects of FIS on the intestine of heat-stressed rats may be attributed to the decreased gene expression of TLR4, reduced TNF-α expression, increased IL10 gene expression, and suppressed MPO activity. The intestine is one of the first and most susceptible organs negatively affected by hyperthermia challenges, because animals redistribute blood to the periphery to maximize radiant heat dissipation (Pearce et al., 2014). When exposed to HS, the synthesis of most proteins is delayed, but HSPs are rapidly synthesized (Al-Aqil and Zulkifli, 2009). Among the HSPs, HSP70 is the most conserved and most common family and is abundant in various tissues in most organisms. There is ample evidence that the transcription of HSP70 is rapidly induced by high temperature (Tedeschi et al., 2015; Song et al., 2017; Cheng et al., 2019). Therefore, the expression of HSP70 in the intestine is a reliable biomarker of the thermotolerance response. Similarly, in this study, heat exposure led to an increase in the jejunal HSP70 mRNA expression. As expected, FIS restored the heat stress-induced upregulation of jejunal HSP70 gene expression, suggesting that FIS reduced the heat responses of rats exposed to high temperature. Similar results observed in a scrotal hyperthermia model showed that administration of FIS decreased the gene expression of HSP72 (Pirani et al., 2021). Heat stress results in increased intestinal permeability in animals. The serum DAO activity is recognized as a sensitive marker for monitoring alterations in intestinal barrier permeability (Cheng et al., 2019). In the present study, the serum DAO activity was increased during HS, suggesting that the intestinal barrier function was compromised. In addition, MMP3 expression was analyzed as an indicator of intestinal damage, as has been reported by other authors (Yi et al., 2018); results in this study showed that this parameter was affected by HS, further confirming heat stress-induced intestinal injury. As expected, FIS treatment attenuated the intestinal morphologic damage of heat-stressed rats, as indicated by the increased villus height, villus width, ratio of villus height to crypt depth, and villous surface area. Meanwhile, FIS significantly decreased the circulating DAO activity in response to HS exposure. Thus, our results showed that FIS could act as a potential regulator for improving intestinal morphologic damage and permeability in rats under HS.
Emerging evidence has revealed that an increase in the generation of ROS, such as superoxide anions, hydrogen peroxide and hydroxyl radicals, occurs in individuals exposed to HS, which eventually leads to intestinal damage. As is known, the enzymatic antioxidant defense against excessive ROS has an important function in maintaining redox homeostasis; SOD catalyzes the conversion of superoxide radicals to molecular oxygen and hydrogen peroxide, which is decomposed by CAT and GPX into harmless compounds such as water and oxygen. However, overwhelming ROS will damage DNA, proteins and lipids, even leading to cellular injury and death. The results of our study showed that HS induced increases in the activities of GPX and T-SOD; nevertheless, these increases were inadequate to counteract the oxidative damage in the intestine of rats, as indicated by the increased MDA concentration, which supports the findings of Cheng et al. (2019). In addition, Nrf2 and its target antioxidant enzyme GPX gene expression were upregulated in the intestine of heat-stressed rats, which may explain the increased GPX activity. Accumulating studies have confirmed that HS can result in increased adaptation of Nrf2 and its target antioxidant genes (Zhang et al., 2002; Bhusari et al., 2008). However, the T-SOD activity in the jejunum of heat-stressed rats did not parallel its gene expression, which needs to be investigated further. In this study, FIS administration alleviated the increased GPX and T-SOD activities, MDA content, and Nrf2 and GPX mRNA expression, suggesting that FIS can attenuate intestinal oxidative damage induced by HS. Our data in rats are consistent with an experiment in broilers under HS-induced oxidative stress, in which FIS supplementation improved the circulating redox status (Ogbuagu et al., 2018). The beneficial effects of FIS on the redox status of heat-stressed rats in the present study were attributed to its hydroxyl groups and anti-inflammatory property rather than to enhancement of the antioxidant defense systems. Previous studies have shown that the overproduction of pro-inflammatory cytokines such as TNF-α induced by HS contributes significantly to intestinal tissue necrosis and dysfunction (Cheng et al., 2019). Similarly, results observed in our study also demonstrated that the protein and mRNA levels of jejunal TNF-α were increased in heat-stressed rats, which may be attributed to the up-regulation of TLR4 gene expression. TLR4 is a well-known pattern recognition receptor, stimulation of which triggers the biosynthesis and release of inflammatory cytokines, including IL6 and TNF-α (Shi et al., 2006). Our results are in accordance with previous studies in different animal models, such as broilers, mice (Mohyuddin et al., 2021) and rats (Cheng et al., 2019), in which HS increased the TLR4 mRNA abundance and its targeted inflammatory cytokine production in the intestine. In addition, in the present study, the upregulated expression of jejunal IL10 mRNA during heat exposure may be due to the fact that heat stress induces inflammation, and cells undergoing inflammation respond by producing anti-inflammatory cytokines. These results further support the notion that heat stress-induced jejunal inflammation could be due to the increased TLR4 mRNA expression.
As expected, FIS counteracted the increased TNF-α concentration and the upregulated gene expression of TLR4 and TNF-α in the jejunum of heat-stressed rats, suggesting that FIS could play a positive role in inhibiting jejunal inflammation. Likewise, FIS has been reported to reduce the colonic protein expression of the pro-inflammatory cytokines TNF-α, IL6, and IL-1β in colitis mice subjected to dextran sulphate sodium (Sahu et al., 2016). On the other hand, in this work, FIS treatment upregulated the transcription of the IL10 gene in the jejunum of rats exposed to HS. IL10 is considered a potent inhibitor of many pro-inflammatory cytokines produced by monocytes and dampens many inflammatory responses (Patel and Davidson, 2014). Moreover, in the current study, FIS administration also inhibited the MPO activity in the jejunum of heat-stressed rats. The MPO system of neutrophils plays a critical role in intestinal mucosal inflammation. Additionally, MPO, which catalyzes the production of the cytotoxic oxidant hypochlorous acid, has been implicated as a participant in intestinal damage under many inflammatory conditions (Hampton et al., 1998; Nicholls and Hazen, 2005). Hypochlorous acid can react avidly with cellular bio-macromolecules such as proteins, lipids and DNA, which consequently contributes to oxidative stress and inflammation in tissues including the small intestine (Smith, 1994). Thus, the antioxidant property of FIS is partly attributable to the decreased MPO activity. Taken together, the anti-inflammatory property of FIS probably results from the downregulated TLR4 mRNA expression, upregulated IL10 mRNA abundance and inhibited neutrophil MPO system. CONCLUSION The data obtained from the present study indicate that FIS administration confers protection against heat stress-induced intestinal damage, partly by mitigating oxidative stress and inflammation via the downregulated TLR4 mRNA expression, upregulated IL10 mRNA abundance and inhibited neutrophil MPO system. This study offers a useful nutritional strategy for improving the health status of individuals exposed to HS.
4,954.6
2023-01-01T00:00:00.000
[ "Materials Science" ]
Isocitrate dehydrogenase 1 (IDH1) mutation-specific microRNA signature predicts favorable prognosis in glioblastoma patients with IDH1 wild type Background To date, no prognostic microRNAs (miRNAs) for isocitrate dehydrogenase 1 (IDH1) wild-type glioblastoma multiforme (GBM) have been reported. The aim of the present study was to identify a miRNA signature of prognostic value for IDH1 wild-type GBM patients using the miRNA expression dataset from The Cancer Genome Atlas (TCGA). Methods Differential expression profiling analysis of miRNAs was performed on samples from 187 GBM patients, comprising 17 mutant-type IDH1 and 170 wild-type IDH1 samples. Results A 23-microRNA signature specific to the IDH1 mutation was revealed. Survival data were available for 140 of the GBM patients with wild-type IDH1. Using these data, the samples were characterized as high-risk or low-risk according to the ranked protective scores calculated from the 23 miRNAs in the signature. The 23 IDH1 mutation-specific miRNAs were then classified as risky-group or protective-group miRNAs based on the significance analysis of microarrays d-score (SAM d-value; positive or negative). The risky-group miRNAs were expressed more in the high-risk samples, while the protective-group miRNAs were expressed more in the low-risk samples. Patients with high protective scores had longer survival times than those with low protective scores. Conclusion These findings show that the IDH1 mutation-specific miRNA signature is a marker of favorable prognosis in primary GBM patients with wild-type IDH1. Glioblastoma (GBM, WHO grade IV glioma) is the most malignant brain tumor in adults. Even after treatment with surgical resection and radiotherapy plus concomitant chemotherapy, patients with a diagnosis of GBM seldom survive more than 15 months [13]. A number of molecular markers for GBM associated with diagnosis, prognosis, and treatment have been identified. Somatic mutations in IDH1 have been identified in GBM patients, especially in secondary GBM, which evolves from lower-grade gliomas [14]. Several miRNA signatures associated with IDH1 mutations have been revealed via miRNA expression profiling, and better outcomes have been predicted for GBM patients with IDH1 mutations [1]. However, to date, no valuable prognostic miRNA signatures have been reported for patients with wild-type IDH1 GBM. In the present study, we used the GBM miRNA dataset from The Cancer Genome Atlas (TCGA, http://cancergenome.nih.gov/) and selected miRNAs that were differentially expressed between wild-type and mutant-type IDH1 GBM samples. As a result, we successfully identified a 23-miRNA signature, which predicted a better outcome for GBM patients with wild-type IDH1. Samples MiRNA expression data (level 3) and the corresponding survival data for glioblastoma samples were downloaded from The Cancer Genome Atlas (TCGA) data portal. Two mutant-type IDH1 samples and 30 wild-type IDH1 samples were removed during analysis because of unavailable survival information or very short survival time (less than 30 days, probably caused by other lethal factors). Thus, a total of 155 GBM patients, with 15 mutant-type and 140 wild-type IDH1 samples, were enrolled for further analysis. Because the data were obtained from TCGA, further approval by an ethics committee was not required.
Data analysis Differential expression profiling analysis was performed on the GBM miRNA dataset of TCGA using significance analysis of microarrays (SAM), carried out with BRB-ArrayTools developed by Dr. Richard Simon and the BRB-ArrayTools Development Team (available at http://linus.nci.nih.gov/BRB-ArrayTools.html). The differential expression standard was set to 1.5-fold (SAM d-value score greater than 1.5 or less than −1.5), and P-values less than 0.01 were taken as significant. The SAM application calculates a score for each miRNA on the basis of its change in expression relative to the standard deviation of all measurements. To assess the survival prediction value of the selected miRNAs, a protective-score formula for predicting survival was developed based on a linear combination of the miRNA expression levels multiplied by their SAM d-values. MiRNAs from the 155 GBM patients, including 15 mutant-type and 140 wild-type IDH1 samples, that showed large differences in expression between the wild-type and mutant-type IDH1 GBM samples were selected for further analysis. Identification of the 23-miRNA signature Twenty-three miRNAs were identified from the total of 470 GBM miRNAs in TCGA and defined as the IDH1 mutation-specific miRNA signature (Figure 1). Figure 1. The IDH1 mutation-specific 23-miRNA signature: the 23 miRNAs were differentially expressed by more than 1.5-fold in GBM samples with mutant-type IDH1 compared to those with wild-type IDH1. Each of the 23 miRNAs showed significantly aberrant expression in the mutant-type IDH1 samples, and thus they were defined as a 23-miRNA signature specific to the IDH1 mutation. Assessing protective scores To assess the survival prediction value of the 23-miRNA signature, protective scores were calculated for all enrolled GBM patients. The 140 patients with wild-type IDH1 were ranked according to the protective score values for the 23-miRNA signature along with the corresponding survival data (Figure 2B and 2C). Using the 60th-percentile protective score as a cutoff, the 140 wild-type IDH1 samples were divided into two groups, a high-risk group (corresponding to the low-score group) and a low-risk group (corresponding to the high-score group) (Figure 2A and 2C). The 23 miRNAs were divided into two groups according to the SAM d-value (positive or negative): the risky group and the protective group, with 16 and seven miRNAs, respectively (Figure 2C). Protective miRNAs were expressed at higher levels in the low-risk group, while risky miRNAs tended to be expressed more in the high-risk group (Figure 2C). We also compared the overall survival of the patients in the mutant-type (15 samples) and wild-type IDH1 groups (140 samples) and found a statistically significant difference between them (Figure 3A, P = 0.0001). Kaplan-Meier curves for the low-score and high-score groups are shown in Figure 3B. A statistically significant difference was observed between the two groups (P = 0.0045). Patients in the high-score group had better outcomes than patients in the low-score group. Thus, the 23-miRNA signature, which is specific to the IDH1 mutation in GBM samples, may be a marker of favorable prognosis in wild-type IDH1 GBM patients. Discussion Primary GBM is considered to be the most lethal brain tumor in adults. The prognosis is variable, with some patients surviving less than a year and others surviving for three years or more [13].
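The protective-score construction and risk grouping described above (a linear combination of signature-miRNA expression weighted by SAM d-values, split at the 60th percentile) can be sketched as follows. The expression matrix and d-values are random placeholders, not TCGA data, and the sign-to-group mapping here is illustrative only.

```python
import numpy as np

# Sketch of the protective-score grouping described above: each patient's score
# is a linear combination of the 23 signature miRNAs' expression levels weighted
# by their SAM d-values, and the 60th percentile splits low- vs high-score groups.
# The expression matrix and d-values below are random placeholders, not TCGA data.

rng = np.random.default_rng(3)
n_patients, n_mirnas = 140, 23
expression = rng.normal(0, 1, size=(n_patients, n_mirnas))  # rows: patients
sam_d = rng.normal(0, 2, size=n_mirnas)                     # signed SAM d-values

protective_score = expression @ sam_d   # linear combination per patient

cutoff = np.percentile(protective_score, 60)
high_score = protective_score >= cutoff     # high-score = low-risk group
low_score = ~high_score                     # low-score = high-risk group
print("low-risk (high-score) patients:", int(high_score.sum()))
print("high-risk (low-score) patients:", int(low_score.sum()))

# Risky vs protective miRNAs are distinguished by the sign of the SAM d-value;
# the paper's mapping of sign to group is followed there, so only counts are shown.
print("miRNAs with positive d-value:", int((sam_d > 0).sum()))
print("miRNAs with negative d-value:", int((sam_d < 0).sum()))
```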
To date, only IDH1 mutation and O-6-methylguanine-DNA methyltransferase (MGMT) promoter methylation have been identified as stable prognostic indicators for GBM patients across various studies. IDH1 mutations were reported to have a strong positive correlation with overall survival in secondary and primary GBMs, although the mutation rate in primary GBM is much lower than that in secondary GBM [14]. Through differential miRNA expression profiling, we identified a 23-miRNA signature that was implicated in outcomes for GBM patients with mutant-type IDH1. Figure 2. Protective scores for the 23-miRNA signature and survival days in GBM patients with wild-type IDH1. A. Ranked protective scores. B. Survival days for the 140 GBM patients. C. The risky and protective groups for the 23 miRNAs; risky miRNAs were expressed more in the high-risk group and protective miRNAs were expressed more in the low-risk group. Nevertheless, until now, no miRNA signature that could serve as an indicator for GBM patients with wild-type IDH1 has been available. Here, we used a scoring method to measure the relative expression levels of the 23 miRNAs. We then divided all of the samples into high-score and low-score groups, as shown in Figure 2. We found that the high-score group had better clinical outcomes than the low-score group. According to the SAM d-value, these miRNAs were classified into a risky miRNA group and a protective miRNA group. Seven miRNAs were designated as risky miRNAs, for which higher expression indicated worse outcomes, and 16 miRNAs were designated protective miRNAs, for which higher expression indicated better outcomes for GBM patients. A recent study, which examined the expression data of 305 miRNAs from 222 GBM samples in the TCGA dataset, identified a 10-miRNA prognostic signature [15]. The 10-miRNA signature is partially consistent with the 23-miRNA signature that we identified in the present study. The two signatures share six miRNAs, including three protective miRNAs (miR-20a, miR-106a, miR-17-5p) and three risky miRNAs (miR-221, miR-222, miR-148a). To some extent, the overlap between the two miRNA signatures verifies the potential clinical predictive significance of, at least, the shared 6-miRNA set. A possible explanation for why the two signatures did not agree exactly may be differences in the target population and/or the entry criteria. In another study, a 5-miRNA signature was identified as a prognostic biomarker in Chinese patients with primary GBM [1]. This 5-miRNA signature (miR-181d, miR-518b, miR-524-5p, miR-566, and miR-1227) was significantly associated with improved overall survival for GBM patients. Interestingly, none of the five miRNAs in this signature overlapped with the miRNAs in our 23-miRNA signature, probably because different patient populations and datasets were used in the two studies. We further investigated the six miRNAs that were common to the 10-miRNA and 23-miRNA signatures. Some studies have shown that miR-183 was significantly downregulated in osteosarcoma and may subsequently promote migration, invasion, and recurrence of osteosarcoma [16]. In our study, we found that miR-183 was a favorable predictor for GBM, which is consistent with its effect in osteosarcoma. In advanced colorectal cancer, miR-148a expression was the most significantly downregulated, which resulted in a worse therapeutic response and poor overall survival [17]. A similar effect was found in GBM, and, in our study, miR-148a was classified as one of the risky biomarkers for GBM.
In a study of adult T-cell leukemia, miR-155 was identified as a novel unfavorable biomarker of disease progression and prognosis [18]. Another study reported that elevation of plasma miR-155 was associated with shorter survival times in non-small cell lung cancer [19]. These findings are consistent with our results for the function of miR-155. MiR-221 and its paralogue miR-222 are known inhibitors of angiogenesis, which act by blocking cell migration and proliferation in endothelial cells [20,21]. Other studies have reported different functions for miR-221, suggesting that miR-221 is also associated with induction of angiogenesis [22,23]. In our research, miR-221 and miR-222 were identified as unfavorable indicators for GBM. In a study of chronic lymphocytic leukemia, miR-34a and miR-17-5p were found to be downregulated in patients with tumor protein p53 (TP53) abnormalities, indicating that higher expression levels of miR-34a and miR-17-5p may predict a better clinical outcome for these patients [24]. In TCGA, the IDH1 mutant-type samples account for only 10-16% of the GBMs, most of which are secondary GBMs. Our results provide a robust clinical prognostic indicator for GBM patients with wild-type IDH1. However, it remains unclear exactly how this 23-miRNA signature functions in GBM. Clearly, the mechanisms behind the roles of these miRNAs require further investigation. Figure 3. Overall survival of GBM patients in the mutant-type and wild-type IDH1 groups. A. Patients with mutant-type IDH1 had much better outcomes than those with wild-type IDH1. B. Kaplan-Meier curves for the low-score and high-score groups: among the 140 IDH1 wild-type GBM patients, those in the high-score group had much longer overall survival times than those in the low-score group. Better insights into how the 23-miRNA signature functions in GBM will potentially contribute to an understanding of the genetic aberrations involved in tumor genesis, progression, and/or response to treatment. In particular, the routine examination of miRNA signatures has a number of significant advantages over microarray methodologies: analysis can be undertaken straightforwardly, rapidly and cost-effectively, and it is much more applicable and feasible for testing in clinical practice than whole-genome miRNA profiling. Furthermore, these profoundly aberrantly expressed miRNAs can serve as potential molecular targets for new therapeutic strategies, subsequently leading to improved outcomes for GBM patients.
2,703.4
2013-08-29T00:00:00.000
[ "Medicine", "Biology" ]
Expedition into Exosome Biology: A Perspective of Progress from Discovery to Therapeutic Development Simple Summary Exosomes symbolize membrane-enclosed entities of endocytic origin. They play an important role in intercellular communication by shuttling proteins, nucleic acids, etc., between cells of different tissues and organs. Recent studies have revealed an interplay between cells and exosomes, thereby highlighting their importance in disease diagnosis and their possible implications for use in therapeutics. They are currently being explored for the strategic development of platforms for target-specific delivery of therapeutics. This review summarizes the composition, biogenesis, and trafficking of exosomes in different cellular backgrounds and explores their multifarious role as drug delivery vehicles towards achieving correct functionality and efficacy of the therapeutic molecules. Additionally, it discusses genetic engineering platforms for designing optimal delivery modules for application in the delivery of drugs as part of anticancer therapy. Abstract Exosomes are membrane-enclosed distinct cellular entities of endocytic origin that shuttle proteins and RNA molecules intercellularly for communication purposes. Their surface is embossed by a huge variety of proteins, some of which are used as diagnostic markers. Exosomes are being explored for potential drug delivery, although their therapeutic utility is impeded by gaps in knowledge regarding their formation and function under physiological conditions and by the lack of methods capable of shedding light on intraluminal vesicle release at the target site. Nonetheless, exosomes offer a promising means of developing systems that enable the specific delivery of therapeutics in diseases like cancer. This review summarizes information on donor cell types, cargoes, cargo loading, routes of administration, and the engineering of exosomal surfaces with specific peptides that increase target specificity and, as such, therapeutic delivery. Introduction Extracellular vesicles (EVs) represent a heterogeneous population of membranous structures of varying sizes and cellular origin [1]. Their secretion into the extracellular milieu provides a means of mediating intercellular communication. Exosomes are a subset of EVs that were introduced to the scientific world as vesicles released from mature blood reticulocytes expressing the transferrin receptor [2]. Exosomes develop intracellularly as intraluminal vesicles within multivesicular bodies, as discussed under Biogenesis below. Composition Exosomes constitute a subcomponent of the secretome [14], and their composition is dictated by the functional status of the cell (rested, stimulated, transformed, or stressed) [13]. Although the composition of exosomes is highly dependent on their origin, they all contain specific sets of endocytic proteins and nucleic acids (DNA, RNA), and are enclosed by a membrane of plasma membrane origin ( Figure 1). A wide range of methods are employed to separate exosomes from cell culture and body fluids ( Table 1). Analyses of their composition by fluorescence-activated cell sorting (FACS), Western blot, and mass spectrometry have revealed them to carry a series of tetraspanins (CD9, -26, -58 and others), RAB proteins, heat shock proteins (Hsp70, -90), endosome-associated proteins (Alix, TSG101), annexins, cytoskeletal elements (actin, tubulin), the lysosomal protein Lamp2b, the intercellular adhesion molecule ICAM-1, and co-stimulatory molecules of T-cell origin such as CD86 [15][16][17][18].
Surface proteins such as heat shock protein, α4β1 (a surface-localized protein) on reticulocytes, A33 on enterocytes, and P-selectin on platelets are signatures of cell-specific exosomes [19][20][21]. Proteomic analyses of exosomes have shown them to possess surface-anchored sheddases, such as ADAM (a disintegrin and metalloproteinase), matrix metalloproteinases (MMPs), and MHC II molecules [22][23][24]. In addition to their role in extracellular matrix (ECM) remodeling, MMPs have been associated with intra- and intercellular communication via the proteasomal processing of exosome contents [25]. Enzymatic proteins, such as pyruvate kinases and peroxidases, have also been reported in exosomes derived from human dendritic cells (DCs) and enterocytes. In addition to displaying an array of intracellular proteins, exosomes contain DNA and a wide range of non-coding RNAs (miRNAs, lncRNAs, and circRNAs). lncRNAs have emerged as regulatory RNA molecules with functions often related to cell differentiation and cell cycle regulation, whereas circRNAs act as competitive inhibitors of miRNAs during regulation of protein function [12,[26][27][28]. Furthermore, exosome membranes are rich in lipids such as phosphatidylserine and cholesterol [29]. At the time of writing, the exosome database (http://www.exocarta.org; accessed on 20 December 2020) contained 9769 entries for proteins, 3408 for mRNAs, 2838 for miRNAs, and 1116 for lipids. The presence of such a wide range of proteins, mRNAs, and miRNAs suggests enormous heterogeneity in exosomal contents, local expression of proteins and lipids, and the uniqueness of exosomes. Biogenesis The most accepted model of exosome biogenesis involves membrane orientation and inward budding. According to this model, budding events during exosome formation occur in a reverse membrane orientation, similar to that observed during apoptosis [22,46,47] and the release of milk fat globules from the epithelial cells of mammary glands [48]. Budding events during exosome formation involve phosphatidylserine flipping from the inner to the outer plasma membrane leaflet. Furthermore, electron microscopic observations have revealed fusion profiles of late endosomes with the plasma membrane of antigen-presenting cells (APCs [15]), cytotoxic T-lymphocytes (CTLs [49]), dendritic cells (DCs [50]), and platelets [51]. Exosome production occurs in an active or passive manner, that is, with or without protein involvement. Active production involves a heterooligomeric protein complex referred to as the endosomal sorting complex required for transport (ESCRT) and fusion of multivesicular bodies (MVBs) with the plasma membrane to enable exosome release. Ubiquitination is one of the sorting mechanisms that results in the incorporation of endosomal proteins into MVBs. The loading of monoubiquitinated entities into MVB compartments is achieved by four different ESCRTs (ESCRT-0, I, II, and III) that interact with accessory proteins such as Vps-4 (vacuolar protein sorting-4) and ALIX (programmed cell death 6 interacting protein, also called PDCD6IP) [52][53][54]. A complex comprising ESCRT-0, HRS (hepatocyte growth factor regulated tyrosine kinase substrate), and STAM1 (signal transducing adapter molecule 1) aids in the recognition of ubiquitinated transmembrane proteins for incorporation into the endosomal membrane [54]. Reportedly, ESCRT-I and II recruitment drives membrane budding, whereas ESCRT-III is required for bud scission [54][55][56].
The recruitment of ESCRT-III by ESCRT-I and II occurs with the involvement of ALIX, a protein that causes simultaneous binding of ESCRT-III to TSG101 (tumor susceptibility gene 101, a component of ESCRT-I) [57]. After exosome membrane formation, ESCRT dissociates from the MVB membrane and contributes to the transport of new cargoes. The ATPase VPS-4 (adenosine triphosphatase vacuolar protein sorting-4) is required for the dissociation of ESCRT from the MVB membrane, which represents the first step of the ESCRT recycling machinery [54,58]. Exosome Trafficking Fusion of MVBs with the plasma membrane results in the release of exosomes into the extracellular milieu. Although the mechanism that drives this fusion is unknown, the secretion of acetylcholinesterase-tagged exosomes from reticulocytes was found to depend on the function of VAMP-7 (vesicle-associated membrane protein 7) [70]. Recent studies on exosomes carrying the WNT3A morphogen revealed that their release is dependent on the R-SNARE (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) protein Ykt6 [71][72][73]. Furthermore, MVB-plasma membrane fusion was found to be mediated by a ternary SNARE complex formed by v-SNAREs (vesicle SNAREs) and t-SNAREs (target SNAREs) [73-77]. After the two membranes make contact, the energy barrier required for their fusion is overcome by the SNARE complex through its association with the V0 subunit of the V-type ATPase. The ability of the V-type ATPase to overcome this energy barrier was found to be independent of its proton pump activity [78]. Other key regulatory components of the exosome secretion pathway include Rab proteins, e.g., Rab11 [82]; in another study, calcium transients were found to trigger exosome release, with Rab27 and Rab35 acting as regulatory GTPases for exosome secretion [83][84][85][86][87][88]. In addition, Alix and Vps4 (components of the ESCRT pathway) were reported to play an important role in exosome secretion [89], which was found to be regulated by P2X receptor activation by LPS-induced ATP on monocytes and neutrophils, and by TLR4 activation on dendritic cells [9,10,79,90]. Immunomodulatory Effect of Exosomes Insights into the role of exosomes have revealed their importance as regulators of different biological processes under physiologic and pathologic conditions. The release of exosomes into the extracellular milieu influences cellular morphology by interfering with cell signaling components and by modulating recipient gene expression, function, and the cell differentiation program [103][104][105][106]. In addition, exosomes play a crucial role in intercellular communication and in the pathogenesis of several diseases, as they can transfer signals (cytokines, proteins, lipids, nucleic acids, and infectious agents) from cells to nearby or distant locations [91,107,108]. In one study, exosomes derived from immunocytes were found to contain a minimum of 98 immunogenic molecules [109]. The immunological functions of exosomes are highly dependent on their membrane proteins and cells of origin, and their stability in the extracellular space enables them to carry cargoes to distant cells [110]. Furthermore, the regulatory effects of exosomes involve cross-talk between different immune cells, for example, between B-lymphocyte-derived exosomes and CD8 + cytotoxic cells [111] and between T-cell-derived exosomes and DCs [112][113][114][115][116][117][118].
Here, we summarize the involvement of exosomes derived from mesenchymal stem cells (MSCs) and immune cells in cell-to-cell communication and in immune system stimulation and suppression ( Figure 3). MSC-Derived Exosomes Mesenchymal stem cells (MSCs) are multipotent stromal cells sourced from bone marrow, adipose tissues, placenta, or umbilical cord ( Table 2). Their regenerative capacities underlie their importance in immune modulation [106,[119][120][121][122]. The immunomodulatory effects of MSC-derived exosomes on peripheral blood mononuclear cells (PBMCs) have been well established. Exosomes from healthy human bone marrow are essential for the interaction between MSCs and PBMCs. Furthermore, MSC-derived exosomes can modulate the activities of lymphocytes, macrophages, neutrophils, DCs, and natural killer (NK) cells [123]. The ability of MSC-derived exosomes to inhibit the secretion of proinflammatory cytokines such as interleukin-6 (IL-6), interleukin-1beta (IL-1β) [124], and tumor necrosis factor-alpha (TNF-α) and to increase the production of anti-inflammatory factors such as transforming growth factor-beta (TGF-β) and interleukin-10 (IL-10) has been well described [125]. In addition, MSC-derived exosomes induce the conversion of T-helper-1 (Th1) to T-helper-2 (Th2) cells and reduce the potential of T-cells to differentiate into effector T-cells (Th17, capable of producing IL-17). The exosomes induce the proliferation and differentiation of CD4 + cells into Th2 cells and thereby suppress the differentiation of Th1 to Th17 cells, which are known to participate in autoimmune responses. Furthermore, an increase in regulatory T-cells (Tregs) was also observed when Th cells interacted with exosomes. Together, studies have revealed that MSC-derived exosomes have favorable immunomodulatory properties [106,[120][121][122][123][124][125][126], and thus, they are considered potential therapeutic candidates in many pathological contexts and a convenient means of delivering therapeutics, enzymes, and genes to targeted cells [127]. Interestingly, recent evidence suggests that MSC-derived exosomes offer a potentially safe means of treating graft-versus-host disease (GvHD) [128]. DC-Derived Exosomes Exosomes secreted by immune cells such as mature DCs displaying MHC molecules on their surface can act as antigen-presenting vesicles, thereby activating lymphocytes and initiating innate or adaptive immune responses [118,134,140]. DC-derived exosomes can bind antigenic peptides either by direct capture or by indirect antigen processing through parent DCs [141]. DC-derived exosomes displaying MHC II molecules mediate CD4 + helper cell activation by interacting with lymphocyte function-associated antigen 1 (LFA-1) expressed on the surface of T-cells [142]. In the context of their antigen-presenting properties, DC-derived exosomes have a greater immunostimulatory effect than intact DCs [143], and in the absence of antigen-presenting cells (APCs), exosomes can activate CD8 + lymphocytes, which is consistent with a report that exosomes contain high levels of class I MHC proteins and ICAM-1 [110]. On the other hand, immature DC-derived exosomes have opposite effects on the immune system, as their cargoes are enriched with self-antigens and anti-inflammatory factors that might promote or induce immune tolerance.
The immature DC-derived exosomes were also found to contain low levels of MHC II and co-stimulatory CD86 + molecules, and thus, were incapable of inducing an immune response and instead had immunosuppressive effects [104,135]. In the context of allograft transplantation, immature DC-derived exosomes have been shown to promote allograft survival by secreting the anti-inflammatory cytokine IL-10, and thus, suppressing T-cell proliferation [144]. It appears that DC-derived exosomes participate in the modulation of helper and cytotoxic T-cell immune responses, and thus, maintain immune tolerance. NK-Derived Exosomes NK cells are innate immune cells that play a central role in immune response. These cells exhibit natural cytotoxicity that enables them to lyse malignant and virus-infected cells without prior sensitization [145]. Also, activated NK cells can mediate immune response indirectly by secreting pro-inflammatory cytokines and chemokines that modulate adaptive cell-mediated immune response [146]. It has also been reported that NK-derived exosomes have anti-tumor effects similar to those of NK cells [136]. In a recent study, activated NK cell-derived exosomes loaded with cytotoxic proteins, such as perforin (PFN), granulysin (GNLY), and granzymes (Gzm-A and Gzm-B), induced caspase-dependent apoptosis on entry into target cells [137]. A comparative study on the effect of resting and activated NK cells on tumor cells revealed that activated NK cell-derived exosomes contain high levels of FasL (Fas ligand) and perforin molecules with cytotoxic lysing activity against cancer cells, especially in hematologic malignancies such as leukemia and lymphoma [147]. Furthermore, it has been suggested that understanding the cytotoxic activities of NK-derived exosomes at the molecular level would undoubtedly aid in the development of immunotherapeutic strategies for the treatment of cancers and viral infections [148][149][150]. Treg-Derived Exosomes Treg cells (suppressive T-cells) compose a subset of T-cells that play a crucial immunomodulatory role by maintaining self-antigen tolerance and preventing autoimmunity through inhibition of the proliferation of effector T-cells (i.e., CD4 + and CD8 + cells) [151]. Like other immune cells, Treg cells are capable of releasing exosomes, which markedly outnumber those released by other T-cell subpopulations [152][153][154]. The secretion of exosomes by Treg cells is highly dependent on hypoxia, calcium levels, and IL-2 [155][156][157]. Recent studies on the proteomic profile of Treg-derived exosomes have shown that these exosomes contain most components of the parent cell and transport several molecules, such as miRNAs, CD73 + , CD25 + , and CD152 + (also known as cytotoxic T-lymphocyte-associated protein 4, CTLA-4), with marked immunomodulatory effects [139,158]. Recently, Treg-derived exosomes were reported to be enriched with miRNAs (e.g., miRNA-155, Let-7b, and Let-7d) as compared with parental Tregs, and when transferred to conventional effector cells, these specific miRNAs suppressed IFN-γ production and the expression of effector genes, thereby inhibiting T-cell proliferation [139]. An analysis of Treg-derived exosomes showed high expression of CD73 + , which performs an essential function in immune modulation by enhancing the production of adenosine (an anti-inflammatory modulator) that potently suppresses the proliferation and function of T-cells and blocks the production of IFN-γ and IL-2 [158].
Exploiting Exosomes for Therapeutics The utilization of exosomes as drug delivery vehicles requires a proper understanding of their production in different cellular backgrounds to achieve correct functionality and efficacy of the therapeutic cargoes. The following section summarizes the considerations that should be borne in mind to achieve targeted drug delivery. Choice of Cells In addition to stability in body fluids, reduced immune-stimulatory activity and minimal inflammatory response are prerequisites of therapeutic exosomes, and correct donor cell choice is a stepping stone toward achieving these developmental targets (Table 3).
Table 3 (excerpt). Therapeutic cargoes, loading methods, reported effects, and references (donor cell noted where given):
Anti-miR-9 (transfection); reversal of chemoresistance [160]
miR-146b (transfection); reduction of progression and metastasis [161]
miR-133b (transfection); suppression of progression [162]
PLK-1 siRNA (electroporation); induction of apoptosis and necrosis [163]
Paclitaxel (incubation); growth inhibition of human pancreatic adenocarcinoma cells [164]
BACE1 siRNA (electroporation); knockdown of the targeted gene after siRNA delivery to the brain for AD [165]
VEGF siRNA (electroporation); suppression of tumor growth in breast cancer [166]
GAPDH siRNA (electroporation); knockdown of the targeted gene after siRNA delivery to the brain for AD [165]
Doxorubicin (electroporation); specific drug delivery to the tumor site and inhibition of tumor growth [167]
HEK293 cells: Let-7a mimic (transfection); targeting of EGFR-expressing cancerous tissues with nucleic acid drugs for breast cancer [168]
HEK293T cells: BCR-ABL siRNA (transfection); overcoming pharmacological resistance in CML cells [169]
Mouse lymphoma cells: curcumin (mixing); increased anti-inflammatory activity [170]
Human cell lines such as HeLa and HEK293 and murine melanoma cell lines like B16-F1, B16-F10, and B16-BL6 are commonly used to produce exosomes [168,[171][172][173][174][175][176][177][178][179]. In terms of immunogenic properties, immature DCs act as a suitable donor cell alternative for exosome production [135]. Additionally, surface display of locally expressed peptides enables exosomes to be used for targeted drug delivery [165,167]. DC-derived exosomes engineered to locally express rabies virus glycoprotein have been utilized to deliver siRNA across the blood-brain barrier in murine models [165]. However, despite their attractive characteristics, large-scale production for clinical use is restricted by technical difficulties [167]. To scale up production for clinical use, MSCs offer a possible alternative, as they produce large numbers of exosomes [160,161,[180][181][182]. The use of MSC-derived exosomes to deliver drugs to glioblastoma (GBM) xenograft tumors significantly reduced tumor size [161]. Although exosomes provide a platform for developing new therapeutic strategies, scale-up of MSC-derived exosome production is mostly hampered by technical difficulties [183,184], and manufacturing challenges remain to be properly addressed [7]. In this regard, a combination of tissue-specific targeting and scalability to large-scale production appears to be an appropriate developmental target. Choice of Therapeutic Cargoes Several therapeutic cargoes have been loaded into exosome-based delivery systems. The ability of exosomes to carry interfering RNAs [185,186] and deliver therapeutic cargoes offers a potential means of treating different cancers [187]. Several research groups have investigated the use of exosomes to carry siRNA for gene-based therapy [165,174,176,[187][188][189][190].
Exosome-mediated delivery of siRNA not only reduces the risk of degradation, but also substantially increases bioavailability and delivery efficiency. When MAPK1 siRNA was delivered using plasma- or cell-derived exosomes, a significant reduction in MAPK1 gene expression was observed in peripheral blood mononuclear cells [174]. In fibrosarcoma cells, gene knockdown by exosome-mediated delivery of RAD51 or RAD52 siRNA reduced viability and proliferation [176]. In a similar study, exosomes carrying siRNAs against glyceraldehyde-3-phosphate dehydrogenase (GAPDH; a housekeeping gene) or β-site APP cleaving enzyme 1 (BACE1; an Alzheimer's disease-associated gene) downregulated the targeted protein levels in neurons [165]. Also, the risk of hepatitis C virus (HCV) infection was reduced in liver cells treated with exosomes containing short hairpin RNAs (shRNAs) against the viral entry receptor and the replicative machinery of HCV [49,176]. Dysregulation of miRNA expression profiles is a characteristic of a large number of cancers [191,192], and subsequent studies reported that exosome-based targeted delivery of miRNAs suppressed symptoms in different disease models [185]. Encapsulation of miR-150 in exosomes suppressed T-cell populations and reduced endothelial cell migration, and treatment of T-cells with the conditioned media of miR-122-transduced HEK293T cells increased miR-122 gene expression several-fold and suppressed hepatic inflammation, necrosis, and fibrosis [172,193,194]. Exosome-based delivery of miR-214 to hepatic stellate cells suppressed fibrosis by downregulating CCN2 expression [195,196], and miRNAs had tumor-suppressive effects when miR-143 or let-7a were delivered to prostate and breast cancers in vivo [168,173]. However, no effect was observed when normal prostate epithelial cells were treated with exosome-encapsulated miR-143 [173]. MSC-exosome-mediated delivery of miR-133b was found to be effective for treating brain ischemia in mice [182], and exosome-mediated miRNA transfer from activated immune cells effectively induced epigenetic changes that influence the convalescent plasma response to the virus in COVID-19 [197]. In a systematic review, Khalaj et al. [198] reported that exosomes extracted from mesenchymal stem cells derived from bone marrow or umbilical cord ameliorate lung injury in experimental models by (1) attenuating inflammation (reducing pro-inflammatory cytokine levels, neutrophil infiltration, and macrophage polarization); (2) regenerating alveolar epithelium (by reducing apoptosis and stimulating surfactant production); (3) reducing microvascular permeability (by upregulating endothelial cell junction protein levels); and (4) preventing fibrosis (reducing fibrin production). The authors attributed these differential effects to the release of EV cargoes and identified several of the factors responsible, including miR-126, miR-30b-3p, miR-145, miR-27a-3p, syndecan-1, hepatocyte growth factor, and angiopoietin-1 [198]. Exosomal delivery of miR-146b inhibited tumor growth in a xenograft model of GBM [161,199], and the delivery of anti-miRs against miR-9 (an oncogenic miRNA) to GBM cells increased their susceptibility to chemotherapeutics like temozolomide [160]. This outcome highlights the communicative role played by exosomes in the interaction between MSCs and GBM cells, irrespective of the presence of gap junctions. These observations show that the exosomal delivery of miRNAs offers a promising means of delivering anti-cancer and anti-COVID-19 agents.
Nevertheless, knowledge of the mechanisms of miRNA loading into exosomes would undoubtedly improve results. In particular, we suggest that investigations be conducted to identify and characterize the EXO-motifs that direct the targeted exosome-based delivery of miRNAs. Exosomes containing chemotherapeutics like doxorubicin have shown growth-inhibitory effects on xenografted breast and colon adenocarcinoma tumors [167,200]. The enhancement in the efficiency of chemotherapeutic agents like doxorubicin achieved by direct delivery with immature DC-derived exosomes effectively reduced side effects on non-targeted organs, especially the heart [167,201]. An exosome preparation of JSI-124 (a STAT3 inhibitor) effectively reduced tumor volume in a murine model of GBM [170,202,203], and notably, exosomes containing 5-fluorocytidine (a prodrug) facilitated its conversion to 5-fluorouracil and 5-fluoro-deoxyuridine and resulted in tumor cell apoptosis in an orthotopic model of schwannoma [175,204]. Furthermore, exosome-based co-treatment offers another means of treating malignancies, and exosomes loaded with superparamagnetic iron oxide nanoparticles (SPIONs) were shown to have potential use as an MRI cancer imaging agent [177,205]. Exosome Loading Procedures The loading of therapeutic cargoes into exosomes involves the classical methods of incubation, electroporation, and transfection reagents, and the modern techniques of donor cell transfection or activation [177,188]. However, simple incubation with a cargo is sometimes sufficient to load exosomes ( Table 3). The best example of this is provided by curcumin, a natural compound with an anti-inflammatory effect, which can be loaded by simple incubation for 5 min at 22°C, presumably because curcumin rearranges membrane lipids and alters membrane fluidity [206,207]. The encapsulation efficiency for the drug doxorubicin was higher for exosome-mimetic bioengineered nanovesicles generated from filtered monocytes or macrophages [194,200]. On the other hand, the loading of small-sized cargoes, such as miR-150, was efficiently achieved by simple incubation [200,208]. Efficient loading of therapeutic cargoes into exosomes can also be achieved by electroporation at 150-700 V [165,186], but the effectiveness of cargo loading depends on the donor cell type [167,174,176], the exosome type, and cell concentrations [165,167,177,209]. Quantification of delivery using fluorescently labeled siRNA revealed higher uptake than that achieved by chemical reagent-based transfection [163,174]. An analysis of cell viability after electroporation of exosomes with therapeutic cargoes was used to investigate the efficiency of the technique [176]. Although it seems to be a suitable clinical option, electroporation is known to have adverse effects on the integrity of exosomes and their cargoes; for example, it has been reported to induce exosome and siRNA aggregation. In fact, even after optimizing delivery parameters and using trehalose-containing medium to minimize exosome aggregation [177], siRNA retention in exosomes was reported to be below 0.05% [210]. Nevertheless, the loading of drugs like doxorubicin by electroporation is still considered a better option than incubation or chemically based transfection methods, because it better maintains the functionality of the drug [167]. The use of chemical-based transfection methods to load therapeutic cargoes such as siRNA into exosomes has seen restricted use; loading efficiencies are generally lower than those achieved with HiPerFect transfection reagent-based methods [174,176].
Although Lipofectamine 2000-based siRNA loading was reported to alter gene expression in recipients, leftover micelles generated during exosome preparation prevented quantification of the effects of siRNA cargoes at target sites [174,176]. Transfection of donor cells with appropriate cargoes to obtain cargo-loaded exosomes appears to offer an acceptable means of therapeutic exosome production [211]. Because the overexpressed product is destined for secretion, transfection of donor cells with an overexpression construct facilitates entry of the therapeutic cargo into the lumen of exosomes or its display on their surface [161,168]. In most studies, miRNAs are transfected as overexpression constructs in miRNA expression vectors and then loaded into exosomes [161,162,172,173,195]. Exosomes produced from MSCs transfected with a construct carrying miR-146b were found to restrict tumor growth effectively [161]. In a similar study, let-7a-containing exosomes with a surface-expressed targeting peptide efficiently delivered cargo to epidermal growth factor receptor (EGFR)-expressing breast cancer cells [168]. Elevated miR-214 expression achieved by transfecting cells with anti-miRs seems to be a promising alternative to transfecting donor cells with pre-miR-214 [160,195]. Though transfection of donor cells seems appropriate for exosome loading in in vivo studies, engineering cells to express desired surface molecules and carry a maximum therapeutic load is time-consuming. Thus, non-autologous exosome-producing methods are required to generate non-immunogenic exosomes with specific targeting characteristics for clinical use. Studies that used activated donor cells to generate exosomes have shown them to be a less appropriate choice for exosome production, as such exosomes are capable of transferring therapy resistance to drug-sensitive cells via proteins that increase DNA repair and tumor cell survival, along with disposal of pro-apoptotic proteins. Using this methodology, stimulation of THP-1 cells with inflammatory stimulants increased miR-150 levels in vesicles [193], and in another study, co-culture of MSCs with brain extracts from rats that had undergone middle cerebral artery occlusion increased miR-133b levels [182]. Hypoxia is a characteristic of the tumor microenvironment [212][213][214] and is believed to enhance the release of exosomes. Studies that used hypoxic conditions to generate exosomes have revealed them to be enriched with the CD81, CD63, and HSP70 markers [215][216][217]. A hypoxic microenvironment alters the miRNA cargoes of exosomes from different cells [215], and exosomes generated under hypoxic conditions were found to be enriched in IL-8 and IGFBP3 mRNAs and proteins, which promote the proliferation and migration of angiogenic cells in vitro [218,219]. Exosome Administration Routes Conventional routes are used to administer drug-loaded exosomes. In addition to the efforts being made to increase stability during long-term storage, research is also being conducted to identify means of delivering drugs to tumors located in fragile tissues [220,221]. Administration of exosome-based therapeutics via intravenous injection has been commonly used to deliver drugs to the brain, pancreas, and tumors in other tissues [165,167,168,172,198,[222][223][224][225][226], and the endogenous origin of exosomes helps them escape removal by immune cells [227].
Exosome-based delivery of therapeutics increases drug stability and enables high drug loadings in body fluids [227], and the lack of lymphatic drainage and the presence of fewer blood vessels aid in the retention of exosomes in tumorigenic tissues [12,228,229], which enhances their therapeutic efficacies. After intravenous administration, the half-life of exosome-based therapeutic cargo in the circulation was approximately two minutes [178]. The distribution of exosomes to the lungs, liver, spleen, and bone marrow, and their later accumulation in the liver and then the lungs, suggests a clearance mode similar to that of synthetic liposomes [178,230,231]. Accumulation in the liver has also been reported in studies on the administration of EGFR-bearing exosomes with high affinity for hepatic tissues, and in tumor tissues in a xenograft model of breast cancer [168,232]. Despite their rapid clearance from the circulation, therapeutic cargo delivered by exosomes appears to program bone marrow-derived MSCs involved in tumor vasculogenesis [233]. In addition, modifications such as PEGylation, aimed at increasing their half-lives, are still warranted [234]. Intra-tumoral injection (another appropriate administration technique) of exosome-encapsulated therapeutics for the treatment of different cancers resulted in successful reductions in tumor volumes [161,173,175,235,236]. The combined use of intratumoral injection and tumor resection further reduces the risk of tumor recurrence [161,237]. Oral administration of exosomes potently induced intestinal stem cell proliferation after stable passage through the gut in a murine model of colitis [238]. Intraperitoneal administration of curcumin-loaded exosomes increased the bioavailability of the drug by improving its stability in the circulation [170]. Intranasal administration of exosomes encapsulating curcumin or a Stat3 inhibitor for delivery to microglial cells reduced inflammation in the brain [202], and the subcutaneous administration of MHC II-overexpressing exosomes proved effective in murine melanoma [179,202,239]. Exosomes loaded with therapeutic cargo exert their effects within a short time of delivery to the target [165,178]. Adoption of exosomes in clinical settings requires characterization of exosome protein compositions in order to avoid adverse effects in patients. Increased Specificity by Exosome Engineering The expression of targeting peptides or proteins on the exosome surface is a prerequisite for the specific delivery of therapeutic cargoes and for avoiding the adverse effects of chemotherapeutic agents on normal cells surrounding tumors. Although many studies have been performed on the exosome-based delivery of therapeutic cargoes, few have addressed the engineering of exosomes to achieve target-specific delivery [240][241][242][243][244]. Exosome engineering aimed at inserting a peptide correctly into exosomes, while avoiding cleavage of the peptide, is accomplished by expressing the targeting peptide as a fusion product with the surface-localized lysosomal-associated membrane protein 2b (Lamp-2b) [245,246]. This bioengineering approach helps to enhance the uptake of exosomes and, as such, treatment specificity in tissues of interest. An excellent example of this phenomenon is provided by the RVG and iRGD peptides, which, when engineered onto immature DC-derived exosomes, help to target therapeutics to the brain and tumor tissues [165,167].
The expression of hemagglutinin, a myc tag, and a targeting peptide (epidermal growth factor, EGF, or GE11) as fusion proteins with the platelet-derived growth factor receptor (PDGFR) on the surface of exosomes effectively targeted drugs to tumors [168]. With its ability to bind specifically to EGFR-upregulated cells in tumor tissues, GE11-mediated delivery of therapeutic cargoes proceeds without activating the EGF receptor [168], and thus, this method of delivery appears to be appropriate for treating different types of cancers [247]. U937 or Raw264.7 cell-derived exosomes, or exosome-mimetic nanoparticles, expressing surface LFA-1 induced a significant reduction in tumor volume when used to deliver chemotherapeutics to tumor cells [200]. LFA-1 facilitates the binding of exosomes to endothelial cell adhesion molecules and has been used to deliver therapeutics to rapidly growing tumors with extensive neovascularization [200]. The cell-specific characteristics of exosomes facilitate the delivery of therapeutics more specifically to tumor tissues. Transfection of the CIITA gene to induce the expression of MHC II in murine melanoma cells resulted in the production of exosomes expressing high surface levels of the MHC II protein [179]. The study indicated that MHC II has two functions, that is, as a targeting peptide to deliver cargoes to specific destinations and as a therapeutic [179]. Exosomes derived from choroid plexus epithelial cells expressing folate receptor-α (FRα) were reported to transport cargo to brain parenchyma cells after passage through the choroid plexus [239]. The ability to cross the blood-brain barrier (BBB) or choroid plexus and the surface expression of targeting peptides on exosomes hold great promise for drug delivery to the brain [165,239,248]. The surface expression of tetraspanin proteins can be used as an alternative method to engineer exosomes that deliver therapeutics to tumor tissues [222]. Similarly, utilizing target-specific antibodies to coat the surface of exosomes provides another means of avoiding the laborious procedure of modifying membrane proteins. Advancement in the Therapeutic Uses of Exosomes Many commercial enterprises have been established to exploit the exosome-based delivery of therapeutics. Codiak BioSciences (Cambridge, MA, USA) has devised a specific platform called engEx™ for engineering exosomes to deliver different therapeutic entities [249]. exoSTING, a therapeutic entity developed on an exosome backbone with minimal cytotoxicity, is viewed as a promising therapeutic delivery candidate for the treatment of cancer [249]. Exosomes carrying therapeutic cargoes have also been subjected to clinical trials (Table 4). In a phase I study, DC-derived exosomes (DEX) loaded with MAGE3 antigenic peptides were administered to stage III/IV melanoma patients [250]. Studies performed on the intradermal and subcutaneous administration of DEX revealed an increased number of natural killer cells (NKCs) and reconstitution of NKG2D expression on NK and CD8 + T-cells. Autologous exosome production from these non-toxic cells was achieved successfully using standard manufacturing protocols [250]. In a phase II study of DC-derived exosomes (DEX2) loaded with the chemotherapeutic metronomic cyclophosphamide, DEX2 encapsulation increased the immunostimulatory effect of the drug on T-cells (NCT01159228). In addition, the application of ascites-derived exosomes (AEX) together with GM-CSF was found to produce a greater cytotoxic T-cell response in colorectal cancer than AEX alone [251].
Furthermore, exosome-based treatment has been subjected to clinical trials in malignant glioma. Glioma cells isolated from resected tumor tissue were treated with a drug inhibiting the insulin-like growth factor-1 receptor (IGF-1R) and implanted into the abdomen of glioma patients; this induced apoptosis in the implanted cells, followed by exosome release from these cells that stimulated the immune system to induce a T-cell-mediated antitumor response (NCT01550523). A joint venture between PureTech Health and Roche aimed at developing novel exosome technologies led to the development of a milk exosome-based technology for the oral administration of antisense oligonucleotides [252], and this technology is considered to have the potential to enhance treatment efficacy and reduce toxicity as compared with conventional intravenous injection. In addition, plant-derived exosomes were assessed for potential use as cancer treatments at the James Graham Brown Cancer Center. Orally administered exosomes containing curcumin were tested for therapeutic effectiveness against colorectal cancer (NCT01294072) and evaluated for their effects on oral mucositis and pain after chemotherapy for head and neck cancers (NCT01668849). These trials, which are ongoing and completed, respectively, have demonstrated good safety profiles in clinical settings and support the relevance of continuing the development of exosome-based drug delivery systems. Conclusions Exosomes are considered versatile carriers owing to their low immunogenicity and their abilities to traverse biological barriers (e.g., the blood-brain barrier) and migrate to tissues or areas with no blood supply (e.g., dense cartilage matrix). Exosomes encapsulate many cargo types (DNAs, RNAs, proteins, and lipids) and transport them via body fluids to nearby or distant cells. Their biocompatibility and the genetic engineering possibilities that prevent unwanted exosome accumulation and enable selective targeting have encouraged researchers to develop exosome-based drug delivery systems. Selection of the cell source and optimization of isolation methods are currently being explored to enhance the production of exosomes with distinct characteristics and functionalities. Studies are also being undertaken on the potential therapeutic use of exosomes derived from human tissues as drug carriers. However, such investigations are hampered by a lack of suitable isolation methods and by drug uptake discrepancies. Currently, the use of hollow fiber-based bioreactors offers an attractive means of harvesting exosomes with reproducible characteristics. Because the effectiveness of a therapeutic cargo depends on the cellular source of the exosomes and on cargo release at the target site, efforts are required to understand exosome generation in different cellular backgrounds and drug uptake at the target tissues. Exosomes exhibit a lipid bilayer structure with embedded characteristic surface protein signatures that promote uptake at target sites. Given the complexity of exosomes, internalization of exosomes loaded with therapeutic cargoes can be achieved by incorporating cell-penetrating peptides (CPPs), such as arginine-rich CPPs, which stimulate macropinocytosis at target sites, onto their surfaces. Investigations are required to determine the optimal dosage, administration methods, and kinetic characteristics, and to further investigate the effects of environmental conditions, such as pH, on the efficiency of cargo delivery.
Moreover, comprehensive investigations of the properties of cells used for exosome production and the functionalities of exosomes are needed to ensure target-specific delivery of therapeutics in the context of personalized medicine. Furthermore, the standardization of large-scale production and purification procedures would undoubtedly improve exosome reproducibility and aid in the development of exosome-based cancer therapeutics. Finally, investigations aimed at elucidating the mechanisms that govern the specific delivery of exogenously administered exosomes, their biodistribution, and pharmacokinetics would help to achieve the developmental transition of exosomes to the clinical level.
8,149.4
2021-03-01T00:00:00.000
[ "Medicine", "Biology" ]
Allometry of the Duration of Flight Feather Molt in Birds Replacement of flight feathers takes disproportionately more time for large birds than it does for small birds, because feather length increases with body size almost twice as fast as feather growth rate increases. Introduction Flight feather molt is a time-demanding activity in the avian annual cycle [1][2][3]. Yet, annual or alternate-year replacement of flight feathers is essential, because physical abrasion and ultraviolet light rapidly degrade even the most sturdy wing quills after two years of use [4,5]. Because flight performance declines during molt as new feathers are growing [6][7][8], most birds do not overlap molt and breeding, and those that do overlap these activities replace few flight feathers at a time, presumably to minimize the energetic and flight-performance costs of molting on reproduction [5,9]. Overlap of molt and breeding may be more common in larger species because the time required to rear young, as well as the time required to replace flight feathers, increases with body size [3,10,11]. Most smaller birds (i.e., generally <1 kg) replace all their flight feathers annually, and a few do so twice a year [12][13][14]. In contrast, many larger birds (>3 kg) that depend on flight for feeding during the molt shed only a part of their flight feathers annually [5,15] and require two, and sometimes three, years to complete the molt [16,17]. For example, no albatross regularly replaces all of its flight feathers in a single bout of molting [18,19], and the largest albatrosses (Diomedea exulans and D. epomophora), whose masses reach 10 kg, avoid reproducing during years following successful breeding because of the competing time and resource demands of reproduction and molt. Although ornithologists have been aware of the protracted molts of large birds for many years, no general argument has been proposed to account for the increased time required for flight feather replacement. To the best of our knowledge, we show for the first time how the allometric scaling of flight feather length and flight feather growth rate with body mass sets an upper limit to complete annual replacement of the primaries at a body mass of about 3 kg. Because feather growth rates do not differ between similarly sized species exhibiting simultaneous versus sequential replacement of the primaries, the resource and energy demands of molting cannot explain why primary growth rate fails to increase with mass as fast as primary length. Rather, we suggest that the architecture of a two-dimensional structure emerging from an essentially one-dimensional follicle constrains the rate of feather growth to slow relative to increasing feather length in larger birds. Finally, these allometric relationships prompt us to ask how the 70-kg raptor, Argentavis magnificens, a flying teratorn from the Miocene of Argentina [20,21], could have organized the replacement of its enormous flight feathers to have had sufficient time also to reproduce. Molt Allometries Primaries are the longest flight feathers of the wing, technically defined as the quills that attach to the bones of the hand. Most extant birds have 9 or 10 functional primaries [5]. We used allometric scaling to explain the basis for time constraints on primary replacement in the life histories of large birds.
We have related primary growth rate (K, from the literature, defined as the daily increase in length of individual primaries) and both the length of the longest primary and the summed length of all the primaries (L, from museum specimens) to body mass (M) across a wide size range of birds (masses from [22]) by the allometric function Y = aM^b, where a is a scaling constant and b is the power of the relationship of Y to mass. Primary growth rate scales as M^0.171 ( Figure 1A), close to the value of M^0.19 found by Hedenstrom [3] using other data and assumptions, whereas the combined length of all the primary flight feathers (as well as the length of the longest primary) increases with body mass almost twice as fast, as M^0.316 ( Figure 1A). The ratio of length (mm) to rate (mm/day), which is the time required to replace all the feathers one at a time (days), and which is also proportional to the time required to grow the longest primary ( Figure 1B), increases as the 0.14 power of mass (M^0.316/M^0.171 = M^0.145). This illustrates why molt is so time consuming for large birds. These scaling relationships set upper and lower limits to the time that birds of different size would need to replace their primaries. Figure 1A approximates the upper limit to the time required for molt by assuming the primaries are grown one feather at a time, and Figure 1B approximates the lower limit, when all primaries are lost and re-grown at the same time. Of course the actual duration of primary molts varies between these extremes by a factor of close to 10, depending on the number of primaries grown simultaneously. Birds that fly while molting usually grow only two or three primaries on each wing at the same time. For example, rough-winged swallows, Stelgidopteryx serripennis, are 15.9-g aerial foragers that replace an average of only 1.8 primaries at a time, because they forage on the wing while molting [23]. For a 15.9-g bird, the allometric relationships in Figure 1A predict that replacing the nine primaries, one feather at a time, would take 190 days. Adjusting for the number of primaries grown simultaneously reduces this estimate to 105.5 days, which closely matches empirical observations [23]. Simultaneous replacement of the flight feathers is characteristic of many water birds (loons, grebes, waterfowl, many rails, and some alcids) that can swim and dive to forage and escape predators while flightless [24]. In Figure 1B, the distance between the allometric relationships of the length of the longest primary (M^0.313) and of primary growth rate (as in Figure 1A, M^0.171) estimates the time (M^0.142) that simultaneous replacement of the primaries would render an individual flightless. In most forms of simultaneous primary replacement, secondary flight feathers (shorter flight feathers, proximal to the primaries) are replaced at the same time as the primaries, so the full period of flightlessness corresponds to the time required to replace the longest primary, estimated to be 57 days, and observed to be 63 days, for 11.8-kg mute swans Cygnus olor. The actual period of flightlessness is somewhat less, because individuals regain flight a few days before the longest primaries are fully grown [25]. Molt Allometries and Incomplete Molts For birds that continue to fly while molting, the diverging allometric curves of Figure 1A illustrate how the time required to replace the primaries one by one increases dramatically with body size.
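The arithmetic behind these worked examples can be made explicit with a short sketch. The exponents (0.145 for sequential replacement time, 0.142 for the flightless period) are taken from the text; the proportionality constants are not given in this excerpt, so below they are back-calculated from the two quoted data points (190 days for a 15.9-g swallow replacing nine primaries one at a time, and 57 days of flightlessness for an 11.8-kg mute swan). This is an illustrative reconstruction, not the authors' fitted equations.

```python
# Illustrative reconstruction of the molt-duration allometries described in the text.
# Exponents come from the paper; the constants below are anchored on the quoted
# swallow and swan examples, so they are assumptions rather than fitted values.

def sequential_molt_days(mass_g, feathers_at_once=1.0,
                         anchor_mass_g=15.9, anchor_days=190.0, exponent=0.145):
    """Days to replace all primaries, growing `feathers_at_once` feathers at a time."""
    scale = anchor_days / anchor_mass_g ** exponent
    return scale * mass_g ** exponent / feathers_at_once

def flightless_days(mass_g, anchor_mass_g=11_800.0, anchor_days=57.0, exponent=0.142):
    """Days of flightlessness when all primaries are replaced simultaneously."""
    scale = anchor_days / anchor_mass_g ** exponent
    return scale * mass_g ** exponent

# Rough-winged swallow (15.9 g) growing ~1.8 primaries at a time:
print(round(sequential_molt_days(15.9, feathers_at_once=1.8), 1))  # ~105-106 days (105.5 in the text)

# Mute swan (11.8 kg), simultaneous molt:
print(round(flightless_days(11_800), 1))  # 57.0 days by construction (63 observed)
```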
Large birds that continue to fly while molting reduce the time spent replacing primaries both by growing several primaries simultaneously and by retaining individual feathers for two or, rarely, even three years [2,17], but they still spend an ever-increasing fraction of the annual cycle replacing flight feathers. Figure 1C illustrates the body-size dependence of the shift from complete to incomplete primary molts. Most individuals of species with masses below 1 kg replace all of their primaries annually, whereas most individuals of species with masses over 3 kg spread the primary molt over two or more years ( Figure 1C). The broad size range for this transition reflects special circumstances for many species. For example, numerous small owls have incomplete molts, possibly because the flight feathers of these nocturnal birds suffer little degradation from ultraviolet light [4] and can be used for more than one year. Some very large birds replace all of their flight feathers every year because they overlap molt and breeding [10,11] or because they molt for many months. Male wild turkeys (Meleagris gallopavo) do not participate in parental care and so they can replace primary feathers for six months each year, beginning well before females [26]. Author Summary The pace of life varies with body size and is generally slower among larger organisms. Larger size creates opportunities but also establishes constraints on time-dependent processes. Flying birds depend on large wing feathers that deteriorate over time and must be replaced through molting. The lengths of flight feathers increase as the 1/3 power of body mass, as one expects for a length-to-volume ratio. However, feather growth rate increases as only the 1/6 power of body mass, possibly because a two-dimensional feather is produced by a one-dimensional growing region. The longer time required to grow a longer feather constrains the way in which birds molt, because partially grown feathers reduce flight efficiency. Small birds quickly replace their flight feathers, often growing several feathers at a time in each wing. Larger species either prolong molt over two or more years, adopt complex patterns of multiple feather replacement to minimize gaps in the flight surface, or, among species that do not rely on flight for feeding, simultaneously molt all their flight feathers. We speculate that the extinct 70-kg raptor, Argentavis magnificens, must have undergone such a simultaneous molt, living off fat reserves for the duration. Primary Replacement and the Maximum Size of Flying Birds Early theoretical analyses suggested that the size of birds with sustained flapping flight would be limited by the power required for flight, which increases as M^7/6, and the power available for flight, which increases as M^3/4 [27,28]. If these curves actually crossed at about 15 kg, the theory might explain the size of the largest swans and pelicans with sustained flapping flight. Recent analyses by Chatterjee and colleagues [21,29] applying helicopter streamtube theory have confirmed this suggestion, finding that the upper limit of sustained powered flight for birds and pterosaurs is about 15 kg. We explored whether flight feather replacement might additionally constrain body size evolution in flying birds by considering the size distributions of species using each of the three fundamental modes of primary replacement. Patterns of primary replacement have been described for many birds; further, most birds replace the secondary flight feathers during the primary molt, so secondary replacement does not add to the time spent molting flight feathers [12,16,30]. Thus, analyses of size constraints based on primary replacement patterns should be general to most birds. We made three predictions. First, the mode of the size distribution for birds with simple molts (species with a single wave of primary replacement) should be small and the right tail of this distribution should fail to approach the 15-kg limit for powered flight [27,28]. When the primaries are replaced in a single wave, the only way to reduce the time in molt is to grow more feathers simultaneously. However, the resulting large gaps in the primaries
would be detrimental to flight [7,8], especially in large species that are heavily wing-loaded [31]. Thus large size and simple primary molts should be incompatible for most species. Second, the size distribution of birds with complex modes of primary replacement should be larger than that for birds with simple primary replacement. Complex molts generate multiple waves of feather replacement (either by stepwise molts or by dividing the primaries into at least two replacement groups). Complex primary molts allow more feathers to be replaced at once, and also reduce the size of gaps in the wing surface because adjacent primary feathers partially overlap each other [32]. Because complex molts reduce time constraints on molting by maximizing the number of feathers growing simultaneously, compared with loss in wing area, the modal size of species with complex molts should exceed that for species with simple molts; further, the right tail of this distribution could extend towards the upper size limit for powered flight of about 15 kg [27,28]. Third, species with simultaneous flight feather replacement must be constrained least by the time required to replace their flight feathers ( Figure 1B). Hence their size distribution should exhibit the highest mode, and its right tail should extend to 15 kg. This prediction assumes that species that molt simultaneously can meet the energetic and nutritional demands of growing their many flight feathers simultaneously and, thus, reduce the time required to replace all their flight feathers to the time needed to grow their longest primary ( Figure 1B). The observation that feather growth rates do not differ between species with simultaneous and sequential replacement of the primaries (see below) supports this assumption. Because replacing the flight feathers simultaneously is so time-efficient, simultaneous replacement of the wing quills should also be favored in small aquatic species that can safely undergo a period of flightlessness, possibly giving the body size distribution for simultaneous molters a left skew. The distribution of log10(M) for all bird species (masses from [22]) exhibits a relatively small modal size (<13 g) and a strong right skew (g1 = 0.794, p < 0.0001, n = 9,324), which characterizes size distributions for most animals [33,34]. Primary Replacement Strategy and Body Size Distributions: Results Among species with simple molts, log10(M) (mode = 24 g, Figure 2B) is also strongly right-skewed (g1 = 0.634, p < 0.0001, n = 4,163), but the extreme right tail of this distribution falls short of the upper size limit of contemporary flying birds (15 kg). Species with complex molts are much larger than those with simple molts (Mann-Whitney p < 0.0001), having a modal body mass of 133 g and a right tail that reaches the size of the largest flying birds ( Figure 2C).
The size distribution of species with complex molts is not significantly skewed (g1 = 0.096, p > 0.20, n = 1,043), as predicted, presumably because the right tail is constrained by the power requirements for flight [27,28] and because the left tail is drawn out by numerous small tropical species that have complex modes of primary replacement to increase breeding frequency [32]. That complex molts permit larger body sizes than simple molts suggests that, if birds must fly while molting, a transition to one of the two complex modes of primary replacement is a prerequisite to evolving body sizes that approach 15 kg. Species with simultaneous primary replacement crowd the maximum size of flying birds ( Figure 2D) and do so even more strongly than those with complex primary molts ( Figure 2C). At 750 g, the modal size for species that molt simultaneously significantly exceeds that of species having both simple and complex molts (Mann-Whitney p < 0.001 for both simple and complex molts). The size distribution associated with simultaneous primary molts is slightly, but not significantly, left skewed ( Figure 2D; g1 = -0.201, p > 0.10, n = 344), presumably because the power requirement for flight sets an upper size limit [27,28], constraining skew to the left tail of this distribution. The left skew is generated by small species, such as dippers, small alcids, and small rails, with safe molting sites that permit temporary flightlessness. Simultaneous flight feather molts should be favored in these small species for several reasons. First, simultaneous molts are always complete ( Figure 1C), eliminating replacement asymmetries that have fitness costs [35]. Second, no developmental organization is required to maintain symmetry in flight feather replacement during simultaneous molt. Third, simultaneous replacement of the primaries minimizes time conflicts between molt and breeding. Finally, simultaneous flight feather molts may be particularly energy efficient if feathers that do not suffer the strain of use while growing can be grown with less cost [36]; we know of no data addressing this possibility. We found no evidence that primary growth rate during simultaneous molt is reduced by the energy and nutrient demands of growing all of the flight feathers at once. We divided the 43 species with feather growth rates ( Table 1) into two groups: those that replace their primaries simultaneously (n = 15 species in two orders) and those that fly while molting (n = 28 species in eight orders). We used analysis of covariance (see Methods), with body mass as the covariate, to compare feather growth rates between these groups. Remarkably, growth rate did not differ between species with simultaneous primary molt and those that fly while molting (F = 1.0; degrees of freedom = 1, 40; p = 0.32; Figure 3). Because primary growth rates are similar for birds that grow two or three versus ten primaries simultaneously, primary growth rate seems not to be limited by energy or nutrient demands; others have suggested that growth rate might be limited by follicular-level constraints on the rate at which feathers can be generated [1,37], and we explore this below.
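A minimal sketch of the kind of analysis of covariance described above is given below, assuming the statsmodels package. The column names, group labels, and data values are placeholders generated from the ~1/6-power scaling reported in the text, since the species-level data of Table 1 are not reproduced in this excerpt; the point is only to show how a molt-strategy effect on growth rate is tested with body mass as a covariate.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_sim, n_seq = 15, 28  # 15 simultaneous molters + 28 sequential molters, as in the text

# Placeholder data: log10 body mass and log10 primary growth rate,
# simulated around the ~M^0.171 scaling reported in the paper.
log_mass = rng.uniform(1.0, 4.2, size=n_sim + n_seq)
strategy = ["simultaneous"] * n_sim + ["sequential"] * n_seq
log_rate = -0.35 + 0.171 * log_mass + rng.normal(scale=0.05, size=n_sim + n_seq)

df = pd.DataFrame({"log_rate": log_rate, "log_mass": log_mass, "strategy": strategy})

# ANCOVA: does molt strategy affect growth rate once body mass is accounted for?
model = smf.ols("log_rate ~ log_mass + C(strategy)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```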
All birds share the same annual cycle of environmental conditions and seasonal periods available for reproduction, molt, migration, and other activities. The slower feather growth in large species constrains their allocation of time to molt and the degree to which the molt can be completed in a single year. At one extreme, most small temperate species are well known to replace all of their primaries annually, whereas many large species take two or more years to complete their primary molt (Figure 1C). Large species, in which flight feather replacement is typically incomplete, probably grow stronger flight feathers, but we are unaware of data addressing this possibility. Two special adaptations in the molt sequence typify large birds that fly while molting and that often or always have incomplete molts-stepwise primary replacement and division of the primaries into two molt series (Figure 2C). These modes of primary replacement likely evolved to minimize time conflicts between molt and breeding in large birds and to minimize the increase in wing loading that accompanies reduced primary feather area during molt. Large birds that are rendered flightless by simultaneous flight feather molts always replace all of their primaries annually (Figure 1C), and the modal size for species with simultaneous primary molts most closely approaches the size of the largest flying birds (Figure 2D), implying that simultaneous replacement of the primaries permits size increases by dramatically reducing the time required to replace the flight feathers. Finally, primary growth rate is not depressed in species that grow all their flight feathers simultaneously, suggesting that feather growth rate does not depend on the availability of energy or nutrients. We suggest below that feather growth rate is similarly constrained in sequential and simultaneous molt systems by similar architecture of the growing region at the base of the feather.

That flight feather growth rate increases less rapidly with respect to body mass than does feather length offers a general explanation for the impact of molting on avian life histories. Large species have long reproductive cycles and molt periods, with the result that individuals often replace fewer flight feathers in a molt that follows successful breeding [38,39]. An individual having a succession of such incomplete molts might accumulate so many worn feathers that its success in subsequent breeding attempts could decline, even to the point of skipping a breeding opportunity to clear overworn flight feathers from the wing [2]. A large investment in breeding in one year often results in reduced adult survival or reduced breeding success in the following year [40][41][42][43], but the mechanism underlying this trade-off has been elusive. Accumulated feather wear may well be the culprit, particularly for large species. Even small species with complete molts apparently grow low-quality feathers after heavy investment in breeding [44,45]. This suggests that feather quality likely links high breeding investment in one year to low success the following year, even though the long post-breeding period available to small temperate-latitude species seems more than adequate for a complete physiological recovery from investment in reproduction. Feathers simply cannot be repaired!
The time constraint on molting in large birds cannot be overcome by growing more feathers simultaneously because of the size-related scaling of the power required for sustained flight and the maximum power available from the flight muscles [28]. For small birds, maximum power is considerably larger than that required for sustained flight, so small birds can fly with large molt gaps in their wings. For large birds, the difference between the power required for sustained flight and the maximum power available is relatively small, making flight with proportionately similar molt gaps impossible. Thus large birds that fly while molting cannot compensate for the relatively slow growth of their primaries by replacing more primaries simultaneously.

The allometric disparity between feather size and feather growth leads us to ask how the 70-kg Argentavis, with a wing span of 7 m and outer primaries that were 1,500 mm long [20], almost four times those of the mute swan, could replace its enormous wing quills frequently enough to maintain good flight performance and reproduce. California and Andean Condors have masses of only 10 kg and 12.5 kg, respectively, and California Condors need 2-3 years to replace all of their primaries [17]. Argentavis was simply so huge that it might have overcome time constraints on molting by replacing its enormous flight feathers simultaneously, as do the largest geese and swans. Perhaps it did so every 2-3 years by storing sufficient protein in muscles to shelter in caves or cliffs for a simultaneous replacement of the wing quills, which was estimated to require 74 days (Figure 1B). Although no living raptors replace their primaries simultaneously, the evolutionary transition from sequential to simultaneous replacement of the flight feathers might require few changes in the neurophysiological controls that regulate molt; indeed, some individual flamingos and hole-nesting hornbills have been observed to change molt patterns, sometimes molting sequentially and retaining flight, and sometimes molting synchronously and becoming flightless [46,47]. Because basal metabolic rate increases with body size at an allometric coefficient of about 0.72, whereas fat loading increases with an allometric coefficient that is greater than 1.0, long fasts are possible for large species [48]. All living penguins fast while replacing their body plumage on land and use protein stored in their breast muscles to build feathers. In the 35-kg Emperor Penguin Aptenodytes forsteri, this fast lasts about 35 days, during which time individuals lose 50% of their body mass [49]. Several fossil penguins, which surely also fasted while molting on land, weighed up to 100 kg [50][51][52]. If penguins can store enough protein and energy to replace their very dense and heavy body plumage while fasting, then our suggestion that Argentavis could have replaced its flight feathers from stored reserves seems plausible.

Table 1. Data used to generate the allometric equations of Figure 1A and 1B, and sources for the data on primary growth rate; primary lengths are from museum specimens.

The constraints that feather growth places on molt and other aspects of the annual cycle depend on the positive allometry of the time required to complete the growth of a single flight feather with respect to body mass. Flight feather growth rate approximates 1/6 power scaling, while flight feather length approximates 1/3 power scaling.
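To make the arithmetic behind this time constraint explicit, the two scaling relations just quoted can be combined as below. This is a restatement using the rounded 1/3 and 1/6 exponents; the fitted slopes reported in the Methods (0.325 and 0.171) give an exponent of roughly 0.15, close to the M^0.14 quoted in the Discussion.

```latex
% Time to grow the longest primary, from the two allometries quoted above
L \propto M^{1/3}, \qquad \frac{dL}{dt} \propto M^{1/6}
\quad\Longrightarrow\quad
T \sim \frac{L}{dL/dt} \propto M^{1/3 - 1/6} = M^{1/6}.
% Equivalently, growth rate as a function of feather length:
\frac{dL}{dt} \propto L^{(1/6)/(1/3)} = L^{1/2}.
```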
Among species of different size that maintain isometric proportions, lengths scale as the 1/3 power of volume. Thus, feather length is dimensionally isometric. Feathers elongate by cell division within a cylinder of collar cells at the base of the growing feather in the follicle, an invagination of the skin [53]. Cell division is followed by cell enlargement, differentiation, and keratinization further along the base of the growing feather and is supplied by blood circulation through the dermal feather pulp within the feather base. The growth zone, within which the barbs of the feather vane also grow, is essentially a linear structure that produces a two-dimensional feather. If the growth zone were to scale in proportion to the length of the grown feather, then the rate of growth would scale as the square root (allometric scaling factor 0.5) of feather length. We tested this prediction using a log-log regression of feather growth rate on length of the longest primary, and found the predicted allometric coefficient of 0.5 (b = 0.50 ± 0.05 SD, F = 86.7, degrees of freedom = 1, 41; p < 0.0001). Other considerations would include the diameter of the follicle, which clearly increases with feather size, but no comparative data are available. If the length of the cylinder of collar cells had a fixed number of rows of dividing cells regardless of feather length, then the growth rate of the two-dimensional feather structure would be related to the one-dimensional circumference of the collar, again leading to a 0.5 allometry of growth as a function of size. The length of the growing region of a feather might be constrained by structural considerations, because the base of the feather, which is filled with a soft dermal pulp within a nonkeratinized cylinder of dividing and differentiating epidermal cells, is quite weak. It is not unusual for growing feathers to break at this point. Although further measurements of the growing regions of primary feathers will be required to work out the basis for the square-root allometry between growth rate and feather size, the linear-to-surface relationship that transforms the cylindrical growing region into a two-dimensional feather provides a plausible mechanism at this point for understanding variation in patterns of primary feather molt as a function of body size in birds and how molt might set an upper limit to the size of flying birds.

Methods

Estimates of primary growth rate (from repeated measures of growing feathers) were obtained from the literature for 43 species of birds (Table 1). For each species, we measured the lengths of the primary flight feathers for one adult male and one adult female using museum specimens, and averaged values for the two sexes in our analyses. For 77 species across a large size range of flying birds, we estimated the fraction of adults that had replaced all their primaries in the previous molt by examining flight feather condition on 20 museum specimens obtained during nonmolting periods. When 20 adults were not available for a species in the collections we examined, fewer specimens were sampled (Table S1 gives the numbers of each species examined). To compare distributions of avian masses with respect to mode of primary replacement, we used the masses for the birds of the world compiled by Dunning [22]. We included races within species if they differed in mass by 10% or more (some differed by more than 100%).
We used references [5,15,54,55] and Rohwer (data not shown) to characterize the mode of primary replacement, which, unfortunately, has not been described for several major groups of birds. When mode of primary replacement was assigned using [55] or Rohwer (data not shown), we assumed that all members of a genus followed the molt strategy known for any member of that genus, unless additional data were available or unless body mass variation was too great to safely generalize.

To analyze the relationship between the mode of primary replacement and body size, we divided the complexities of primary replacement across birds into three basic modes, Simple, Complex, and Simultaneous [15]. Some cuckoos and kingfishers do not fit these categories and were omitted [5]. In species with Simple primary replacement, molt begins at the innermost primary (P1) and proceeds distally until P9 or P10 is replaced. All species in this category feature a single wave of feather replacement, but they often lose adjacent feathers in quick succession, resulting in large gaps in their wings. Complex primary replacement occurs in two ways, one or the other of which usually characterizes large species that maintain the ability to fly while molting. In the first, called stepwise molting, the primaries constitute a single molt series [56], but several waves of feather replacement progress through the primaries simultaneously in adults [5,32,37,57]. In the second, the primaries are organized into two separately activated and nonoverlapping molt series; this mode of replacement generates two waves of growing primaries if both series are activated during a single episode of molting [5,16,58]. The third mode of primary replacement is Simultaneous, whereby all primaries (and, usually, all secondaries) are lost and re-grown more or less simultaneously, resulting in a 3-6-week period of flightlessness.

Allometric relationships between feather length, feather growth rate, and body mass were determined by regression and analysis of covariance of log-transformed values based on type III sums of squares, in which taxonomic orders (Anseriformes [n = 10], Passeriformes [16], Coraciiformes [1], Procellariiformes [1], Columbiformes [1], Falconiformes [4], Galliformes [3], Charadriiformes [2], and Gruiformes [5]) were entered as a main effect to avoid fortuitous relationships resulting from heterogeneity among taxa. Interactions between taxa and the independent variable were not significant and were dropped from the models. Because the regression slopes of models with taxon as a main effect did not differ from those obtained from simple regressions, we report here the slopes of the simple regressions (41 error degrees of freedom in each case): longest primary feather versus body mass, b = 0.325 ± 0.010, p < 0.0001, R² = 0.961; sum of primary lengths versus body mass, b = 0.316 ± 0.009, p < 0.0001, R² = 0.965; growth rate of primary versus body mass, b = 0.171 ± 0.017, p < 0.0001, R² = 0.713. Analyses were carried out with the GLM procedure of the Statistical Analysis System version 9.1 (SAS Institute).

[Figure 3 caption fragment: data as in Figure 1, plotted separately for species that fly while molting and for species that replace their wing quills simultaneously. The latter grow their primaries no slower than birds that fly while molting, suggesting that follicular constraints on the rate of feather synthesis, rather than energetic costs, limit the rate at which flight feathers grow. doi:10.1371/journal.pbio.1000132.g003]
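The allometric fits and ANCOVA described in this Methods paragraph were run in SAS; an equivalent analysis could be sketched in Python roughly as follows. This is illustrative only, with a hypothetical file name and column names, and is not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input table: one row per species with body mass (g), length of the
# longest primary (mm), primary growth rate (mm/day), and taxonomic order.
df = pd.read_csv("feather_allometry.csv")   # columns: mass_g, primary_mm, rate_mm_day, order

df["log_mass"] = np.log10(df["mass_g"])
df["log_rate"] = np.log10(df["rate_mm_day"])

# Simple log-log regression: growth rate vs. body mass (slope b is the allometric exponent)
simple_fit = smf.ols("log_rate ~ log_mass", data=df).fit()

# Analogue of the ANCOVA described above: taxonomic order entered as a main effect
# alongside the continuous covariate, to guard against spurious cross-taxon relationships.
ancova_fit = smf.ols("log_rate ~ log_mass + C(order)", data=df).fit()

print(simple_fit.params["log_mass"], simple_fit.bse["log_mass"])   # slope and its SE
print(ancova_fit.params["log_mass"])                                # slope controlling for order
```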
Supporting Information Table S1 Species and numbers of adults used to assess completeness of molt for Figure 1C.
6,878.2
2009-06-01T00:00:00.000
[ "Biology" ]
Massive Neutron Stars and White Dwarfs as Noncommutative Fuzzy Spheres

Over the last couple of decades, there has been direct and indirect evidence for compact objects more massive than their conventional counterparts. A couple of such examples are super-Chandrasekhar white dwarfs and massive neutron stars. The observations of more than a dozen peculiar over-luminous type Ia supernovae predict their origins from super-Chandrasekhar white dwarf progenitors. On the other hand, recent gravitational wave detection and some pulsar observations argue for massive neutron stars, lying in the famous mass gap between the lowest astrophysical black hole masses and the highest conventional neutron star masses. We show that the idea of a squashed fuzzy sphere, which brings in noncommutative geometry, can self-consistently explain either class of massive object as if it were actually a fuzzy or squashed fuzzy sphere. Noncommutative geometry is a branch of quantum gravity. If the above proposal is correct, it will provide observational evidence for noncommutativity.

Introduction

Quantum mechanics (QM) and general theory of relativity (GR) are widely regarded as the two most promising discoveries of the twentieth century. QM is used to describe different microscopic phenomena, whereas GR is used to explain phenomena in which gravity plays a significant role. QM is primarily based on the Heisenberg algebra, which relates the position operator (x_i) and the momentum operator (p_i) as [x_i, p_j] = iħδ_ij, where ħ = h/2π with h being the Planck constant. Note that in QM, position and momentum operators commute among themselves, i.e. [x_i, x_j] = [p_i, p_j] = 0. GR, on the other hand, is based on the equivalence principle, which can account for the perihelion precession of Mercury, the generation of gravitational waves (GWs), gravitational lensing, and a variety of other fascinating phenomena. Both QM and GR are required to understand the structure of compact objects, such as white dwarfs (WDs) and neutron stars (NSs). GR primarily governs the hydrostatic balance of a star, which is a macroscopic property, whereas QM determines the equation of state (EoS), i.e. the relation between pressure and density of the constituent particles.

If a progenitor star has a mass approximately between 10 and 20 M⊙, it becomes a NS at the end of its lifetime. A NS typically possesses a central density ρ_c of about 10^14 to a few times 10^15 g cm^-3 [1]. Although NSs predominantly consist of neutrons, various other particles, including hyperons, may also be present at such a high density. This uncertainty arises from the fact that such a high density has yet to be achieved in the laboratory, and hence the specific nuclear reactions, as well as their rates, are unknown. Researchers have so far provided various NS EoSs, each comprising different particle contributions and strong nuclear forces. Most of these EoSs are based on the relativistic energy dispersion relation E^2 = p^2c^2 + m^2c^4, where c is the speed of light and E denotes the energy of the particle with mass m and momentum p. Although most NSs have masses of approximately 1 to 2 M⊙, recent observations of the pulsars PSR J2215+5135 and PSR B1957+20 show that they have masses of about 2.3 and 2.4 M⊙, respectively [2,3]. Similarly, the LIGO/Virgo collaboration detected a GW merger event, GW 190814, where one of the merged objects has a mass of about 2.6 M⊙ [4], which is mostly thought to be a NS [5][6][7][8].
Nevertheless, there was no detection of electromagnetic counterpart for this GW event, and hence various other proposals for this object, such as black hole [9,10], quark star [11], etc., have been put forward. In this article, however, we only talk about NSs while referring to this GW event. Based on these observations, various simulations have been performed and it has been suggested that those EoSs, which give the maximum mass of a non-rotating and non-magnetized NS less than 2 M , should be ruled out [5,12,13]. Hence, considering GR formalism, various EoSs, such as FPS [14], ALF1 [15], etc., seem to be inappropriate for NSs. Modified gravity, on the other hand, has emerged as a popular alternative to replace GR in the high-density regime over the last decades. It can be shown that modified gravity alters the hydrostatic balance of the star and thereby increases the mass of a NS [16][17][18]. As a result, some of these EoSs may still be valid in the modified gravity formalism. On the other hand, WDs are the end-state of stars with mass (10 ± 2) M [19]. They possess ρ c typically ranging approximately from 10 5 g cm −3 to a few factor of 10 10 g cm −3 . A WD achieves its stable equilibrium configuration by balancing the outward force of the degenerate electron gas with the inward force of gravity. If the WD has a binary companion, it pulls out matter from the companion, resulting in the increase of WD mass. Once the WD hits the Chandrasekhar mass-limit, which is about 1.4 M for a carbon-oxygen nonrotating, nonmagnetized WD [20], this pressure balance is lost, and it bursts out to create a type Ia supernova (SN Ia). However, recent observations of more than a dozen of peculiar over-luminous SNe Ia [21][22][23][24][25][26][27][28][29] reveal that they had to be produced from super-Chandrasekhar limiting mass WDs, i.e. the WDs burst significantly above the Chandrasekhar mass-limit [30,31]. Various theories incorporating magnetic fields [32,33], modified gravity [34][35][36], etc. can explain this violation of the Chandrasekhar mass-limit, albeit each has its own set of limitations. The goal of this work is to introduce noncommutativity (NC) among position and momentum variables and examine how it affects WDs and NSs. A popular way of proposing NC is by defining x i , x j = iη and p i , p j = iθ with η and θ being the NC parameters. It was shown that in the presence of NC, the spacetime metric alters [37]; causing the event horizon to shift and the singularity at the centre of a black hole to vanish, which is replaced by a regular de-Sitter core [38][39][40]. It further alters some other properties associated with black holes, such as the stability of Cauchy horizon [41], mini black hole formation with the central singularity replaced by a self-gravitating droplet [42], the Hawking temperature [43]. Various researchers also utilised this NC to describe a variety of other phenomena, including Berry curvature, fundamental length-scale, Landau levels, gamma-ray bursts, and many more [44][45][46][47][48][49]. Note that the basic assumption in the structure of this NC is quite ad-hoc. In 1992, Madore introduced the idea of a 3-dimensional fuzzy sphere NC [50], which has been used to better understand the thermodynamical features of non-interacting degenerate electron gas [51,52]. This formalism was later refined by projecting all the points of the fuzzy sphere onto an equatorial plane and named this configuration a squashed fuzzy sphere [53]. 
This NC model was also proven to imitate the magnetic field by producing distinct energy levels, which are similar to the Landau levels created in the presence of a magnetic field [54]. Apart from a few black hole applications, the implication of NC on compact objects is a relatively novel concept. We earlier showed its application to the structure of WDs. We considered each formalism of NC separately and showed that they modify the energy dispersion relation of electrons [55,56]. We further used this relation to obtain a new EoS of the degenerate electrons present in WDs and showed that it can explain the super-Chandrasekhar limiting mass WDs, which are believed to be the progenitors of the observed over-luminous type Ia supernovae. We obtained the maximum mass of a WD to be about 2.6 M⊙ in the presence of NC, and this mass-limit decreases as the strength of NC reduces. We further showed that the NC is prominent if the separation of electrons is less than the Compton wavelength of electrons, and it turns out to be an emergent phenomenon. The EoS obtained for the WD is valid only up to the neutron drip density, above which neutrons start contributing to the degenerate pressure. In this article, we obtain a new EoS above the neutron drip density taking NC into account and derive a new mass-radius relation for NSs. With the advancement of technology, different proposed electromagnetic and GW detectors are likely to detect numerous WDs and NSs. If their observed masses and radii follow the mass-radius relations predicted based on NC, it would be a direct proof of NC's existence.

The following is a breakdown of how this article is structured. In Section 2, we briefly review the squashed fuzzy sphere formalism and the modified energy dispersion relation, which we utilize in Section 3 to derive the EoS for degenerate particles residing inside WDs and NSs in the presence of NC. We further use this EoS to obtain the new mass-radius relation of the NS in Section 4. Finally, we put our concluding remarks in Section 5.

Squashed fuzzy sphere formalism and modified energy dispersion relation

In this section, we recapitulate the basic formalism of a squashed fuzzy sphere. In R^3, the equation of a sphere with radius r is given by x_1^2 + x_2^2 + x_3^2 = r^2 (Equation 1), where (x_1, x_2, x_3) are the Cartesian coordinates of the points on the sphere. A fuzzy sphere is similar to a regular sphere, except that its coordinates x_i (i = 1, 2, 3) follow the regular QM angular momentum algebra [50]. Hence, if J_i are the generators of the SU(2) group in an N-dimensional irreducible representation, we have x_i = κ J_i, with Σ_i x_i^2 = κ^2 C_N I = r^2 I, where κ is the proportionality (scaling) constant, C_N = ħ^2(N^2 − 1)/4, and I is the N-dimensional identity matrix. Substituting J_i in terms of x_i and defining k = κr, we obtain k = r^2/√C_N (Equation 4). Since the angular momentum algebra follows the commutation relation [J_j, J_k] = iħ ε_jkl J_l, the coordinates of the fuzzy sphere follow [50] [x_j, x_k] = i(kħ/r) ε_jkl x_l. When all the points of a fuzzy sphere are projected on any of its equatorial planes, the result is a squashed fuzzy sphere. It should be noted that this is not a stereographic projection. The projection of all the points of a fuzzy sphere on the x_1–x_2 equatorial plane is shown in Figure 1. The points of the upper hemisphere are projected on the equatorial plane's top side, while the points of the lower hemisphere are projected on the plane's lower side, and then they are glued together.
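The displayed equations of this section did not survive text extraction cleanly; for reference, the relations described in words above can be written out as follows. This is a reconstruction from the definitions given in the text and Madore's original fuzzy sphere construction [50], not a verbatim copy of the paper's equations.

```latex
% Fuzzy sphere: coordinates proportional to SU(2) generators
x_i = \kappa\, J_i, \qquad
\sum_{i=1}^{3} x_i^2 = \kappa^2 C_N\, I = r^2 I, \qquad
C_N = \frac{\hbar^2 (N^2 - 1)}{4},
\\
% Commutation relations and the scale k = \kappa r
[J_j, J_k] = i\hbar\,\epsilon_{jkl} J_l
\;\;\Longrightarrow\;\;
[x_j, x_k] = \frac{i k \hbar}{r}\,\epsilon_{jkl}\, x_l,
\qquad k \equiv \kappa r = \frac{r^2}{\sqrt{C_N}}.
```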
Writing x_3 in terms of x_1 and x_2 using Equation (1) and replacing it in Equation (5), we obtain the squashed fuzzy sphere's commutation relation, given by [53] [x_1, x_2] = i(kħ/r)√(r^2 − x_1^2 − x_2^2). The Laplacian for the squashed fuzzy sphere [53] satisfies an eigenvalue equation whose eigenvalues are l̃(l̃ + 1) − m̃^2, with l̃ taking all the integer values from 0 to N − 1 and m̃ taking all the integer values from −l̃ to l̃. Using this Laplacian, one can obtain the energy dispersion relation in the squashed fuzzy sphere [53,56]. Moreover, Equation (6) can be recast in spherical polar coordinates (r, θ, φ), which shows that the NC is between θ and φ alone, while both are commutative with the r-coordinate. In other words, the formalism of the squashed fuzzy sphere is such that it actually provides a NC between the azimuthal and polar coordinates. This is because the squashed plane in a fuzzy sphere can be any of its equatorial planes, which means that the squashed fuzzy sphere possesses rotational symmetry about the equatorial plane. Regardless of the squashed plane, the above energy dispersion remains unchanged. As a result, a particle traveling along the r-coordinate in a squashed fuzzy sphere is not affected by NC, and the exact energy dispersion relation is expressed in terms of p_r, the momentum of the particle in the radial direction. In the limit N ≫ 1, the above expression reduces to E^2 = p_r^2 c^2 + m^2 c^4 (1 + 2νθ_D) (Equation 12) [56], where θ_D = 2ħ/(m^2c^2k) and ν = 0, 1, 2, ... indexes the discrete levels. It is noticeable that this expression is very similar to the dispersion relation of Landau levels in the presence of a magnetic field. If the magnetic field is present along the z-direction with strength B, the energy dispersion relation for an electron with mass m_e is given by [54] E^2 = p_z^2 c^2 + m_e^2 c^4 (1 + 2νB/B_c) (Equation 13), where p_z is the momentum of the electron along the z-direction and B_c = m_e^2 c^3/(ħe) is the critical magnetic field (Schwinger limit), with e being the charge of an electron. Comparing Equations (12) and (13), we see that θ_D plays the role of B/B_c (Equation 14). Hence, in a squashed fuzzy sphere, k^-1 behaves as the strength of NC. A detailed discussion on the equivalence of magnetic field and NC was given by Kalita et al. [56].

Equation (12) provides the energy dispersion relation of one squashed fuzzy sphere, inside which k is constant. If we consider a sequence of concentric squashed fuzzy spheres with the same N, from Equation (4) we have k ∝ r^2, i.e. k increases and thus the strength of NC reduces from the center to the surface. As a result, all concentric spheres with a radius greater than r contribute to the effective NC at a point with radius r. From Equation (6), it is evident that NC vanishes at the surface.

Noncommutative equation of state for degenerate particles

In this section, we first discuss the commutative cases. In 1935, Chandrasekhar provided the EoS for degenerate electrons [20]. This EoS is valid for a system whose density is less than the neutron drip density (approximately 3.18 × 10^11 g cm^-3), above which neutrons also start contributing to the degenerate pressure. Harrison and Wheeler (hereinafter HW), in 1958, provided an EoS considering a semi-empirical mass formula, which is valid even at densities higher than the neutron drip density. Denoting ρ as the matter density and P the total pressure, the HW EoS is given by [1] ρ = [n_ion M(A, Z) + ε_e(n_e) − n_e m_e c^2 + ε_n(n_n)]/c^2 and P = P_e + P_n, where ε_n is the energy density of neutrons and ε_e is the same for electrons. Similarly, P_e and P_n are respectively the pressures due to electrons and neutrons.
Here n_e, n_n, and n_ion are the number densities of electrons, neutrons, and ions respectively, while M(A, Z) is the energy of a nucleus with mass number A and atomic number Z. In commutative physics, where E^2 = p^2c^2 + m^2c^4 holds good, the pressures and energy densities take their standard degenerate-gas forms, in which λ_e = ħ/(m_e c), λ_n = ħ/(m_n c), and λ_p = ħ/(m_p c) are the reduced Compton wavelengths of the electron, neutron, and proton respectively, with m_n being the mass of a neutron and m_p the mass of a proton. Moreover, x_F,e = p_F,e/(m_e c), x_F,n = p_F,n/(m_n c), and x_F,p = p_F,p/(m_p c), with p_F,e, p_F,n, and p_F,p being the Fermi momenta of the electron, neutron, and proton respectively. This EoS can explain physics beyond the neutron drip density regime. However, above 4.54 × 10^12 g cm^-3, the neutrons contribute most of the pressure and density. Hence, beyond this density, HW used the n-p-e EoS, where neutrons, protons, and electrons are considered to be degenerate and non-interacting. In the commutative picture, the n-p-e EoS is given by [1] P = P_e + P_n + P_p. The HW and n-p-e EoSs together provide the pressure-density relation of the non-interacting degenerate particles.

In NC, these EoSs are expected to be modified. Vishal and Mukhopadhyay earlier derived a modified HW EoS of degenerate particles in the presence of a constant magnetic field [57]. Later, to study the effect of varying NC on a degenerate electron gas, we obtained a relation of the form θ_D = ξ x_F,e^2 [56], where ξ is a dimensionless proportionality constant. The dependency θ_D ∝ n_e^{2/3} is required to match the modified EoS with the Chandrasekhar EoS at a low density where NC does not have any significant influence. Thus we obtained the modified EoS for degenerate electrons when all the electrons reside in the ground level [55,56]; in it, µ_e is the mean molecular weight per electron and E_F,e is the Fermi energy of electrons, which is related to p_F,e as E_F,e^2 = p_F,e^2 c^2 + m_e^2 c^4 (1 + 2νθ_D) (Equation 22). Since, for the present purpose, we require the modified HW and n-p-e EoSs in the presence of NC, we also assume a similar form of pressure-density relation, except that the various properties of the electron are now replaced by the same for the corresponding particle. After doing some simplifications using Equations (19) and (21), we obtain a common relation (Equation 23). Note that we do not put any particle subscript in this equation, which means that it is valid for electrons, protons, and neutrons. We further denote the NC parameters of the neutron, proton, and electron as θ_D,n, θ_D,p, and θ_D,e respectively. Thus the modified HW and n-p-e EoSs are given by the same expressions as Equations (15) and (18), except that the pressures and energy densities of the respective particles are modified accordingly. We already showed that if all the electrons reside only in the ground energy level, we require ξ_e ≈ 1.5 to match the noncommutative EoS with the Chandrasekhar EoS at low density [55]. However, the corresponding parameters for the neutron and proton (ξ_n and ξ_p) remain arbitrary. We choose ξ_n and ξ_p in such a way that the maximum mass of the NS in the mass-radius curve is above 2 M⊙, which we discuss in the next section. Thereby we calculate both the noncommutative HW and n-p-e EoSs when all the particles are in their respective ground levels (see Figure 2). Note that the neutron drip density changes in the presence of NC, which was also shown earlier in the presence of strong magnetic fields forming Landau levels [57].
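For reference, the standard commutative degenerate Fermi-gas expressions referred to at the start of this section (whose displayed forms are missing from the extracted text) take the well-known form below, shown here for electrons; the neutron and proton expressions follow by substituting the corresponding masses, Compton wavelengths, and Fermi parameters. These are the textbook formulas (e.g., Shapiro & Teukolsky), not a verbatim copy of the paper's equations.

```latex
P_e = \frac{m_e c^2}{8\pi^2 \lambda_e^3}
\left[ x_{F,e}\sqrt{1+x_{F,e}^2}\left(\tfrac{2}{3}x_{F,e}^2 - 1\right)
      + \ln\!\left(x_{F,e} + \sqrt{1+x_{F,e}^2}\right) \right],
\\
\varepsilon_e = \frac{m_e c^2}{8\pi^2 \lambda_e^3}
\left[ x_{F,e}\sqrt{1+x_{F,e}^2}\left(1 + 2x_{F,e}^2\right)
      - \ln\!\left(x_{F,e} + \sqrt{1+x_{F,e}^2}\right) \right],
\qquad
n_e = \frac{x_{F,e}^3}{3\pi^2 \lambda_e^3}.
```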
Mass-radius relation of noncommutativity inspired white dwarfs and neutron stars

We assume a semi-classical approach to obtain the mass-radius relations for WDs and NSs. In other words, we use the classical pressure balance and mass estimate equations (also known as the Tolman-Oppenheimer-Volkoff or TOV equations) while the EoS is governed by the NC. The TOV equations are given by [58] dP/dr = −G(ρ + P/c^2)(M + 4πr^3 P/c^2)/[r^2(1 − 2GM/rc^2)] and dM/dr = 4πr^2 ρ, where M is the mass of the star inside a volume of radius r and G is the Newtonian gravitational constant. We earlier showed that NC is prominent when the inter-particle separation is less than the Compton wavelength of the respective particles [55,56]. When we consider the hydrostatic balance equations for the entire star having a macroscopic size, the length-scale of the stellar fluid is much larger than the Compton wavelength of the constituent particles. Thus, the TOV equations remain commutative in the semi-classical limit. Furthermore, when all the electrons reside in the ground energy level, we already found the mass-radius curve earlier [55,56], and for recapitulation, we display it again in Figure 3. It is evident that NC inspired WDs can possess more mass than the conventional WDs following the Heisenberg algebra. The maximum mass of such a non-rotating WD is estimated to be around 2.6 M⊙, explaining the origins of many over-luminous SNe Ia. In the case of a NS, ρ_c is high, and we employ a combination of the HW and n-p-e EoSs to derive its mass-radius relation, as illustrated in Figure 4. In the commutative picture, the maximum mass turns out to be just 0.7 M⊙, while it is increased to about 2 M⊙ in the case of NC, which is supported by the observations of massive pulsars. However, the radius increases to 20 km in this situation, which is almost ruled out by existing GW observations [59][60][61]. Note that the relation θ_D ∝ x_F^2 in Equation (23) is valid for electrons, and we extrapolate it to neutrons and protons too. If we choose a different dependency of θ_D on x_F, the EoS alters and so does the mass-radius relation for the NS. Figure 5 depicts several mass-radius relations for various powers of x_F. It is evident that as the power decreases, the radius at maximum mass falls as well, and when θ_D ∝ √x_F, the maximum mass is about 2.08 M⊙ with a radius of 12 km. These masses and radii obey the observational bounds of NSs, and hence such an EoS is a realistic one.

Conclusions

For a long time, scientists have been fascinated by the possibility of massive WDs and NSs from several direct or indirect observations. Various ideas, such as magnetic fields and rotation, modified gravity, etc., have been thoroughly investigated in recent years. Rotation can explain massive NSs, but it alone fails to elucidate the massive WDs with masses of about 2.8 M⊙. High magnetic fields can, in principle, explain both these massive objects. Nonetheless, the maximum field that a compact object can possess is always a source of contention. Similarly, despite the fact that modified gravity can explain such high masses, it has so far been impossible to identify the most appropriate one from the hundreds of such modified gravity models. In this regard, each of these theories suffers its own limitations. In the context of astronomical objects, the concept of NC is relatively new. With the exception of a few applications on black holes and wormholes, it has received little attention in astrophysics. We earlier self-consistently used NC for the first time to explain the super-Chandrasekhar WDs [55,56].
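As an aside, the semi-classical mass-radius construction described in Section 4 above (the TOV equations integrated outward with a tabulated EoS) can be sketched as follows. This is an illustration only, in CGS units, assuming a hypothetical monotonic EoS table "eos_table.dat"; it is not the authors' code.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

G, c = 6.674e-8, 2.998e10                  # CGS units

# Hypothetical tabulated EoS: density rho (g/cm^3) and pressure P (dyn/cm^2), both increasing
rho_tab, P_tab = np.loadtxt("eos_table.dat", unpack=True)
P_of_rho = interp1d(rho_tab, P_tab)
rho_of_P = interp1d(P_tab, rho_tab, bounds_error=False,
                    fill_value=(rho_tab[0], rho_tab[-1]))

def tov_rhs(r, y):
    """Right-hand side of the TOV equations; y = (P, M)."""
    P, M = y
    rho = rho_of_P(P)
    dPdr = -G * (rho + P / c**2) * (M + 4 * np.pi * r**3 * P / c**2) \
           / (r**2 * (1 - 2 * G * M / (r * c**2)))
    dMdr = 4 * np.pi * r**2 * rho
    return [dPdr, dMdr]

def mass_radius(rho_c, r_max=5e6):
    """Integrate outward from a small seed radius until the pressure effectively vanishes."""
    r0 = 1.0                                          # cm, small seed to avoid r = 0
    P0 = float(P_of_rho(rho_c))
    M0 = 4.0 / 3.0 * np.pi * r0**3 * rho_c
    surface = lambda r, y: y[0] - 1e-10 * P0          # stellar surface: P has dropped ~10 dex
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, (r0, r_max), [P0, M0], events=surface, max_step=1e4)
    return sol.t[-1] / 1e5, sol.y[1, -1] / 1.989e33   # radius in km, mass in solar masses
```

Scanning mass_radius over a range of central densities traces out the mass-radius curve, and the maximum of M(ρ_c) gives the limiting mass for that EoS.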
We first employed a basic planar NC model and later used a squashed fuzzy sphere model to modify the EoS of the degenerate electrons present in a WD. This modification leads to increasing the mass of a WD. If the electrons solely occupy the ground energy level, i.e. NC is the strongest, the new mass-limit of WD turns out to be about 2.6 M . As NC weakens and electrons occupy higher energy levels, this mass-limit decreases. It is to be noted that the effect of NC is only prominent at sufficiently high densities and negligible at low densities. Hence, our model supports the observed bigger WDs, which generally have very low densities, and it does not violate any observable at such low densities. We have already established that the strength of NC depends on the length scale of the system. If the inter-particle separation distance is smaller than the Compton wavelength of the corresponding particle, NC starts becoming prominent [55]. Furthermore, NC does not have any classical effect, unlike magnetic fields (i.e. field pressure, tension, etc.), and hence the problem of instabilities that occurred in magnetic fields does not arise in the case of NC, making the NC model preferable over that of magnetic fields. In this article, we have extrapolated NC to higher densities and investigated for its effect on the structure of NSs. For simplicity, we have only considered the effects of neutrons, protons, and electrons and assumed they are non-interacting. In commutative physics, it is well known that such an EoS gives the maximum mass of a NS to be about 0.7 M [1]. However, current observations demand the maximum mass of a non-rotating NS has to be at least 2 M [5,12,13]. Once we introduce NC, we have found that even such non-interacting particles can constitute an EoS which generates NS with a mass of about 2.1 M and radius 12 km. Such an EoS is perfectly legitimate with the current observation constraints. Note that we have only considered the case where all the particles are in their respective ground energy levels, which is the scenario for the strongest NC. One can, in principle, consider higher occupancy in the energy levels. However, it only reduces the mass of the NS, as we have seen in the case of WDs [56], and the maximum mass could fall below 2 M and those cases would be unrealistic. In such instances, one must account for the interactions that may occur between the various particles at these high densities; which however is beyond the scope of this paper. In such cases, even the EoSs, which are considered non-physical, might not be ruled out if they are affected by NC. In the future, GW observations may detect numerous massive WDs and NSs [33,62,63], allowing us to constrain more EoSs and examine the NC effect on these compact objects more closely. If observed masses and radii of WDs and NSs follow the respective predicted mass-radius relations based on NC, it would be a direct confirmation for the existence of NC at scales far away from the Planck scale.
5,910.8
2022-07-15T00:00:00.000
[ "Physics" ]
TEMPLATES: A Robust Outlier Rejection Method for JWST/NIRSpec Integral Field Spectroscopy

We describe a custom outlier rejection algorithm for JWST/NIRSpec integral field spectroscopy. This method uses a layered sigma clipping approach that adapts clipping thresholds based upon the spatial profile of the science target. We find that this algorithm produces a robust outlier rejection while simultaneously preserving the signal of the science target. Originally developed as a response to unsatisfactory initial performance of the jwst pipeline outlier detection step, this method works either as a standalone solution, or as a supplement to the current pipeline software. Comparing leftover (i.e., not flagged) artifacts with the current pipeline's outlier detection step, we find that our method results in one fifth as many residual artifacts as the jwst pipeline. However, we find a combination of both methods removes nearly all artifacts—an approach that takes advantage of both our algorithm's robust outlier rejection and the pipeline's use of individual dithers. This combined approach is what the TEMPLATES Early Release Science team has converged upon for our NIRSpec observations. Finally, we publicly release the code and Jupyter notebooks for the custom outlier rejection algorithm.

INTRODUCTION

One of the transformative new capabilities of JWST (Gardner et al. 2023; Rigby et al. 2023) is integral field spectroscopy (IFS) that captures spatially-resolved spectra of distant galaxies with the NIRSpec (Böker et al. 2023) and MIRI (Wright et al. 2023) science instruments. In the first 1.5 years of JWST science, the user community has had to learn best practices for calibrating and removing instrumental signatures from these data, including expected issues like cosmic rays and surprising issues like residual pattern noise (Rauscher 2023). Indeed, the Early Release Science (ERS) teams were charged with testing existing data processing software (such as the STScI jwst pipeline; Bushouse et al. 2023) and developing resources to work around problems in the existing data reduction pipeline.

One such problem arose in a critical step in the final stage of the jwst calibration pipeline: the outlier rejection step. In this step, the pipeline checks for and removes artifacts (outliers, primarily from cosmic rays) present in the individual dithers such that the final three-dimensional (3D) cube is clean of these features. However, during the first year of observations, this step did not work as intended, aggressively removing outliers to the extent that the step also removed real flux from the science targets (jwst calibration pipeline versions 1.10.2 and earlier; e.g., Perna et al. 2023; Veilleux et al. 2023; Marshall et al. 2023; Vayner et al. 2023). In response, early teams developed alternative processing for this critical step (e.g., running lacosmic, van Dokkum 2001, on individual dithers and post-processing the final cube using uniform sigma clipping; see Perna et al. 2023, among others).

Part of the challenge of working with early science data from JWST has been adapting to rapid changes in the jwst pipeline algorithms and calibration files. One year into the JWST science mission, the pipeline and calibration files were updated such that the outlier rejection step no longer removes flux from the science targets (jwst calibration pipeline versions 1.11.3 and later). However, crucially, the pipeline algorithm is still not catching some outliers.
In this paper, we present a custom outlier rejection algorithm designed to robustly clip the numerous outliers present in the NIRSpec integral field unit (IFU) data, which are largely produced by cosmic rays.This custom algorithm employs a layered sigma clipping treatment of the 3D data cube, using the signal-to-noise (S/N) spatial profile of the science target to clip outliers within S/N layers at each wavelength slice.This approach removes outliers while preserving the signal of the science targets, a method that is effective even in the presence of very bright emission lines. We test the effectiveness of a) our algorithm alone, b) the current jwst pipeline outlier detection step, and c) a hybrid approach where we use the jwst pipeline and our custom outlier rejection algorithm to produce results that are superior to the pipeline's alone while taking advantage of the pipeline's use of dithers. This paper is organized as follows.In §2 we describe the process of generating a pixel mask to identify the spaxels associated with the science target, defining mask layers for use in the custom outlier rejection algorithm, and the three-part outlier rejection algorithm itself.In §3, we discuss the results of this algorithm on JWST data from the Early Release Science (ERS) program TEMPLATES (PID: 1355; PI Rigby, Co-PI Vieira) and analyze comparisons between this algorithm and the updated pipeline outlier rejection step.We summarize this work in §4, and highlight alternative potential uses and approaches for this algorithm. THE ALGORITHM The algorithm described in this work is a custom outlier rejection method designed to robustly flag and remove the numerous outliers present in JWST/NIRSpec IFU data.We note that the algorithm does require some manual customization, but it produces one of the lowest rates of leftover outliers in the resulting data cubes.Our method works as follows: • Sorts spatial pixels as science target pixels or not, based upon the S/N of some spectral feature in the science target. • Separates science target into layers (bins) of S/N. • Sigma clips non-target pixels uniformly for each wavelength slice, replacing flagged pixels with the median of the non-target pixels in that slice. • Separately sigma clips science target pixels in each S/N layer for each wavelength slice, replacing flagged pixels with the median of that S/N layer. • Combines science target and non-target pixels back together into a final post-processed IFU cube. This algorithm works on the fully reduced level 3 data cube, which is science-ready except for the identification and removal of outlier pixels.By contrast, the jwst pipeline performs outlier rejection on the level 2 calibrated data products.It uses the sampling of the same piece of the sky multiple times, through dithering, to identify outliers.The advantage of the pipeline's method is that it works on the individual dithers, before the cube-building step in the pipeline.It therefore should preserve spatial information; however, such an algorithm needs a very good astrometric solution, which was not initially available.Additionally, as described above, the changing nature of the effectiveness of this step in the pipeline over time made it an unreliable method when processing IFS observations within the first year of JWST. We developed this algorithm in the TEMPLATES ERS collaboration to replace the outlier rejection step for the reasons outlined above (e.g., Birkin et al. 
2023).We continue to use this algorithm in tandem with the current outlier rejection step in the pipeline to achieve the cleanest data products possible (see §3 for the motivation for this combined choice, for jwst pipeline versions 1.11.3 and later). We now detail each part of the method.For visualization purposes, we show as an example JWST/NIRSpec IFS observations of SGAS1723+34, a bright, highlymagnified galaxy at z = 1.3293 from the TEMPLATES program (Rigby et al., submitted).Unless otherwise specified, the IFS data are reduced using calibration reference files under calibration reference data system pipeline mapping (CRDS, pmap) 1105 and jwst calibration pipeline version 1.11.3.Finally, we set a preliminary threshold value that is 0.5-1 orders of magnitude above the highest pixel value of the science target (identified in the individual cal.fits files) in order to remove the most egregious outliers before the cube-building step in the final stage of the pipeline. Generating Masks Before the outlier rejection algorithm can be run, a necessary first step is to divide all of the spaxels into those associated with the science target and those that are not (i.e., sky).From there, we further separate the target spaxels by the spatial profile of the S/N of the target.By generating these "layers" of S/N for a given target, we are able to efficiently flag and remove outliers from the data while preserving real signal from the science target.We describe the different mask-making steps in detail in this section. Creating the Initial Science Target Pixel Mask In order to divide the science target spaxels from the sky spaxels, we create a S/N map of the data cube based upon an identified bright spectral feature.This step is critical, as the sky and science target spaxels are treated separately in the layered sigma clipping routine.For this work, we use the brightest emission line (or a blend of multiple emission lines) to generate the S/N mask.However, we note that a similar S/N map could be made by combining multiple emission lines (in the event that they trace significantly different regions of the science target) and/or creating a continuum-based S/N map by collapsing the entire cube into one map with all emission lines masked (in the event that there are no emission lines or that they trace different regions than the continuum). 
To make the S/N mask using an emission line, we take IFU slices covering a small wavelength range of ∼700-800 km/s centered on the brightest line of the cube (or lines, depending upon the spectral resolution of the data), which has been manually identified. Collapsing the selected slices in the spectral direction for both the signal and uncertainty cubes from the calibration pipeline (summing in quadrature for the uncertainty array), we generate a two-dimensional (2D) S/N map of the emission line. From this map, we remove every pixel below a minimum threshold of S/N = 3. This creates the initial science target pixel mask. We use this approach to make the map, instead of fitting a Gaussian to the bright emission line(s) in each of the spaxels, to prevent outliers in the data from biasing a given emission line fit. The result of the initial science target pixel mask is shown in Figure 1a, using the bright lensed Lyman-break galaxy SGAS1723+34 from the TEMPLATES program. For this galaxy, we used the [O iii] λ5008 emission line to generate the S/N map shown. The straightforward S/N map, created from a series of IFU slices around the bright [O iii] feature, clearly maps out the spatial light from the target galaxy. However, there exist additional pixels that passed the S/N threshold cut which are not obviously associated with the target. These higher S/N sky pixels can come from surrounding sources, artifacts biasing the S/N, etc., and should be removed to create a clean science target pixel mask. We address the fine-tuning required for the science target pixel mask in the following section.

Fine-Tuning the Target Pixel Mask

As evidenced by Figure 1a, additional steps are required to finalize the target pixel mask in order to remove sky spaxels that appear to have high S/N (either artificially high or from a nearby source). Using a pixel masking code to overlay regions on our initial target pixel mask, we visually identify spaxels that are clearly not associated with the target and remove them from the mask. When necessary, we also visually inspected some spaxel spectra to verify whether the spaxel included light from the science target or not. Figure 1b shows an example of the result of fine-tuning the pixel mask, where we have removed the majority of the sparse spaxels on the outskirts of the 2D map. As part of this process, there may be artifacts present in the summed slices that bias the S/N of certain spaxels, making them appear to have more S/N than they truly would. This is evidenced by the few yellow and green pixels present on the upper right edge of the fine-tuned target pixel mask in Figure 1b. The values for these pixels are not important for this step in the mask-making process (as we have already identified these spaxels as associated with the target galaxy), but we will address this in the following section. Finally, it is important to note that there may be faint signal that reaches out past the science target pixel mask made in this step. The layered sigma clipping routine described in §2.2.2 will not negatively affect galaxy light that faint (where the brightest line(s) are S/N < 3, for example), and therefore science relevant to such diffuse light should be safe using this algorithm. After converging on a satisfactory fine-tuned science target pixel mask, we convert it into a binary mask that will be used in the algorithm (in §2.2).
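As a rough illustration of this S/N-map construction (not the released TEMPLATES code; the file name and slice indices below are placeholders, and the 'SCI'/'ERR' extension names are those commonly used in level 3 s3d products):

```python
import numpy as np
from astropy.io import fits

# Build an emission-line S/N map from a level 3 NIRSpec IFU cube (illustrative only).
with fits.open("sgas1723_nirspec_s3d.fits") as hdul:     # hypothetical file name
    sci = hdul["SCI"].data       # shape: (wavelength, y, x)
    err = hdul["ERR"].data

# Wavelength slices spanning ~700-800 km/s around the brightest emission line,
# identified manually beforehand (indices here are placeholders).
lo, hi = 940, 952

signal = np.nansum(sci[lo:hi], axis=0)
noise = np.sqrt(np.nansum(err[lo:hi] ** 2, axis=0))      # quadrature sum of uncertainties
snr_map = np.divide(signal, noise, out=np.zeros_like(signal), where=noise > 0)

# Initial science-target pixel mask: everything above the minimum threshold S/N = 3
target_mask = snr_map >= 3.0
```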
Splitting Mask into S/N Layers Once satisfied with the science target pixel mask, we create the mask layers that will be used in the layered sigma clipping process.We split the associated pixels into layers that are defined as bins of S/N.To make these layers, we define three to four bins in the S/N map of the science target pixels.There should be two goals when choosing the number of S/N bins for a given target: 1. Avoid too large a S/N range for a given bin to prevent real signal from being clipped in that layer (this is more critical for the lower S/N bins). 2. Have an adequate number of pixels in each bin to ensure that high-value artifacts will be clipped properly during the layered sigma clipping step (a good goal is at least 8-10 pixels in each S/N layer, so the sigma clipping performs well). Figure 1c shows an example of the S/N bins used for the same TEMPLATES lensed galaxy, with 2D contours defining the S/N bins used for the science target.The [O iii] emission line is incredibly bright towards the center of this lensed galaxy, with a peak S/N > 300.For bright extended sources whose surface brightness spans a large range in S/N across the galaxy, we recommend four S/N bins (in order to protect the brightest spaxels); for fainter sources with a smaller range, we recommend three bins. The number of bins used (and the S/N range spanned in each bin) will be unique to each target processedadjustments of the S/N ranges and bin sizes may be necessary more than once during the layered sigma clipping process (see additional discussion about this iterative adjustment in §2.2.2).For our work, it was common to iterate a few times between defining the target mask layers and running the layered sigma clipping routine to ensure that we achieved the best result possible. Slight adjustments to a higher or lower S/N bin may be necessary for some individual pixels.As referenced in the previous section, this depends upon the S/N of a given pixel and whether or not those values are driven by artifacts present in the summed IFU slices used in the original S/N map.Using the example pixels mentioned in §2.1.2,the yellow pixels located in the upper right portion of the S/N map would fall in a higher S/N bin.However, upon inspecting their 1D spectra, it is clear that the actual emission line in each spectrum is as faint as the spaxels around them.Therefore, these pixels should be identified and manually moved to lower S/N bins. Additionally, the edges of the IFU cube have very high noise due to less coverage at the edges from dithering.Therefore the edge pixels would be down-weighted in the 2D S/N map generated in the previous steps.If the science target approaches or goes over the edge of the IFU field of view (as is the case for our example galaxy), these down-weighted pixels will need to be lifted to higher S/N bins to properly preserve the actual signal.In practice, we apply a manual pixel adjustment step to several pixels for each source.It is advantageous to go slowly through this step to ensure that the S/N bin layers, and their associated pixels, are as accurate as possible (and ideally, the pixel adjustment step is only required once per source). Once we identify the S/N bins for a given source, we convert the pixels in each bin into Boolean masks that will be used in the layered sigma clipping routine.Running the outlier rejection method on each of the individ-ual layers ensures that the real signal from the science target is preserved while robustly clipping outliers. 
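Continuing the illustrative sketch from §2.1.1 (again, not the released code): the S/N bin edges below are examples that would be tuned per target, and, as described above, individual spaxels can be manually reassigned between layers after inspection.

```python
import numpy as np

# Split the target spaxels into S/N layers (Boolean masks), using the snr_map and
# target_mask from the earlier sketch. Bin edges are illustrative placeholders.
bin_edges = [3, 10, 50, 150, np.inf]

layer_masks = []
for lo_snr, hi_snr in zip(bin_edges[:-1], bin_edges[1:]):
    layer = target_mask & (snr_map >= lo_snr) & (snr_map < hi_snr)
    layer_masks.append(layer)

# Sanity checks: every target spaxel belongs to exactly one layer, and each layer
# should contain enough pixels (roughly 8-10 or more) for sigma clipping to behave well.
assert np.array_equal(np.sum(layer_masks, axis=0).astype(bool), target_mask)
for i, layer in enumerate(layer_masks):
    print(f"layer {i}: {layer.sum()} spaxels")
```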
Custom Outlier Rejection Here we describe the three-part custom outlier rejection algorithm, separated into three steps to allow the user to adjust the input masks as needed and check the functioning of the code along the way.The process is divided as follows: 1) a general sigma clipping of the sky spaxels, 2) a layered sigma clipping of the target spaxels using the target pixel mask layers created in §2.1.3,and 3) combining the two separately-clipped pixel regions into one final cube. Clipping the Sky Spaxels This step of the custom outlier rejection algorithm is straightforward.Using the full science target pixel mask created in §2.1.1,we mask out the spaxels associated with the target.Next, choosing a slice in the cube that has some amount of outliers clearly present, we identify by eye four benchmark pixels that are used to check the clipping process.We recommend defining at least one benchmark pixel as a "normal" pixel, while the rest can be located on various outlier pixels in that slice.The spectra from these pixels are plotted before and after the clipping process to verify that artifacts are being properly flagged, while "normal" signal is unaffected. The clipping part of this step walks through the cube slice-by-slice, masking out the target pixels and the non-IFU pixels (present from to the rotation of the cube due to observing position angle).Next, using the sigma clip function from astropy (which removes values above and below a specified standard deviation threshold), we clip the sky pixels in the slice using σ = 5 and max iterations = 5.The clipped pixels from this step are masked and replaced with the median of the unmasked sky pixels.For users who may not want the clipped pixel values replaced, we also log the masked pixels (as a separate FITS extension) so that the user can track and/or remove the replaced values as needed. Finally, we compare the spectra of the four benchmark pixels to check the result of the sigma clipping.The clipped cube and associated log of clipped pixels are saved to a multi-extension FITS file, to be read in for the final step of the outlier algorithm ( §2.2.3). Layered Clipping of the Target Spaxels This step is the most hands-on of the custom outlier rejection algorithm.Similar to the previous step, we choose a different set of four benchmark pixels, located across the target, for use in checking the robustness of the layered sigma clipping step in the algorithm.When identifying the four benchmark pixels, we recommend choosing locations that cover a range in S/N for the target (from low S/N to high S/N). 3In addition to the four benchmark pixels, we also compare before and after clipping of a 2D benchmark slice in the cube as additional validation that the algorithm is removing artifacts while not affecting actual signal. 
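The per-slice clipping performed in §2.2.1 above, and elaborated for the target layers in §2.2.2 below, could be sketched roughly as follows. This is a minimal illustration rather than the released TEMPLATES code; it reuses the sci cube, target_mask, and layer_masks from the earlier sketches and astropy's sigma_clip.

```python
import numpy as np
from astropy.stats import sigma_clip

clipped = sci.copy()
flag_log = np.zeros(sci.shape, dtype=bool)         # per-slice log of clipped pixels

for i in range(sci.shape[0]):                       # walk through the cube slice by slice
    plane = clipped[i]

    # (1) Sky spaxels: uniform 5-sigma clip, replace with the sky median of this slice
    sky = np.where(~target_mask, plane, np.nan)
    sky_clipped = sigma_clip(sky, sigma=5, maxiters=5, masked=True)
    bad_sky = sky_clipped.mask & ~target_mask & np.isfinite(plane)
    plane[bad_sky] = np.nanmedian(sky_clipped.filled(np.nan))
    flag_log[i] |= bad_sky

    # (2) Target spaxels: clip each S/N layer separately, replace with that layer's median
    for layer in layer_masks:
        vals = np.where(layer, plane, np.nan)
        layer_clipped = sigma_clip(vals, sigma=5, maxiters=5, masked=True)
        bad = layer_clipped.mask & layer & np.isfinite(plane)
        plane[bad] = np.nanmedian(layer_clipped.filled(np.nan))
        flag_log[i] |= bad
```

In the actual routine the flagged-pixel log is written out as a separate FITS extension, so users who prefer flagging without replacement can undo the median substitution.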
Layered Clipping of the Target Spaxels

This step is the most hands-on part of the custom outlier rejection algorithm. Similar to the previous step, we choose a different set of four benchmark pixels, located across the target, for use in checking the robustness of the layered sigma clipping step in the algorithm. When identifying the four benchmark pixels, we recommend choosing locations that cover a range in S/N for the target (from low S/N to high S/N). In addition to the four benchmark pixels, we also compare a 2D benchmark slice of the cube before and after clipping as additional validation that the algorithm is removing artifacts while not affecting actual signal.

At this stage, the code again runs through the cube slice-by-slice, masking out the sky pixels. For each slice, we run through each of the S/N mask layers (defined in §2.1.3), setting all of the target pixels not in that layer to NaNs. Using the sigma clip function with the same parameters as the previous step, we clip the science target pixels in the layer, masking and replacing the clipped pixels with the median of the unmasked science target pixels from the same layer. Clipping in this manner ensures that, for example, the higher S/N pixels are compared and clipped only with other high S/N pixels, such that real signal is preserved as much as possible while outliers are efficiently removed. We repeat this step for each S/N mask layer. Finally, we add the clipped layers back together as the final, clipped slice for the target pixels. In the same manner as the previous step, we log the pixels that are clipped in each slice. The entire process is then repeated for each slice of the cube.

Figure 2 shows an example of the result of the layered clipping step, with the pre-defined S/N pixel mask layers from §2.1.3 shown in the left panel. This visualization of the S/N layers highlights the layered clipping effect, where in each slice the code runs through the sigma clipping of each layer individually. The middle and right panels of Figure 2 show the result for a benchmark IFU slice, chosen to illustrate the layered sigma clipping step on an area of the target that covers a range of S/N (or, more than one S/N mask layer).

The result of the second step in the algorithm shows a clean removal of outliers while preserving the signal from the science target. While this step does spatially interpolate to replace the flagged value, we argue that this method of replacing the flagged science target pixels using the S/N layers, for each wavelength slice, is a reasonable approximation due to the definition of the S/N layers themselves. They are defined such that the pixels flagged in each layer, for a given wavelength slice, are compared with (and replaced by) only those with similar S/N values (and therefore relatively similar flux densities). Additionally, in practice we have seen outliers impact at most only two (and in rare instances, three) consecutive wavelength slices for a given spaxel, while spectral features of the science target such as emission lines generally span many more wavelength slices. Therefore, if this algorithm's approach happens to poorly replace flagged signal for a given spaxel, it should be evident with little effort. Additionally, for spatially-integrated science using IFS data, if there are missing values then the integrated flux measurements will be artificially low (requiring some method of correcting for the percentage of flagged pixels). However, if the user prefers to flag but not replace pixels, we include the log of clipped pixels (for each wavelength slice) as part of the output of the algorithm for this purpose.

Upon completion of this step, we visually compare the four benchmark pixels to measure how well the layered sigma clipping procedure performed. These comparisons and checks are vital for this step of the custom outlier rejection algorithm. Some adjustments may be necessary between the layered sigma clipping step and the previous step where we define the S/N mask layers used (§2.1.3). If the code appears to be overzealous in clipping real light, or if it is missing obviously false signal, adjusting the S/N bin ranges described in §2.1.3 will be necessary.

The clipped target cube and associated log of clipped pixels are saved to a multi-extension FITS file.
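A per-slice sketch of the layered clipping loop is given below. Again, this is an illustrative outline under assumed names rather than the released code; the essential point is that each S/N layer is clipped against itself, and pixels flagged within a layer are replaced by that layer's median.

```python
import numpy as np
from astropy.stats import sigma_clip

def clip_target_slice(slice2d, layers, sigma=5.0, maxiters=5):
    """Layered sigma clipping of the science-target pixels in one slice.

    slice2d : 2D array, one wavelength slice of the drizzled cube
    layers  : list of 2D boolean masks, one per S/N bin (from the mask-making step)
    Returns the cleaned slice and a boolean log of replaced target pixels.
    """
    out = slice2d.copy()
    flagged_log = np.zeros(slice2d.shape, dtype=bool)

    for layer in layers:
        vals = np.where(layer, slice2d, np.nan)      # keep only this layer's pixels
        clipped = sigma_clip(vals, sigma=sigma, maxiters=maxiters, masked=True)
        mask = np.ma.getmaskarray(clipped)

        flagged = mask & layer & np.isfinite(slice2d)
        layer_median = np.nanmedian(vals[~mask])     # median of this layer's surviving pixels

        out[flagged] = layer_median                  # replace flagged pixels within the layer
        flagged_log |= flagged

    return out, flagged_log
```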
Creating the Final Cubes

For the final step of the custom outlier rejection method, we combine the output from the previous two steps. In short, we combine the slices from each of the previous steps to create a final complete slice, repeating this process for the full cube. We keep this step separate from the previous steps to enable validation checks at each step in the algorithm. As part of this, we compare the original cube with our post-processed cube at various slices where we had previously identified the presence of outliers. This inspection is important to ensure that the final product is appropriately science-ready, and that real signal has been properly preserved throughout the algorithm.

Figure 3 shows such a comparison for another benchmark IFU slice from our example galaxy, chosen to highlight artifacts both on and off of the target galaxy. The right panel shows the same slice, but from the aforementioned log of clipped pixels, showing the pixels in the benchmark slice that the code flagged in both of the previous steps. We include the target pixel mask in this panel to denote the location of flagged pixels on and off of the science target.

The final clipped cube and associated log of clipped pixels are saved to a multi-extension FITS file, and are the science-ready data products produced by the algorithm.
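A minimal sketch of this merging step is shown below, again with assumed array and extension names (the released notebooks may organize the output differently): target spaxels are taken from the layered-clipped cube, sky spaxels from the sky-clipped cube, and the combined log of replaced pixels is written out as its own FITS extension.

```python
import numpy as np
from astropy.io import fits

def combine_and_save(sky_cube, target_cube, sky_log, target_log, target_mask,
                     header=None, outfile="final_clipped_cube.fits"):
    """Merge the sky-clipped and target-clipped cubes into the final cube.

    sky_cube, target_cube : 3D arrays from the two clipping steps
    sky_log, target_log   : 3D boolean arrays logging replaced pixels
    target_mask           : 2D boolean array, True for science-target pixels
    """
    final = np.where(target_mask[None, :, :], target_cube, sky_cube)
    clip_log = sky_log | target_log

    hdul = fits.HDUList([
        fits.PrimaryHDU(header=header),
        fits.ImageHDU(data=final, name="SCI"),                          # cleaned cube
        fits.ImageHDU(data=clip_log.astype(np.uint8), name="CLIPLOG"),  # replaced-pixel log
    ])
    hdul.writeto(outfile, overwrite=True)
```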
COMPARISONS WITH THE EXISTING PIPELINE

In this work, we have detailed a custom outlier rejection algorithm that we originally designed to replace the jwst outlier detection step. As previously described, this was motivated in large part by the pipeline (versions 1.10.2 and earlier) aggressively removing outliers such that it was incorrectly flagging and removing signal from bright emission lines in our science targets, resulting in stunted and oddly-shaped emission line profiles. As of jwst pipeline version 1.11.3, the outlier detection step has been improved such that it no longer flags and removes real signal from our science targets. However, artifacts still remain in the final cubes; they require additional processing to remove.

We show the utility of the outlier rejection methods and the leftover outliers present in each data cube in Figure 4. In the left panel of Figure 4, we show spatially integrated 1D spectra for both the version 1.11.3 jwst pipeline (green) and our outlier rejection algorithm (black), as well as a combination of the two (blue). For comparison, an example of IFS data with no outlier rejection applied is underplotted (faint grey) to highlight how both methods successfully flag and remove most of the outliers in the IFS data. The effect of our algorithm is striking. For the representative spectral region, we find that the version 1.11.3 pipeline's outlier detection step leaves ∼25 residual artifacts. In contrast, our algorithm leaves ∼5 residual artifacts in this same spectral region. Thus, our algorithm results in one fifth as many residual artifacts as the updated version 1.11.3 jwst pipeline. Using the combination of both methods, where first we use the version 1.11.3 pipeline's outlier detection step in stage 3 and then apply our algorithm on the final 3D cube, yields only 1 artifact remaining in the spectral region shown. Thus, combining both approaches produces data cubes with the cleanest removal of artifacts.

We visualize the total number of leftover artifacts in the right panel of Figure 4, where we show histograms of the flux density of the spatially integrated 1D spectra that are shown in the left panel. To make the comparison of leftover artifacts from each method easier to quantify, we have subtracted the continuum and masked out all emission lines in each spectrum. For clarity, we have not included the IFS data with no outlier rejection in this panel. The version 1.11.3 pipeline's outlier detection step (green) very clearly has the most remaining artifacts of the two methods, with many flux density values biased towards larger positive values. By comparison, our algorithm (black) has remaining artifacts that reach similarly high values, but there are significantly fewer of them. The differences between the two methods verify the results found in the left panel, where our algorithm catches and removes a much larger number of leftover artifacts than the version 1.11.3 pipeline itself. However, the combination of the two methods (blue) shows the tightest result, with very few values above 10⁻¹⁷ erg s⁻¹ cm⁻² Å⁻¹ for the science target.
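The artifact counts quoted above can be reproduced in spirit with a simple spike count on the continuum-subtracted, line-masked 1D spectrum. The sketch below is only illustrative: the emission-line windows, the single-value continuum estimate, and the spike threshold are placeholders, not the values used for Figure 4.

```python
import numpy as np

def count_spikes(wave, flux, line_windows, threshold):
    """Count residual artifact spikes in a spatially integrated 1D spectrum.

    wave, flux   : 1D arrays of wavelength and flux density
    line_windows : list of (lo, hi) wavelength ranges covering real emission lines
    threshold    : flux-density level above which a residual pixel counts as a spike
    """
    line_free = np.ones(flux.size, dtype=bool)
    for lo, hi in line_windows:
        line_free &= ~((wave >= lo) & (wave <= hi))   # mask out the emission lines

    continuum = np.nanmedian(flux[line_free])         # crude single-value continuum
    residual = flux - continuum

    return int(np.sum(residual[line_free] > threshold))

# e.g. apply the same line_windows and threshold to the pipeline-only and
# custom-algorithm spectra to mimic the ~25 vs ~5 artifact comparison above.
```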
From these results, we conclude that the combination of these two methods performs better than either method alone. This combined approach has the added benefit that the jwst outlier detection step works on the individual dithers, while our algorithm works on the final, drizzled three-dimensional cube. In principle, the pipeline's approach is preferable, since it removes outliers from the individual dithers before combining; with a sufficient number of dithers, the flux density of a given sky spaxel can be determined from the dithers not affected by an artifact. The uncertainty in that flux density will be greater, since the removal of the outlier effectively lowers the total integration time for that pixel, but there is no interpolation of any kind. By contrast, as our algorithm works on the final drizzled spectral cube, it necessarily interpolates spatially, replacing the flagged outliers with the median of spaxels at similar S/N and flux density levels. This philosophical difference in approach is why, for the TEMPLATES ERS program, we chose to use the pipeline's outlier detection first (which does not interpolate) and then use our algorithm on the outliers that the pipeline still misses (see Rigby et al., submitted, for more details).

SUMMARY & RELEASE OF CODE

We present a custom outlier rejection method for use with JWST/NIRSpec IFS data. This custom method employs a layered sigma clipping approach that uses the spatial S/N profile of the science target, which preserves the signal of the science target while efficiently and robustly removing outliers, primarily due to cosmic rays. Along with this paper, we release the associated algorithm code as Jupyter notebooks for use by the community. The repository for the code can be found at github.com/aibhleog/baryon-sweep (DOI: 10.5281/zenodo.8377532). This algorithm requires some manual customization; however, it leaves one of the lowest numbers of remaining outliers of which we are aware, and we therefore argue that the final product makes the effort worthwhile.

We developed this algorithm originally as a replacement for a critical step in the jwst pipeline. We compare to the current version 1.11.3 jwst pipeline step and find that our algorithm results in one fifth as many residual artifacts as the jwst pipeline. Further, we find that running the updated outlier detection in version 1.11.3 of the jwst pipeline, and then running our algorithm, produces data that are nearly completely cleaned of outliers. We use this combination for our IFS data in the TEMPLATES DD-ERS program.
Finally, an alternative approach to this algorithm could include applying the layered sigma clipping method to the individual 2D dithers (therefore working with the data before the cube-building stage, like the jwst pipeline's step). Additionally, the mask-making step of this algorithm could be altered to instead use Voronoi binning, Fourier transforms, or other such methods to separate target spaxels from non-target (or sky) spaxels. Depending upon the method used (and whether a S/N map or a flux map is utilized), such alternative methods could be more automated than the mask-making method used in this algorithm. Future improvements to the algorithm could include two-dimensional spline interpolation at each S/N layer instead of replacing the clipped values with the median of the S/N layer.

This work is based on observations made with the NASA/ESA/CSA JWST. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. We are grateful for the collective contributions of the roughly 20,000 individuals around the world who designed, built, tested, commissioned, and operate JWST.

Facilities: JWST (NIRSpec)

Software: jwst Calibration Pipeline (Bushouse et al. 2023), astropy (Astropy Collaboration et al. 2013, 2018, 2022), scipy (Virtanen et al. 2020), matplotlib (Hunter 2007), stenv (https://stenv.readthedocs.io/en/latest/), pandas (pandas development team 2020)

APPENDIX A. COMPARISON OF THE ORIGINAL & UPDATED jwst OUTLIER DETECTION STEP

During the first year of observations, the jwst pipeline's outlier detection step did not work as intended: due to an initially coarse instrument model, it was later found to be removing not only outliers present in the NIRSpec IFS data, but also real flux from the science target (jwst calibration pipeline versions 1.10.2 and earlier; e.g., Perna et al. 2023; Veilleux et al. 2023; Marshall et al. 2023; Vayner et al. 2023, among others). Figure 5 illustrates this effect for our example science target in both a 2D wavelength slice (centered on the [O iii] λ5008 emission line) and in a spatially integrated 1D spectrum zoomed in to show the Hβ λ4864 and [O iii] λ4960,5008 emission lines. The data shown are from jwst pipeline versions 1.10.2 and 1.11.3, respectively. In both views, the effect of this overly aggressive outlier detection in version 1.10.2 of the jwst pipeline is clearly visible, from the red pixels denoting NaNs in the left 2D wavelength slice to the stunted and oddly-shaped emission line profiles in the gold-colored spectrum.

As previously noted, the jwst pipeline's outlier detection step has recently (c. August 2023) been improved such that it no longer removes real signal from science targets (versions 1.11.3 and later). Results from the updated version 1.11.3 jwst pipeline's outlier detection step are shown in Figure 5, in both a 2D wavelength slice (right) and the green-colored spatially integrated 1D spectrum, where the emission lines have the correct line strengths and line profiles.

In additional testing of the updated pipeline outlier detection step on all of the TEMPLATES NIRSpec IFS data, we have found consistent results indicating that real signal is preserved with the newly updated pipeline, a significant improvement. However, the updated version 1.11.3 jwst pipeline outlier detection step does not remove all outliers (as evidenced in §3).
Figure 1. The two-step process in making a layered target pixel mask, to use as input in the custom outlier rejection algorithm. (top) Step 1 of the process, where (a) we generate an initial target pixel mask from the S/N map of a series of slices around a bright feature (such as an emission line), and (b) we fine-tune the mask to remove pixels that are clearly not associated with the target. (bottom) Step 2 of the process, where (c) we define S/N bins for our initial target pixel mask such that there are a reasonable number of pixels in each bin while not covering too large a range of S/N.

Figure 2. A benchmark slice from the reduced IFU cube for the same target galaxy, chosen to show the layered sigma clipping step of the custom outlier rejection algorithm. (left) A view of the S/N mask layers of the target pixel mask, used to clip the target pixels in each slice as a function of mask layer in order to preserve the real signal from the science target while removing outliers. (middle) The slice from the fully-reduced cube with no outlier detection used, showing an artifact on the bottom right corner of the science target pixels. (right) The same slice after applying the layered sigma clipping step, showing the bright yellow pixels removed. For both the middle and right panels, we have shaded out the sky spaxels to emphasize the science target spaxels where the layered step is applied.

Figure 3. A view of a different benchmark slice in the reduced IFU cube for the same target galaxy, chosen to highlight the efficacy of the custom outlier rejection algorithm. (left) The slice from the fully-reduced cube with no outlier detection used, showing artifacts in the form of bright yellow pixels. (middle) The same slice after applying the custom outlier rejection algorithm, showing the artifacts removed. (right) The pixel logging extension included in the code, to track which pixels in each slice have been clipped by this method (blue) with the target pixel mask underlaid (grey).

Figure 4. (left) A comparison of our custom outlier rejection algorithm (black), the updated outlier detection step in the version 1.11.3 jwst pipeline (green), and a combination of both methods (blue) for the G140H grating for our example galaxy. The spectra shown are spatially integrated from the target spaxels for the different outlier rejection codes. We have arbitrarily chosen this representative spectral region to highlight the leftover artifacts present in the data after each outlier rejection method. Additionally, as a reference for the utility of both our algorithm and the updated jwst pipeline in removing most artifacts, underlaid in this panel we have included an example using no outlier rejection (faint grey). (right) A histogram of the flux density of the three co-added spectra (green, blue, black) demonstrating the overall number of artifacts leftover from the different algorithms. The spectra used in this histogram have been continuum-subtracted with the emission lines masked out in order to highlight the artifact spikes still present in the data.

Figure 5. A visualization of an issue in a critical step in the jwst pipeline that motivated a large part of this work. Plotted is a zoom-in on emission lines in a spatially integrated spectrum of science target SGAS1723+34, showing the difference between the outlier detection step in the jwst pipeline software before (gold, version 1.10.2) and after (green, version 1.11.3) the recent (c. August 2023) update. Insets above the spectra show the same effect but in a 2D wavelength slice of the cube, centered on an emission line where galaxy signal was initially improperly clipped (red pixels). Both the 2D and 1D data show that the updated version 1.11.3 jwst outlier detection step no longer incorrectly flags and removes actual signal in IFS data, a welcome improvement.
These observations are associated with JWST program #1355. Support for JWST program #1355 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127. TAH is supported by an appointment to the NASA Postdoctoral Program (NPP) at NASA Goddard Space Flight Center, administered by Oak Ridge Associated Universities under contract with NASA. BW acknowledges support from NASA under award number 80GSFC21M0002. JEB & GMO acknowledge generous support from Texas A&M University and the George P. and Cynthia Woods Institute for Fundamental Physics and Astronomy.
8,477
2023-12-19T00:00:00.000
[ "Physics" ]
State-of-the-Art Technologies for Building-Integrated Photovoltaic Systems

Advances in building-integrated photovoltaic (BIPV) systems for residential and commercial purposes are set to minimize overall energy requirements and associated greenhouse gas emissions. The BIPV design considerations entail energy infrastructure, pertinent renewable energy sources, and energy efficiency provisions. In this work, the performance of roof/façade-based BIPV systems and the parameters affecting the cooling/heating loads of buildings are reviewed. Moreover, this work provides an overview of different categories of BIPV, presenting the recent developments and sufficient references, and supporting more successful implementations of BIPV for various zones of the globe. A number of available technologies guide the best selections and ease the configuration of the BIPV, avoiding difficulties, allowing flexibility of design in order to adapt to local environmental conditions, and addressing important considerations such as building codes, building structures and loads, architectural components, replacement and maintenance, energy resources, and all associated expenditure. The passive and active effects of both air-based and water-based BIPV systems have great effects on the cooling and heating loads and thermal comfort and, hence, on the electricity consumption.

Introduction

Terrestrial solar energy amounts to around 1.8 × 10¹¹ MW every year, which is around 10,000 times the rate of the global energy demand [1]. In developed countries, buildings consume about 30-40% of the yearly electrical energy produced, while in developing countries they consume approximately 15% to 25% [2]. Increasing consumption of electrical energy from primary energy resources increases CO₂ emissions, which has a great impact on the environment [3][4][5]. Intuitively, mitigating energy demands in buildings will substantially curtail the required supply of energy and, hence, minimize greenhouse gas (GHG) emissions [6][7][8]. Therefore, there is a genuine interest in net-zero energy buildings (NZEBs) from engineers and scientists, focusing on the tangible measure of energy conservation and on innovative solutions to optimize the incident solar irradiation captured by the PV cells to produce thermal and electrical energy [35][36][37].

Performance Assessment Tools of Photovoltaic (PV) Modules

The efficiency of converting solar energy into electrical energy depends essentially on the PV panels that produce the electrical power [38]. Currently, the power conversion efficiency of PV panels for commercial engineering applications is within the range of 12-23% (for multi- or mono-crystalline modules, respectively), measured at the Standard Test Conditions (STC) of 1.0 kW/m², ambient temperature of 25 °C, and wind speed of 1.5 m/s [39]. PV panels have limited overall efficiency and are very sensitive to weather conditions, such as dust, humidity, overcast conditions, and panel temperature increases. There are also the passive components necessary for energy transmission, such as regulators, batteries, cabling, and inversion of the supply to alternating current (AC) [59,60]. Some investigations were conducted to examine the influence of the inclination angle of PV panels and their orientation on the amount of solar irradiation gained and, hence, the output electrical power [61]. Cloud cover over a portion of a solar photovoltaic array will reduce the total energy output [62,63].
Prediction methods are required to evaluate these effects and their costs on other systems when connecting solar PV panels to buildings [64]. Furthermore, the electrical power supplied by the PV panels depends on their temperature and voltage; it is therefore essential to characterize the maximum power of the PV panel [65,66]. Under a constant solar intensity (G) of 1000 W/m², the current/voltage and power/voltage characteristics for different temperatures of a 10 W polycrystalline silicon photovoltaic module are illustrated in Figure 1. The appeal of PV modules for electrical power generation stems mainly from cost, the efficiency of energy conversion, viability, availability, and affordability. Various methods and techniques have been developed and suggested to maximize the electrical power output of PV modules, using concentrated photovoltaics [67,68], hybrid solar photovoltaic/thermal (PV/T) [69,70], nanofluids [71][72][73][74][75], evaporative cooling [76][77][78][79], phase change material (PCM) [80,81], thermoelectrics [82,83], etc.

A combination of photovoltaic/thermal (PV/T) can be integrated into façades, windows, rooftops, and shading devices to provide both electrical and thermal energy [84]. The integration of BIPV thermal systems with the façade is not straightforward; however, it positively affects the energy performance of both the building and the PV modules [85]. The performance of BIPV is usually closely associated with the purpose of the application, so façade-based BIPV systems are classified into four classes: air-based, water heating, space heating, and ventilation systems. The rising surface temperature of PV modules not only decreases the generated electrical energy, but also decreases the life of PV modules by creating hot spots and increased shunt resistance. Combining PV modules with thermal collectors can also help control the overheating of PV, as well as provide ventilation air pre-heating [86,87], underfloor heating [88], domestic hot water [38], passive and active cooling [89], and heat storage [90,91].

Additionally, to enhance PV module efficiency, installations can depart from fixed arrays toward either one-axis or two-axis tracking systems. PV modules with a fixed tilt (tilt angle chosen depending on geographic location) are less costly to install, operate, and maintain. In contrast, arrays with two-axis tracking systems are more expensive due to the added mechanisms for tracking the sun [92]. The most common economic assessment criterion, the levelized cost of electricity (LCOE), was calculated for several locations and different configurations of PV panels, i.e., fixed-axis systems, one-axis tracking systems, and two-axis tracking systems [93]. The results revealed that the differences in LCOE for fixed, one-axis, and two-axis tracking systems were up to 213%, 240%, and 262%, respectively. On the other hand, the operating and maintenance costs for the fixed-axis systems, one-axis tracking system, and two-axis tracking system were 25, 30, and 35 USD/kWp/year, respectively [94]. PV modules with one- and two-axis tracking systems intercept a greater amount of solar radiation, but this increase has to be justified in magnitude and in terms of the parasitic energy required to operate the tracking system [95]. Since the installed cost of electricity from PV modules with a one-axis tracking system was found to be the smallest among the three types of projects [96], one-axis tracking modules will be singled out for economic and environmental focus.
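As a simple quantitative illustration of why module temperature matters for the characteristics in Figure 1, a textbook first-order power model can be used; the sketch below is not drawn from the cited studies, and the temperature coefficient is a typical value for crystalline silicon.

```python
def pv_power(g, t_cell, p_stc=10.0, gamma=-0.004):
    """First-order estimate of PV module output power.

    g      : in-plane irradiance [W/m^2]
    t_cell : cell temperature [degC]
    p_stc  : rated power at Standard Test Conditions [W] (10 W, as for the module of Figure 1)
    gamma  : power temperature coefficient [1/degC], roughly -0.4%/degC for crystalline silicon
    """
    return p_stc * (g / 1000.0) * (1.0 + gamma * (t_cell - 25.0))

# Same irradiance, two cell temperatures:
# pv_power(1000.0, 25.0)  -> ~10.0 W at STC
# pv_power(1000.0, 60.0)  -> ~8.6 W for a hot, poorly ventilated module
```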
Considerable efforts have been reported consistently to track solar radiation and consequently to improve efficiency, making solar PV modules more attractive for energy conversion [97,98]. Several methods are available to assess the output power of solar PV installations. A maximum power point tracker (MPPT) for a PV system adapted for a fixed voltage was presented by Salameh et al. [99]. The results reveal that the proposed controller was valid for various loads, such as batteries or water pumps. The various applications of PV modules throughout the benchmark analysis are presented in Figure 2. For example, Wolf [100] introduced the fundamental concepts of the photovoltaic/thermal (PV/T) system over one year for a single-family residence.

Overview of BIPV Systems

The BIPV is an energy-producing system that combines solar PV panels, as part of façades, windows, or roof devices, with buildings. When active heat recovery is incorporated with the BIPV systems, either in a closed loop or in an open loop with forced circulation of working fluids, they are known as building-integrated photovoltaic-thermal (BIPVT) systems [101]. In cold weather, air-based BIPV thermal systems have the benefit of supplying space heating during the year due to low ambient air temperatures [91,102]. Designing and achieving the operation of zero energy buildings (ZEBs) can be done by incorporating BIPVT systems [103]. However, technical and economic requirements and aesthetic aspects should be maintained prior to integration into the building envelope, to fulfill the necessary functional requirements [104][105][106]. Initial, maintenance, and replacement costs, cost-efficiency, codes and standards, PV types, building load and location, and psychological and social factors are the main parameters that influence BIPV systems [107][108][109]. Additionally, the progress of BIPV systems is restricted by operational expertise, data collection and planning analysis, commissioning, national manufacturing, the potential of the national market, standardized technology, etc. [110,111].

The positive energy district (PED) is currently supposed to be an integral portion of the district or urban energy system through its positive influence [112]. Clear categories for different types of positive energy districts (PEDs) in the renewable energy market, for example in the EU, are presented by Lindholm et al. [13]. They offered a detailed analysis of fundamental factors in the PED planning process. The challenges of the PEDs are still open for discussion in order to eventually drive the development of major PED advances forward, as described by Hedman et al. [113].

Recently, semi-transparent thin-film solar photovoltaic modules (STPV) for BIPV in windows and façades have generated many studies due to their superior performance levels [114][115][116][117]. A number of experimental and simulation studies have considered the energy efficiency of semi-transparent thin-film PV (STPV). For the application of air conditioning systems, STPV windows/façades produce electrical energy and simultaneously reduce the cooling load of the air-conditioning system by mitigating solar heat gain [118][119][120].
Furthermore, BIPV based on STPV windows with convenient transmittance levels enables full utilization of daylighting [121][122][123]. Figure 3 summarizes publications available on the topic of BIPV systems within the past three decades (Web of Science database, 2021). It should be noted that these publications do not represent all available research papers, but they show the strong trend of increased investigation of BIPV systems.

Currently, there are different products of BIPV systems, depending on the application area, as presented in Table 1. In-roof mounting BIPV systems comprise different mounting systems integrating the framed or frameless PV module into the roof of the building [124,125]. These systems achieve the desired functions usually devoted to the building materials. However, with the utilization of regular PV modules, the aesthetic consideration is not important, and the integration is partial. On the contrary, full-roof solutions are more integrated, and aesthetics have an essential function. These proposed solutions of the BIPV element accomplish the functions of traditional roofing, and sometimes the element is designed as thermal insulation with different color choices. Therefore, this integration can be considered an optimal function. In addition, metal roofing refers to lightweight metal roofing with a supplementary layer of PV thin film, typically composed of copper indium gallium selenide (CIGS) photovoltaic cells [126,127]. Flexible lightweight modules, such as rolls and membranes, can be positioned on various surfaces by simply sticking them onto the surface without any mounting elements [128]. Therefore, these types of BIPV systems are convenient for both façade and roof applications. Moreover, they can be combined with traditional building components in the manufacturing stage to fulfill further functions. The non-ventilated or warm façade elements are elements of the BIPV systems constituting curtain walls in buildings. The rainscreen or cold façade elements are constitutive elements of the BIPV systems installed as a façade cladding [129]. Mostly, these types include a ventilation space between the elements of the BIPV systems and the second layer of the building façade. Finally, accessory types encompass shading devices, such as balustrades, balcony components, or louvers [130].

An experimental analysis of a façade-installed BIPVT operating under the real conditions of a mild Mediterranean climate was carried out by Bot et al. [131]. The weather parameters (ambient temperature, global radiation, diffuse radiation, and direct normal radiation) and indoor room temperatures were recorded during one year. Results indicated that the electrical efficiency was around 15.1% and that the system presented clear advantages to buildings, essentially due to the electrical power generated by the PV modules and also through its contribution to heating the zone of the building adjacent to the façade.
A dynamic simulation model based on TRNSYS was developed to assess different energy efficiency options, primary energy savings, payback period, and CO₂ emissions for different buildings in various districts, i.e., Naples in Italy and Fayoum in Egypt [132]. The model included the energy balance for transient operation, while incorporating the geometry of the building envelope. The suggested energy efficiency options included a building heating system using a hot water supply and air-to-air heat pump elements for cooling/heating of the enclosure, integrating solar thermal systems and photovoltaic modules. The simulation results provide important guidelines for accurately selecting the optimal hybrid system designs and configurations of the BIPV system, as well as proposing guidelines for legislators to assess de-carbonization goals under different scenarios. It is interesting to note that the model is sensitive to changes in ambient conditions and geographic location. For example, the payback period of the proposed energy system in Naples (Italy) was five years, whereas in Fayoum (Egypt) it was 23 years.

BIPV-Based Air Cycle

The building-integrated photovoltaic/thermal (BIPV/T) system absorbs solar irradiation incident upon a building envelope and converts a fraction of the solar energy into electrical and thermal energy [133,134]. The crystalline PV module typically converts almost 15-20% of the solar radiation energy into electrical energy; the rest is either reflected (5-10%) or converted into thermal energy (heat), which increases the surface temperature of the PV modules. For large-scale applications, the peak temperature of a PV module can easily reach 60 °C (higher on hot summer days) [135]. Air-cooling is a heat mitigation technique that has been comprehensively investigated as a cost-effective method. Incorporating an air gap between the PV panels and the building fabric (i.e., façade or tilted roof) is employed for forced air circulation to considerably cool the PV panels, and the resulting pre-heated air is a practical supply for the thermal requirements of buildings [118,136]. The PV modules in BIPV systems are commonly installed on a rooftop or façade, and the most practical technique to adjust the module temperature is forced convection using a fan. However, this requires additional capital investment and consumes more electrical energy [137]. Meanwhile, the electrical energy consumption of installed HVAC systems is reduced by nearly 20% with a solar ventilated façade [138]. Moreover, it is usually preferred to install air-based BIPV/T systems in an open-loop arrangement to utilize the heated air for space heating. In the air-based BIPV system, the airflow supplied for space heating is guided by a fan to avoid air trapping in the air gap cavity, thereby increasing the heat transfer rate to the building spaces. Studies of the influence of fans on system operation conclude that the BIPV system efficiency can be improved by almost 9% [139]. An air-gap BIPV/T with indoor air flow turns out to be the most effective configuration, decreasing the building heating load by 27%, and it is capable of avoiding the decrease in PV efficiency seen in BIPV systems with no ventilation [140].
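A common first-order way to estimate how hot an uncooled module runs, and hence how much an air gap or fan helps, is the nominal operating cell temperature (NOCT) relation. The sketch below uses this standard approximation with an assumed NOCT value and is not drawn from the cited BIPV studies.

```python
def cell_temperature(t_amb, g, noct=45.0):
    """NOCT-based estimate of PV cell temperature.

    t_amb : ambient air temperature [degC]
    g     : in-plane irradiance [W/m^2]
    noct  : nominal operating cell temperature [degC], typically ~42-48 for crystalline modules
    """
    return t_amb + (noct - 20.0) / 800.0 * g

# cell_temperature(30.0, 1000.0) -> ~61 degC, consistent with the ~60 degC peak quoted above;
# effective ventilation behind the module lowers this value and recovers electrical output.
```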
An air gap in BIPV systems allows natural circulation of air (no additional fan required); the air flow is driven by the density difference that generates buoyancy forces, namely the stack effect occurring in the BIPV/T air gap [141]. Theoretical and experimental studies incorporating buoyancy-driven air currents behind the PV modules have been carried out by various researchers [142], using various air flow configurations [143], via the integration of semi-transparent PV panels in a double-pass façade [144], by employing exergy analysis [145], by connecting PV/T air collectors in series [146], and by employing a wind-driven ventilator [147].

An air-to-water heat pump heating, ventilation and air conditioning (HVAC) system with a rooftop PV was simulated using TRNSYS software by Stamatellos et al. [148]. The results were utilized to evaluate the building energy performance, the inverter-driven heat pump, and the efficiency of the rooftop PV panels. Moreover, objective functions were introduced for optimizing the installed area of the PV panels and their tilt angle, based on the price of the alternative electricity and considering net metering policies. The annual results of the system's performance over a 20-year lifetime, with the optimum PV module tilt angle of 30 degrees, are presented in Figure 4. In addition, the results revealed that supporting net metering tariffs could be employed more effectively for new designs to further expand BIPV systems in existing and modern buildings.

Baljit et al. [149] present a comprehensive review of BIPV thermal systems and their applications covering the years 2006 to 2016. Roof- and wall-integrated BIPVT systems are water-based or air-based, which loop back into the building. Agathokleous et al. [150] presented double skin façades (DSF) and BIPV with a focus on airflow and other heat transfer characteristics.
According to the use of the thermal energy from the PV modules, BIPV thermal systems are classified into three categories: cooling of PV, water heating, and air heating [151]. Debbarma et al. [101] reviewed BIPVT and BIPV technologies with a focus on aesthetics, cost, functions, and eventual applications. Modeling of thermal performance and the energy and exergy analysis of BIPVT were reviewed by the same group in the work of Debbarma et al. [152]. Moreover, the application of the Trombe wall in different buildings was studied by Hu et al. [153]. Further, Saretta et al. [154] and Riaz et al. [155] reported on BIPV façades prior to 2019 for the energy renovation of heritage buildings.

Shahsavar et al. numerically investigated the energy savings of a building using BIPV, in which the cooling potential of the exhaust ventilation air from the building is used for PV cooling and the heat eventually rejected from the PV modules is used to pre-heat ventilation air [156]. The results showed that exhaust air and ventilation air in heating, ventilating and A/C systems could be employed efficiently to cover part of the ventilation heating load.

An experimental investigation of the thermal characteristics of a two-inlet, air-based open loop BIPV system under a full-scale solar simulator was introduced [157], as presented in Figure 5. Opaque and semi-transparent silicon mono-crystalline PV modules were used for the study. The results revealed that a two-inlet system with frameless PV panels enhances thermal efficiency by 5% compared with the traditional single-inlet configuration. The BIPV/T system with semi-transparent modules maintained 7.6% higher thermal efficiency compared with that of the opaque ones.

Figure 5. Real photos of a BIPV system using semi-transparent and black opaque PV panels. Reproduced with permission from ref. [157]. Copyright 2014 Elsevier.
Thermography of the PV surface temperature with airflow carrying different moisture contents was performed by Mirzaei et al. [158] to help understand transport mechanisms underneath and above the PV modules. To achieve this goal, a setup consisting of a BIPV model with cavity airflow ventilation, together with a solar simulator inserted in a wind tunnel, was developed, as shown in Figure 6. Moreover, particle image velocimetry (PIV) and infrared thermography were utilized to observe the operating surface temperature of the PV modules and the airflow underneath and above the panel. The results revealed that the proposed arrangement would reduce the risk to BIPV systems due to airflow and moisture entrance by lowering the areas of high pressure adjacent to the PV panels.

Figure 6. The proposed BIPV system with cavity ventilation air. Reproduced with permission from ref. [158]. Copyright 2014 Elsevier.

Noor Muhammad et al. investigated the performance of a hybrid system of solar PV/T and a heat pump that re-uses the thermal energy generated by the solar PV modules: the modules are cooled by the heat pump and the output is then pumped into the ward for cooling purposes, as shown in Figure 7 [159]. The proposed hybrid system was preferred for cooling naturally ventilated spaces in buildings, particularly health facilities in hospitals under tropical weather conditions.

Peng et al. numerically evaluated the energy-saving potential and the annual energy performance of a ventilated photovoltaic double-skin façade (PV-DSF), as illustrated in Figure 8, for the summer season in the Mediterranean climate region [116]. Moreover, a sensitivity analysis based on the numerical model, investigating the width of the air gap and the ventilation modes, was carried out to optimize the unit's design and assess the operational strategy of the PV-DSF. It was found that the proposed PV-DSF was able to generate yearly electricity of about 65 kWh per unit area. In addition, the annual energy output could be doubled by utilizing cadmium telluride (CdTe) semi-transparent PV panels.
The efficiency enhancement of semi-transparent PV panels would further improve the energy-saving potential of a PV-DSF and hence make this sustainable technology more appropriate. In addition, the PV-DSF with glazing systems increased net electricity by about 50%. Krauter et al. [160] proposed different designs of BIPV system for the configuration of PV and façades, i.e., PV panels with ventilation and PV without ventilation, as illustrated in Figure 9. The results show that the module temperature was reduced by 18 K, while the electrical efficiency was improved by 8% at a wind velocity of 2 m/s.

BIPV-Based Water Cycle

Utilizing water as a working fluid for carrying the generated heat from the photovoltaic modules in BIPV systems is, intuitively, more efficient than air due to its superior thermophysical properties [156]. A water-based PV cooling system is employed to operate at higher heat removal rates than air-cooled systems of PV panels and for critical applications. When the temperature of the circulating water is below the temperature of the PV cell, the gain in the energy conversion efficiency of the PV is significant [161,162]. Moreover, the circulating water can absorb undesirable heat from the modules as the temperature of the water rises gradually. This water can then be employed as a heat resource for thermal applications associated with buildings and solar-assisted heating/cooling energy technologies [163,164]. Using hybrid solar energy systems in different building configurations has the promising benefit of increasing the energy output generated per unit area of installed collector [16]. Apart from the useful solar energy conversion of the BIPV systems, the reduced radiation transmittance into the building will lower the space cooling/heating energy requirements, as well as save on building materials through suitable design and appropriate construction integration [104,165].

Centralized PV modules and hot water collectors can be wall-mounted in vertical façades to serve as water pre-heating systems (see Figure 10), as investigated experimentally under different operating modes [166].
Results reveal that a naturally circulating water system performs better than forced circulation in a hybrid solar collector pre-heating system. The reported thermal efficiency was 38.9% at zero reduced temperature, the corresponding electrical efficiency was 8.56% during the summer season, and it increased when shading effects were avoided. A favorable thermal insulation performance was established on the BIPV façade in both winter and summer.

An experimental implementation to study the performance of a PV/T solar heat pump A/C system was reported [167]. The effects of the evaporator and condenser pressure variations on the coefficient of performance (COP) of the heat pump A/C system, the water temperature, the overall system heat capacity, the PV module surface temperature, and its efficiency were reported. The results indicated that the efficiency of the PV/T solar heat pump A/C system reached 10.4%, an improvement of 23.8% over the base system. Moreover, the COP of the heat pump A/C system attained 2.88 and the water temperature in the water heater increased to 42 °C.

The economic, energy, and exergy analysis of integrated polyethylene heat exchangers underneath PV panels was investigated experimentally and numerically [168]. A thermal model was adapted to study the thermal performance of the proposed roof unit, as illustrated in Figure 11.

Figure 11. The integration of a polyethylene heat exchanger underneath PV panels. Reproduced with permission from ref. [168]. Copyright 2014 Elsevier.

Passive and Active Effects of BIPV Systems

The overall attainable energy savings of air-based and water-based BIPV systems depend essentially on several factors, such as the optimal selection of operating parameters and accurate design parameters of the equipment, the insulation type, and the building construction materials. The schedules of cooling and heating, the control system, and the predominant climatic conditions considerably affect the energy efficiency [169].
With regard to electricity consumption, it is important to minimize the peak demand in order to avoid oversizing the power plant equipment. Proper system sizing should rely upon a detailed system simulation including the building envelope and the detailed selection and control of the HVAC equipment. Matching the production and consumption of electricity can be advantageous to the stability of the national grid and to the electricity cost.
Active Effects
Here, the reported results regarding the active effects of BIPV systems are divided into subsections, namely the electricity production of the PV; the preparation of domestic hot water (DHW) for water-based BIPV systems; and the production of hot air for air-based BIPV systems. The correct choice of cooling system for the PV panels reduces the operating temperature of the PV cells and consequently enhances both their useful life and their electrical efficiency [172]. The operating temperature of the working fluid passing through an air cycle-based BIPV system is lower than that maintained by a water cycle-based BIPV system. In turn, a lower PV panel surface temperature, and thus an enhanced electricity yield, is achieved by adopting the air-cooled BIPV/T over the water-cooled one [173]. The hot water supplied by a water cycle-based BIPV system is exploited for preparing domestic hot water (DHW) [171,174]. The hot air generated by an air cycle-based BIPV system is employed for space heating applications, as pre-heated air for a drier machine, etc. [175,176]. A considerable reduction in space heating demand is obtained for air-based BIPV systems as a result of the free space heating, saving yearly heating energy for the HVAC system [177].
Economic Considerations of BIPV
The building sector accounts for around 40% of global energy consumption. Thus, a paradigm shift towards BIPV systems can enhance interest in replacing traditional construction materials for the envelope of a new building with provisions for incorporating PV modules. This opens the door for the introduction of innovative construction materials that combine high thermal insulation with electrical energy production. BIPV systems encompass both energy-related aspects, considering electricity production, and building-related aspects linked to the functions of the construction material. The most important challenges for the widespread adoption of BIPV systems are the national economic policy based on public acceptance, feed-in-tariff implementation, and national economic support, as well as technical aspects such as energy conversion losses and the considerations of different architectural elements [178]. Therefore, the economics of BIPV can be segmented into categories based on the aforementioned criteria.
The economy of BIPV systems depends on the category of building on which BIPV products are employed, such as residential or non-residential buildings, considering the architectural characteristics, occupancy profile, techno-economic feasibility, national economic support, annual costs of maintenance and replacement, available codes for BIPV systems, building loads and structure, etc. The fundamental drawback associated with BIPV systems is the high cost per kilowatt-hour of electrical energy generated [158]. BIPV thus remains an expensive technology at the current time; only 0.05% of primary consumed energy is currently generated by this promising technology [179]. For example, Yang [180] classified barriers to BIPV proliferation in terms of application, lifecycle, design, construction, and installation through to commissioning and maintenance. A number of BIPV systems have been simulated to study the energy requirements of residential and non-residential buildings. Appropriate design of BIPV systems contributes to keeping the growth in energy expenditure needed to ensure thermal comfort in buildings at an acceptable level, even when installation is imperfect. The performance of a three-zone residential building with an air/water heat pump heating, ventilation and air conditioning (HVAC) system combined with a rooftop PV installation was simulated in the TRNSYS environment [148]. The results were employed to evaluate the high performance of the building's energy system using an inverter-driven heat pump with a scroll compressor as well as high-efficiency PV panels. Furthermore, objective functions were presented to optimize the area of the installed PV panels and their tilt angles, considering alternative electricity pricing and subsidies. In addition, financing in which a zero-interest government bank loan serves as the down payment was effective in growing rooftop PV installations on existing and new houses. The ventilated façade system with phase change material (PCM) provides superior thermal comfort conditions compared with a traditional building and also reduces the use of electrical power, commonly for HVAC systems [181]. The thermal performance of a ventilated double-skin façade integrated with phase change material for building heating in winter months was assessed by Gracia et al. [182]. Three operating modes were investigated: a free-floating or natural ventilation mode, a controlled-temperature mode using electricity to drive the heat pump, and a demand-profile mode operated with both natural ventilation and mechanical ventilation. The results indicated that the proposed system maintained the indoor temperature in mild and severe winter seasons, and the free-floating mode achieved better thermal comfort than the other modes. Investigating the interaction effects between the building envelope and the HVAC systems is fundamental to achieving NZEBs [183]. The prerequisite information on technology, performance, and the parameters needed to operate and design integrated HVAC and domestic hot water (DHW) production systems for buildings was presented in [184]. That review covered integrated energy systems providing space heating and DHW production; cooling and heating for HVAC and DHW production; electricity and DHW production; and mechanical ventilation and DHW production. Furthermore, it was found that an economic evaluation of BIPV systems is usually required for renovations and retrofits to evaluate the system payback time.
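Since payback time and feed-in tariffs recur throughout this economic discussion, the sketch below shows one common way to compute a simple payback, a discounted payback, and a levelized cost of electricity (LCOE) for a BIPV installation. All prices, yields, and rates are hypothetical and are not taken from the studies cited above.

```python
# Hypothetical BIPV cash-flow illustration (not data from the cited studies).
def simple_payback(capex, annual_saving):
    return capex / annual_saving

def discounted_payback(capex, annual_saving, rate, max_years=50):
    """First year in which cumulative discounted savings cover the investment."""
    cumulative = 0.0
    for year in range(1, max_years + 1):
        cumulative += annual_saving / (1.0 + rate) ** year
        if cumulative >= capex:
            return year
    return None  # never pays back within max_years

def lcoe(capex, annual_om, annual_kwh, rate, lifetime):
    """Levelized cost of electricity: discounted costs / discounted energy."""
    costs = capex + sum(annual_om / (1.0 + rate) ** y for y in range(1, lifetime + 1))
    energy = sum(annual_kwh / (1.0 + rate) ** y for y in range(1, lifetime + 1))
    return costs / energy

capex, tariff, yield_kwh = 12_000.0, 0.18, 5_500.0  # EUR, EUR/kWh, kWh/year (assumed)
saving = tariff * yield_kwh
print("simple payback    :", round(simple_payback(capex, saving), 1), "years")
print("discounted payback:", discounted_payback(capex, saving, rate=0.04), "years")
print("LCOE              :", round(lcoe(capex, 120.0, yield_kwh, 0.04, 25), 3), "EUR/kWh")
```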
For new, under-construction buildings, the economic assessment is uncomplicated because it is not influenced by the cost of the materials that will be substituted by the BIPV components. Nevertheless, in new or old buildings, the total cost, payback time, and feed-in-tariff need to be considered to optimize the system economics for the different applications of BIPV systems. Concerning the purpose served by BIPV systems in achieving the national sustainable development goals (SDGs), life-cycle analysis is also critical [185]. The market segmentation encompassing the technical and economic aspects of BIPVT systems is presented in Figure 12, where the cladding typology is the combination of a thermal property (insulating or not) and a visual property (transparent or opaque) of a BIPV technology. The market segmentation reveals multiple possibilities for BIPV systems and shows that this is a promising market, considering the different technical and economic aspects of BIPV systems and allowing a clear pairing of building and cladding typologies. This enables progress in the planning and installation of BIPV systems.
Discussions
Heat generation issues associated with the design and implementation of BIPV systems are fundamental problems that require systematic mitigation and development.
Overheating of PV modules and transferring this heat into the building can inadvertently increase the cooling load and the power consumption of the A/C equipment. The target of reducing average energy consumption in buildings to levels that can be handled by passive technologies will still have to contend with a total primary energy demand of 60 kWh/m² [9]. Significant factors, such as the climatic conditions (high sunshine and mild climate, humidity, and temperature), the available urban areas, and economic government policy, affect the performance and installation of BIPV systems. Building materials and house constructions differ from one country to another, but photovoltaic technology is almost uniform internationally. The majority of BIPV systems rely on fixed-tilt PV modules, which have a low energy-conversion efficiency, so more effort is required to examine one- and two-axis tracking systems as far as the building design and architectural considerations allow. The architectural envelopes of buildings integrated with PV module systems offset an installation cost that is equivalent to that of conventional components [165]. Moreover, deploying tracking systems for the PV modules will definitely complicate the architectural elements of integrated BIPV systems. These trackers should be lightweight, with an acceptable level of electrical efficiency, for the building's advancement [186]. Predicting the annual energy performance of a BIPV system requires sophisticated telemetry equipment and analysis software to provide the performance and meteorological data necessary for validating the computer simulation models, which could then be employed. On the other hand, sensitivity analyses based on the simulation models are required to evaluate the overall annual energy and thermal performance of BIPV systems for cooling/heating in summer/winter in different climate zones, considering different guiding factors, such as geography, climatology, space utilization, various costs, system design, operating parameters, etc. Architects and designers may overlook maintenance provisions/costs and the possibility of replacing one or more PV modules during the BIPV system design phase.
There is no easy external access for repairing defective PV modules, which is a serious obstacle. Replacement of PV modules is complex and tedious because of the large amount of wiring that interconnects the modules. Building electrification by installing rooftop/façade PV panels on existing or new buildings, financed through mortgage extensions or private investment, is a valid solution that can strongly support the energy sector. Thus, the existing legislation needs to be improved with national economic support measures such as feed-in tariffs. Similarly, measurement standards for the payback period should be established for BIPV systems to evaluate the profitability of the proposed hybrid systems more accurately. Moreover, more methodologies are required to estimate the economic viability of financing investment scenarios in which such projects are covered by bank loans. Moreover, BIPV systems have significant potential to exploit the low natural convective heat transfer at the rear side of the BIPV so that it acts as a thermal insulating layer, rather than employing additional costly insulation material [125]. BIPV systems proposed for renewable districts, based on a domestic hot water network supplied by PV modules, represent a convenient and promising energy measure to decarbonize residential territories. Air-to-air heat pumps integrated with PV panels can considerably reduce the primary heating/cooling energy consumption of buildings. Therefore, the BIPV system is a promising technology that may be quickly and easily adopted by districts where heat pump performance is essentially high. However, further work may require an electrical energy storage system to better address the mismatch between power production and consumption. Regarding commissioning, there is usually no monitoring system for the system performance after installation of the BIPV system to ensure that its functions remain attainable in the long term. Periodic comprehensive monitoring procedures are a prerequisite for inspecting any malfunctions and deciding on system changes to ensure maximum performance over a long time. The integration of PV modules with building envelopes introduces remarkable changes in the thermophysical characteristics, with changes in the building's cooling and heating demands and, hence, indoor thermal comfort. The passive and active effects of air-based and water-based BIPV systems include the indoor and wall surface temperatures and the cooling and heating loads and, hence, thermal comfort. These effects place an essential load on the electrical energy consumption of buildings and, consequently, have a great influence on the design of the national electricity grid. The HVAC-integrated BIPV system provides the energy required for the cooling and heating of buildings, domestic hot water, air handling for ventilation, etc., in one package, set up with air- or water-side configurations, and is a sophisticated technology for new ZEB construction. The overall energy characteristics and the economic feasibility of these systems should be studied to evaluate the system benefits and drawbacks. Optimizing the potential benefits requires a whole-building approach from the design concept stage, in which the technical, environmental, financial, and energy characteristics that influence one another are holistically reconciled to achieve optimal conditions.
Moreover, the electricity consumption of the HVAC system, versus that of traditional buildings, is influenced by the passive effects.
Conclusions and Outlooks
The essential factors that greatly impact the development of the BIPV market are the price and the performance of the PV-related components. This is accompanied by growing interest in recent technologies based on sustainable energy and the increasing aesthetic possibilities of BIPV systems. Moreover, design aspects, such as standardization and coding of BIPV systems, simplify the installation process and reduce the possible risks. In addition, it is crucial to have sufficient awareness of and knowledge regarding BIPV systems in the construction sector. Increasing the availability of electrical energy through distributed PV systems is a fundamental motivating factor. The most interesting result for the BIPV system was predicting the impact of variable electricity pricing through net metering. Such national policies could drive significant growth in renewable electricity, particularly in countries that are currently phasing out traditional power plants based on depleted fossil fuels. Thus, reasonable designs for BIPV systems, based on net metering tariffs, are recommended to support the further expansion of these systems in new (and existing) buildings. Furthermore, more studies on roof/façade PV installations should be carried out with different economic methodologies, combining PV systems with other building components, including battery storage, which may impact the economic profile of the overall energy system of the building. Exhaust-ventilated air from the building was used as a cooling fluid to reduce the operating temperature of the PV panel and, hence, it increased the electricity production of the BIPV systems, with optimum values depending on the surface area and the mass flow rate of air. Without shading on the BIPV systems in the early morning and in the late afternoon during the sunny hours of the day, particularly during the winter solstice, better electrical performance could have been achieved. Additionally, modifications that improve the absorption of long-wave radiation into the BIPV system are worthwhile, since increasing the transmittance/absorptance product results in a considerable increase in thermal efficiency across the design and operating parameters without significantly reducing the electrical efficiency of the integrated system. Utilizing the environmentally friendly, sustainable technology of BIPV systems is in line with governments' sustainable development goals to reduce carbon emissions and minimize greenhouse gas (GHG) emissions. In addition, renewable energy measures based on building-integrated photovoltaic panels combined with various applications, such as solar collectors and air-to-air heat pumps, maintain promising primary energy savings, depending on the solar radiation, the national cost of electricity, etc. The overall energy savings for both air-based and water-based BIPV systems depend fundamentally on a number of parameters, such as the optimal operating parameters and correct design parameters of the equipment, the insulation type, and the building construction materials. Data Availability Statement: Data sharing is not applicable to this article. Conflicts of Interest: The authors declare no conflict of interest.
12,230.4
2021-08-27T00:00:00.000
[ "Engineering" ]
Mechanistic Role for a Novel Glucocorticoid-KLF11 (TIEG2) Protein Pathway in Stress-induced Monoamine Oxidase A Expression* Background: The function of KLF11/TIEG2 under stressful conditions is undefined. Results: KLF11 increases brain MAO expression through its promoter and a chromatin partner, which can be enhanced by stress. Conclusion: This is the first elucidation of mechanisms underlying stress-induced KLF11-MAO up-regulation. Significance: This novel KLF11-MAO pathway may play an important role in stress-related brain disorders. Chronic stress is a risk factor for psychiatric illnesses, including depressive disorders, and is characterized by increased blood glucocorticoids and brain monoamine oxidase A (MAO A, which degrades monoamine neurotransmitters). This study elucidates the relationship between stress-induced MAO A and the transcription factor Kruppel-like factor 11 (KLF11, also called TIEG2, a member of the Sp/KLF family), which inhibits cell growth. We report that 1) a glucocorticoid (dexamethasone) increases KLF11 mRNA and protein levels in cultured neuronal cells; 2) overexpressing KLF11 increases levels of MAO A mRNA and enzymatic activity, which is further enhanced by glucocorticoids; in contrast, siRNA-mediated KLF11 knockdown reduces glucocorticoid-induced MAO A expression in cultured neurons; 3) induction of KLF11 and translocation of KLF11 from the cytoplasm to the nucleus are key regulatory mechanisms leading to increased MAO A catalytic activity and mRNA levels because of direct activation of the MAO A promoter via Sp/KLF-binding sites; 4) KLF11 knockout mice show reduced MAO A mRNA and catalytic activity in the brain cortex compared with wild-type mice; and 5) exposure to chronic social defeat stress induces blood glucocorticoids and activates the KLF11 pathway in the rat brain, which results in increased MAO A mRNA and enzymatic activity. Thus, this study reveals for the first time that KLF11 is an MAO A regulator and is produced in response to neuronal stress, which transcriptionally activates MAO A. The novel glucocorticoid-KLF11-MAO A pathway may play a crucial role in modulating distinct pathophysiological steps in stress-related disorders. Chronic stress increases the levels of blood glucocorticoids (1-3) and brain monoamine oxidase (MAO) A (4, 5). Monoamine oxidases (MAO) are catalytic enzymes prevalent in the brain and peripheral tissue (6). Present as two structurally distinct isoforms, MAO A and MAO B both degrade monoamine neurotransmitters and produce hydrogen peroxide as a toxic byproduct (7). MAO A primarily deaminates serotonin, norepinephrine, and dopamine and, therefore, is implicated in several psychiatric diseases. Elevated brain MAO A levels are present in living patients and post-mortem human subjects with depressive disorders, including major depressive disorder (8-12), and in mothers during postpartum depression (13), supporting the theory that an imbalance in biogenic amines can influence the affective state (14,15). Such correlations suggest MAO A to be a biochemical link for stress and depression, which often exist comorbidly in clinical studies. Increasing efforts have been made to understand the mechanism of stress-induced MAO A expression (4, 5, 16-18). Dexamethasone, a synthetic glucocorticoid that induces cellular stress, has been shown to increase MAO A mRNA, protein, and enzymatic activity in human skeletal muscle cells (19) and to increase MAO A mRNA levels in the dorsal raphe nucleus in rats (20).
In addition, dexamethasone treatment of neuroblastoma cells demonstrated that activation and subsequent binding of glucocorticoid receptors (GR) to the glucocorticoid response DNA element (GRE) sequence on the MAO A promoter activated MAO A transcription (21). In addition to three consensus GREs in the distal promoter region, the core promoter of MAO A contains four Sp/KLF-binding sites that have been implicated in the transcriptional activation of MAO A (21). Kruppel-like factor 11 (KLF11), a member of the Sp/KLF family of transcription factors, has been shown to up-regulate MAO B transcription via similar Sp/KLF-binding sites (22). However, possible involvement of KLF11 in MAO A transcriptional activation has not been studied. KLF11 is also referred to as transforming growth factor β-inducible early gene 2 (TIEG2). It is currently better known because of its role in metabolism. Indeed, publications from our group demonstrate that KLF11 (TIEG2) regulates genes involved in the metabolism of lipids, glucose, prostaglandins, neurotransmitters, and alcohol/drugs (23-25). We have also published that KLF11 is linked to three types of diabetes (a metabolic disease) (26-28). Thus, a more in-depth investigation of how stress hormones, such as glucocorticoids, affect the function of KLF11, as shown here, is of substantial biomedical relevance. Here we report that KLF11 acts as a direct transcriptional activator in stress-induced MAO A expression. First, cultured neuronal cells treated with dexamethasone show increased KLF11 expression and nuclear translocation and increased MAO A expression and activity. Second, the MAO A expression and activity after dexamethasone administration is influenced by KLF11, as shown by both the overexpression and knockdown of KLF11. Third, dexamethasone-induced KLF11 directly binds and activates transcription of MAO A via the p300 pathway, independently of the GR-induced transcriptional activation. Fourth, KLF11 knockout mice show decreased MAO A expression in brain. Fifth, KLF11 and MAO A levels are increased in the brains of rats exposed to chronic social defeat (CSD) stress, a well established animal model for depression (29-31). Together, these findings suggest that KLF11 is a direct activator for MAO A and that the novel stress-induced KLF11-MAO A pathway may modulate behavioral traits associated with stress or depressive disorders. EXPERIMENTAL PROCEDURES Cell Line and Rat Primary Cortical Neurons-The SH-SY5Y human neuroblastoma cell line was purchased from the ATCC. Cells were cultured in DMEM supplemented with 10% FBS. Rat brain cortex (E18, 19) neuronal cells were purchased from Lonza (R-Cx-500) and cultured in poly-D-lysine-coated plates with PNGM™ BulletKit™ medium following the instructions of the manufacturer. After ~4 days in culture, cells were treated with or without 100 nM dexamethasone for 48-72 h, as described previously (21,32,33). Western Blot Analysis-Whole-cell protein extracts were obtained in radioimmune precipitation assay buffer (Sigma), and the lysates were centrifuged at 4°C (11,500 rpm) for 10 min to pellet and eliminate the cell debris. Brain tissue from each animal was homogenized in a 0.5-ml solution containing 1 mM EDTA, 10 mM Tris-HCl, and fresh protease inhibitor (Sigma) and centrifuged at 4°C (3,500 rpm) for 10 min. Supernatants were stored at −80°C. Forty micrograms of total protein were separated in 10.5% SDS-PAGE gel. After transfer, the membranes were probed with primary and secondary antibodies.
All band intensities were normalized to those of β-actin using Quantity One analysis software (8,23). Immunofluorescence-Cells were plated on a four-well chamber slide (Nalge) with or without 100 nM dexamethasone treatment for 48 h. Cells were then fixed with 4% paraformaldehyde and incubated with mouse anti-KLF11 (1:1000) antibody overnight at 4°C. After incubation with secondary antibody, stained slides were mounted with Vectashield in the presence of 4′,6-diamidino-2-phenylindole (DAPI nuclear stain, Vector Lab, Inc.) (17). Generation of KLF11 and pcDNA Stably Transfected Cell Lines (Overexpressing KLF11)-Cells were plated at a density of 10⁶ cells per 10-cm dish. The next day, the KLF11 expression vector or pcDNA 3.1 control vector was transfected into cells with SuperFect transfection reagent (Qiagen, Inc). After 24 h, cells were treated with the antibiotic Geneticin (G418, 600 μg/ml). Resistant clones were isolated into separate dishes after 6 days and cultured under continuous G418 selection (35). siRNA-mediated Klf11 Gene Knockdown-Control siRNA or KLF11 siRNA for human KLF11 (Santa Cruz) or for rat KLF11 (Qiagen) was transfected into SH-SY5Y cells or the rat primary cortical neurons with the siPORT amine transfection agent (Ambion) following the protocol of the manufacturer. Briefly, siPORT amine transfection agent and Opti-MEM I medium were mixed with each siRNA for 10 min, giving a final siRNA concentration of 20 nM per 10-cm dish. The siRNA·siPORT amine transfection agent complex was directly added to the cell culture medium (34). MAO A Catalytic Activity Assay-SH-SY5Y cells, the rat primary cortical neurons, and animal brain tissue were homogenized in assay buffer (50 mM sodium phosphate buffer). Approximately 100 μg of total protein were incubated with 100 μM [14C]5-hydroxytryptamine in assay buffer at 37°C for 20 min. The reaction was terminated by addition of 100 μl of 6 N HCl. Reaction products were then extracted with benzene/ethyl acetate, and their radioactivity was determined by liquid scintillation spectroscopy (8,21). Transient Transfection and Luciferase Activity Assay-KLF11 interaction with the MAO A promoter was determined by transient transfection and luciferase assays using the following luciferase reporter gene constructs: 1) a segment of the MAO A core promoter containing only Sp/KLF-binding sites, 2) the MAO A promoter containing only three GREs (deleted Sp/KLF-binding sites), or 3) the MAO A 2 kb promoter (containing both Sp/KLF-binding sites and GREs) (22). These MAO A promoter-luciferase reporter gene constructs were cotransfected with the KLF11 vector (or the pcDNA3.1 vector) (22) and the p300-expression vector (28) in SH-SY5Y cells using Superfect transfection reagent (Qiagen) following the protocol of the manufacturer (21,22). ChIP Assays-SH-SY5Y cells (150-mm dish) were crosslinked by 1% formaldehyde for 10 min, scraped into PBS containing protease inhibitors (Sigma), and centrifuged. Cells were then resuspended in 350 μl of lysis buffer (1% SDS, 10 mM EDTA, and 50 mM Tris-HCl (pH 8.1)). Nuclear protein-DNA complexes were immunoprecipitated by incubation with anti-KLF11 (with BioMag goat anti-mouse) antibody overnight at 4°C. DNA was recovered from the beads with elution buffer (1% SDS and 0.1 M NaHCO3) and analyzed by real-time PCR as described previously (22).
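The ChIP signal at the MAO A promoter can be quantified from real-time PCR cycle thresholds with the widely used percent-of-input method; a minimal sketch is given below with entirely hypothetical Ct values. This is only an illustration of the general calculation, not the specific analysis pipeline of ref. 22.

```python
# Percent-of-input quantification of ChIP-qPCR (hypothetical Ct values).
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Signal in the IP expressed as a percentage of input chromatin.
    The input Ct is first adjusted for the fraction of chromatin saved as input."""
    ct_input_adjusted = ct_input - math.log2(1.0 / input_fraction)  # 1% input -> subtract ~6.64 cycles
    return 100.0 * 2.0 ** (ct_input_adjusted - ct_ip)

# Hypothetical example: KLF11 ChIP at the MAO A core promoter, with and without dexamethasone.
for condition, ct_ip, ct_input in [("control", 29.8, 24.0), ("dexamethasone", 27.6, 23.9)]:
    print(f"{condition:13s} percent input = {percent_input(ct_ip, ct_input):.3f}%")
```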
Klf11 −/− Mice-The Klf11 homozygous knockout model was generated at the University of Washington, Seattle following standard homologous recombination techniques to inactivate the endogenous Klf11 gene in embryonic stem cells, generating chimeras, and isolating colony founders carrying the knockout gene (28,36). This animal was originally generated in a mixed background and subsequently transferred to the Mayo Animal Facilities where it was crossed back into a pure C57BL/6 background for more than 20 generations to produce the inbred strain used in this study. In all of the experiments, male Klf11 −/− animals were compared with age-matched male Klf11 +/+ littermates. CSD Experiments-Twenty adult, male Wistar rats (weighing 180-220 g) were provided with free access to Purina rat chow and water. Rats were housed in individual cages in a temperature- and humidity-controlled room with a reversed 12:12-h light/dark cycle (1). The chronic social stress induced in the experimental group was based on the original resident-intruder paradigm (37,38). Each rat (total = 10) was transferred from its home environment to a cage holding one of ten male, Long Evans "resident" rats (weighing 580-620 g, Harlan). Within 3 min, the intruder was attacked and defeated by the resident, as indicated by freezing behavior and submissive posture. The intruder and resident were then immediately separated, and the intruder was kept in a small plastic wire mesh compartment within the cage of the resident for 1 h. Subsequently, the intruder was released from the small cage back into its home habitat. This procedure was repeated once daily for 4 days during week 1, for 2 days for weeks 2 and 3, and for 4 days during week 4. The rats (control and stressed groups) were sacrificed by decapitation on day 29 (1). The control group (10 rats) was maintained and handled in the same manner as the socially defeated rats except for the stress exposure (CSD). All animal protocols were performed according to the Ethical Guidelines on Animal Experimentation and approved by the Institutional Animal Care and Use Committees. Radioimmunoassay of Corticosterone Levels-Blood from decapitated rats was collected for determination of individual corticosterone levels by the Radioimmunoassay Laboratory at the University of Mississippi Medical Center using the Coat-A-Count rat corticosterone kit (Diagnostic Products Corp., Los Angeles, CA) (1). HPLC Measures for Serotonin Levels-Rat brain tissue (100 mg) was homogenized on ice with a 0.5 ml solution containing 0.1 M perchloric acid, 0.3 mM EDTA, and 0.01 mM ascorbic acid. The resulting homogenate was stored on dry ice for ~10 min, thawed, and centrifuged at 4°C for 5 min (12,500 rpm). The supernatant was then used to determine serotonin levels. Serotonin was separated by HPLC analysis using an HPLC system with a Waters 600 pump/controller, 717 autosampler, Waters 2465 electrochemical detector, and a PerkinElmer Life Sciences C18 column with a guard column and a combination of isocratic elution. The mobile phase contained 50 mM anhydrous citric acid, 50 mM sodium acetate, 50 mM sodium hydroxide, EDTA, 1 mM sodium octyl sulfate, 7% methanol and 6% acetonitrile (pH 4.5). Ten microliters of each supernatant was injected per sample. The resulting pellets were dissolved in 0.1 M sodium hydroxide, and protein content was determined using the bicinchoninic acid kit (Pierce Biotechnology, Inc).
Waters Millennium32 software was used for programming the pump flow rate, controlling the autosampler, and for acquisition and analysis of data. Statistical Analysis-Statistical significance was evaluated using Student's t test for two-group comparisons or analysis of variance followed by Bonferroni-adjusted tests when comparing more than two groups. A value of p < 0.05 was considered significant. Glucocorticoid Exposure Activates the Expression and Nuclear Translocation of KLF11 (TIEG2) in Human Neuronal Cells-Both transcriptional activation and nuclear translocation are hallmarks of the activation of the KLF11 gene. Thus, we initially used the well characterized brain-derived SH-SY5Y cell line as a culture system for performing mechanistic studies to determine whether KLF11 is a glucocorticoid-inducible gene (Fig. 1). SH-SY5Y cells were treated with 100 nM dexamethasone (a synthetic glucocorticoid) for 48 h and KLF11 mRNA levels were determined by quantitative real-time RT-PCR. Approximately a 2.3-fold increase in KLF11 expression (p < 0.01, Fig. 1A) was observed following dexamethasone administration. As detected by Western blot analysis, KLF11 protein levels were increased similarly, 1.8-fold, in whole cell lysates (p < 0.05, Fig. 1B, lane 2 versus lane 1) upon dexamethasone treatment. Moreover, dexamethasone increased KLF11 protein levels 3.3-fold (p < 0.01, Fig. 1B, lane 4 versus lane 3) in the nuclear fraction, indicating that KLF11 translocated into the nucleus to mediate dexamethasone-induced regulation of its target genes, such as MAO A. We have documented previously that MAO A expression and activity were increased similarly by dexamethasone (21). Next, we visualized the translocation of KLF11 into the nucleus (Fig. 1C) after treatment with dexamethasone using immunofluorescence. The relative distribution of nuclear and cytosolic KLF11 was semiquantified using image analysis software (SlideBook) and expressed as the ratio of nucleus:cytosol. As shown in Fig. 1C, the ratio of nucleus:cytosol for the untreated control group was 1:3.75 and for the dexamethasone-treated group the ratio was 1:1.18 (p < 0.01), indicating that dexamethasone significantly increases KLF11 nuclear translocation at least 3-fold, which is consistent with the Western blot analysis (Fig. 1B). These results are the first to identify inducible cytoplasmic-to-nuclear shuttling of KLF11 in response to corticosteroid treatment, demonstrating the dynamic regulation of KLF11 localization in the nucleus that may underlie the stress-induced regulation of target genes by this important Kruppel-like transcription factor. KLF11 Mediates Basal and Dexamethasone-induced MAO A mRNA Levels and Enzymatic Activity-Although KLF11 has been shown to up-regulate MAO B transcription (22), its action on MAO A remains unknown. Addressing this question is of significant medical relevance because both MAO isoforms participate in diseases and are targets of psychotropic therapy. Therefore, we quantified the effects of KLF11 on MAO A expression at both the mRNA and protein levels in SH-SY5Y cells stably transfected with KLF11 versus control vector (pcDNA), along with control-siRNA- and KLF11-siRNA-transfected cells (Fig. 2). The overexpression of KLF11 (Fig. 2Aa) (35) and the siRNA-mediated KLF11 knockdown (Fig. 2Ba) (34) were confirmed by Western blot analysis.
We find that MAO A mRNA levels in KLF11-transfected samples were increased significantly, ~2-fold, compared with pcDNA-transfected controls. Upon dexamethasone treatment, KLF11-overexpressing cells showed twice the level of MAO A catalytic activity compared with control cells (p < 0.02, Fig. 2Ac, lane 4 versus lane 3). The dexamethasone-induced increase in MAO A catalytic activity was amplified significantly, 2-fold, compared with KLF11-overexpressing cells without dexamethasone treatment (p < 0.05, Fig. 2Ac, lane 4 versus lane 2). To better define the role of KLF11 in basal and dexamethasone-induced MAO A regulation, cells were treated with KLF11-siRNA to deplete endogenous KLF11 and compared with cells treated with control siRNA (Fig. 2B). Without dexamethasone treatment, MAO A mRNA levels were unchanged in KLF11 siRNA-transfected cells compared with control cells (Fig. 2Bb, lane 2 versus lane 1). However, dexamethasone-induced MAO A mRNA levels in KLF11 siRNA-transfected cells were decreased by ~30% compared with control siRNA-treated cells (p < 0.05, Fig. 2Bb, lane 4 versus lane 3). Furthermore, MAO A enzymatic activity was not altered significantly with or without dexamethasone exposure in KLF11 siRNA-transfected cells compared with control siRNA-transfected cells (Fig. 2Bc, lane 4 versus lane 3 and lane 2 versus lane 1). Thus, these combined experiments, using a well characterized cell model for studying stress responses in SH-SY5Y cells, demonstrate that KLF11 mediates corticoid-induced up-regulation of MAO A at the mRNA, protein, and enzymatic activity levels. Subsequently, we examined whether this pathway is operational in primary neuronal cell culture. We find that the glucocorticoid-KLF11-MAO A pathway is activated in rat primary cortical neurons in a similar fashion to that observed in cell line models (Fig. 3). KLF11 mRNA levels were increased ~2.6-fold (p < 0.01, Fig. 3Aa) following dexamethasone administration. KLF11 protein levels doubled in the cell lysate from rat primary cortical neurons upon this treatment (p < 0.02, Fig. 3Ab, lane 2 versus lane 1), and the nuclear KLF11 level was increased more than 3-fold (p < 0.01, lane 4 versus lane 3). Also, the catalytic activity of MAO A was significantly increased with the treatment (p < 0.01, Fig. 3Ac, lane 2 versus lane 1). On the other hand, KLF11 siRNA treatment of the rat primary cortical neurons depleted endogenous KLF11 (Fig. 3Ba). Without dexamethasone treatment, MAO A mRNA levels were decreased by 33% in KLF11 siRNA-transfected cells compared with control cells (Fig. 3Bb, lane 2 versus lane 1, p < 0.05). Dexamethasone-induced MAO A mRNA levels in KLF11 siRNA-transfected cells were decreased by ~47% compared with control siRNA-treated cells (p < 0.05, Fig. 2Bb, lane 4 versus lane 3, p < 0.01). In addition, MAO A enzymatic activity was reduced by ~27% (Fig. 3Bc, lane 2 versus lane 1, p < 0.08) without dexamethasone treatment and even more reduced (by ~36%, lane 4 versus lane 3, p < 0.05) in KLF11 knockdown cells exposed to dexamethasone compared with control siRNA-transfected cells. These results suggest that dexamethasone has more effect on rat primary cortical neurons than on the SH-SY5Y cell line. Our data also demonstrate that KLF11 participates in the dexamethasone-induced transcriptional activation of MAO A.
Identification of a p300-KLF11 Pathway That Activates MAO A Transcription at the Promoter Level-To determine whether the KLF11-induced increase in MAO A mRNA is the result of promoter activation, we assessed the activity of luciferase reporter constructs driven by a MAO A promoter fragment ligated upstream of the luciferase reporter gene vector (Fig. 4). Cells were transiently cotransfected with the MAO A-luciferase reporter construct and KLF11 or control vector (pcDNA3.1). Dexamethasone exposure increased luciferase activity, which is indicative of MAO A promoter activity, 1.8-fold compared with untreated control cells (p < 0.05, Fig. 4A, lane 2 versus lane 1). KLF11 overexpression doubled luciferase activity/MAO A promoter activity (p < 0.05, Fig. 4A, lane 3 versus lane 1). Activation of the MAO A promoter was increased ~3.4-fold in KLF11 cotransfected cells treated with dexamethasone compared with untreated control cells (p < 0.01, Fig. 4A, lane 4 versus lane 1). MAO A promoter activity was also increased ~1.8-fold following dexamethasone treatment of KLF11 cotransfected cells compared with transfected cells without dexamethasone treatment (p < 0.02, Fig. 4A, lane 4 versus lane 3). This study was complemented by a ChIP assay to define whether the MAO A promoter is a direct target of KLF11 in vivo. Indeed, our results demonstrated that dexamethasone significantly increased the recruitment of KLF11 to the MAO A core promoter, which contains Sp/KLF binding sites (Fig. 4B), suggesting that activation of the MAO A promoter occurs, at least in part, via KLF11 recruitment. However, how KLF11 couples to distinct chromatin remodeling machines to regulate this gene remains unknown. Histone acetyl transferases (HATs), such as p300, have been shown to couple to KLF11 and other KLF transcription factors to regulate transcription in different cell systems (26,36,39). However, not much is known about the effect of these HATs in the central nervous system. The potential regulation of MAO A by p300 is of significant clinical relevance because patients who present with mutations in this gene (Rubinstein-Taybi syndrome) are affected by severe mental illnesses (40) and other central nervous system abnormalities (41). Therefore, we investigated the effect of p300 on the KLF11-MAO A pathway using transient cotransfection of the MAO A promoter-luciferase reporter with a p300 and KLF11 expression construct (Fig. 5). Activation of MAO A by KLF11 was greatly increased when cotransfected with p300 (Fig. 5, lane 4 versus lane 2, p < 0.01), revealing that this HAT augments KLF11-mediated transcriptional activation of MAO A. Thus, these results, combined with the rest of our cell biological studies, outline a new pathway that is initiated by glucocorticoids and acts through KLF11 to bind and activate MAO A via a p300-dependent mechanism. Both Sp/KLF-binding Sites and GREs Independently Contribute to Dexamethasone-induced MAO A Promoter Activity-To elucidate additional mechanisms underlying glucocorticoid-induced MAO A expression, the effects of dexamethasone on the MAO A core promoter (which only contains putative Sp/KLF binding sites), the MAO A promoter with deleted Sp/KLF-binding sites (which only contains three GREs), and the MAO A 2-kb promoter (containing both Sp/KLF-binding sites and GREs) were assessed independently by transient transfection and luciferase assays (Fig. 6, A and B). As shown in Fig.
6B, dexamethasone exposure increases Sp/KLF11-mediated activation of MAO A core promoter activity by 50% (lane 3 versus lane 2, p < 0.05) and the activity of the MAO A promoter containing three GREs 2-fold (lane 7 versus lane 6; p < 0.05), respectively. However, transfection of KLF11 did not increase the GRE-containing MAO A promoter activity (Fig. 6B, lane 6 versus lane 5), further supporting that KLF11 regulates MAO A gene expression through Sp/KLF-binding sites. Transfection of the GR increased dexamethasone-induced GRE-containing promoter activity 2.0-fold (Fig. 6B, lane 8 versus lane 7, p < 0.02). Conversely, the addition of GR did not significantly increase MAO A core promoter activity (Fig. 6B, lane 4 versus lane 3). Furthermore, when cells were transfected with the MAO A 2-kb promoter (containing both Sp/KLF-binding sites and three GREs) and treated with dexamethasone, the 2-kb promoter activity was increased by 80% (Fig. 6B, lane 11). KLF11 Knockout Mice Have Reduced MAO A mRNA Levels and Enzymatic Activity-To verify the importance of KLF11 as a novel transcriptional activator for MAO A gene expression, we investigated whether this pathway is operational in mice carrying a germ line inactivation of KLF11 (Klf11 −/− mice). The levels of both MAO A mRNA and catalytic activity were determined in the brain cortex of Klf11 −/− mice and compared with Klf11 +/+ littermates (Fig. 7). The mRNA level of MAO A was reduced by 43% in KLF11 knockout mice compared with mice expressing wild-type KLF11 (p < 0.05, Fig. 7A). The enzymatic activity of MAO A was decreased by 26% in the brain cortex of knockout mice compared with the wild type (p < 0.07, Fig. 7B). Together, these data provide in vivo evidence for the role of KLF11 as an upstream transcriptional activator of MAO A gene expression in the central nervous system. These findings are also congruent with the studies in rat primary cortical neurons that were depleted of endogenous KLF11 by siRNA, which resulted in reduced MAO A mRNA levels (Fig. 3Bb, lane 2 versus lane 1, p < 0.05) and MAO A enzymatic activity levels (Fig. 3Bc, lane 2 versus lane 1, p < 0.08) without dexamethasone treatment.
Activation of the Glucocorticoid-KLF11-MAO A Pathway during CSD Stress-Previous studies have amply demonstrated that many of the effects underlying CSD stress proceed primarily via corticosteroids (glucocorticoids in rats), making this model an optimal tool to validate whether the new KLF11-MAO A pathway is operational under conditions that simulate the psychosocial stress commonly observed in humans with depression and other stress-related mood disorders. For this purpose, rats were subjected to CSD stress, and MAO A RNA and protein levels were measured. To confirm that the CSD stress was conducted effectively, the levels of blood corticosterone (glucocorticoids) in these rats were determined. As shown in Fig. 8, serum corticosterone levels in rats treated with CSD stress were elevated ~2.8-fold compared with untreated rats (p < 0.002, Fig. 8A). Because MAO A oxidizes serotonin, brain levels of serotonin were also determined. As expected, serotonin levels were decreased significantly in both the cortex (by 29%, p < 0.001) and thalamus (by 45%, p < 0.03) (Fig. 8B) of CSD-treated rats compared with respective controls. Together, these results demonstrate that the biochemical and physiological parameters of CSD stress are recapitulated in our animal model system. Thus, using this validated model, we measured mRNA and activity levels of MAO A from the brain cortex and thalamus of control and CSD rats (Fig. 9). Our results show that MAO A mRNA expression and catalytic activity were increased significantly in the prefrontal cortex of CSD rats compared with controls, ~3.2-fold (p < 0.01, Fig. 9Aa) and ~1.5-fold (p < 0.02, Fig. 9Ab), respectively. Likewise, MAO A mRNA and catalytic activity were increased 3-fold (p < 0.02, Fig. 9Ba) and ~2-fold (p < 0.05, Fig. 8Bb), respectively, in the rat brain thalamus exposed to CSD stress. Therefore, chronic stress induced by CSD increases MAO A gene expression and enzymatic activity in the rat brain cortex and thalamus. Expression of the Transcription Factor, KLF11 (TIEG2), Is Increased Significantly in the Rat Brain Cortex and Thalamus by CSD Stress-Because changes in MAO expression rely on the activity of certain transcription factors, we reasoned that a key regulatory protein may be altered under stressful conditions. To investigate whether the increase of MAO A by CSD was due to changes in KLF11 levels, we determined mRNA and protein levels of KLF11 by quantitative real-time RT-PCR and Western blot analysis, respectively. As shown in Fig. 10, KLF11 mRNA and protein levels were elevated in the cortex of rats exposed to CSD stress compared with controls, ~3.5-fold (p < 0.005, Fig. 10Aa) and ~1.7-fold (p < 0.03, Fig. 10Ab), respectively. As expected, KLF11 mRNA and protein levels were also increased in the brain thalamus, ~3-fold (p < 0.01, Fig. 10Ba) and ~2-fold (p < 0.04, Fig. 10Bb), respectively, compared with unexposed control rats.
FIGURE 8. Effects of chronic social stress on rat blood corticosterone (glucocorticoids) and brain serotonin levels after a 28-day exposure to CSD stress (an animal model for depression). A, blood corticosterone levels were determined by radioimmunoassay. B, serotonin levels in the rat cortex or thalamus were measured by HPLC. Data represent the mean ± S.E. of 10 rats (n = 10) in each group.
FIGURE 9. Effects of chronic social stress on MAO A mRNA and catalytic (enzymatic) activity in the brain tissue of rats exposed to CSD stress (an animal model for depression) compared with unexposed control rats. Stressed rats were exposed to CSD for 28 days. MAO A levels from the brain cortex (A) or brain thalamus (B) were determined by quantitative real-time RT-PCR (for MAO A mRNA levels) (a) and by enzymatic activity assay (for MAO A catalytic activity levels) (b), respectively. Data represent the mean ± S.E. of 10 rats (n = 10) in each group.
FIGURE 10. Effects of chronic social stress on KLF11 (TIEG2) levels in the brain tissue of rats after a 28-day exposure to CSD stress (an animal model for depression). KLF11 levels were determined from the brain cortex (A) or brain thalamus (B). a, KLF11 mRNA levels were quantified by real-time RT-PCR. b, quantitative Western blot analysis of KLF11. Each KLF11 band was evaluated by its relative intensity and normalized to the density of β-actin. Representative Western blot analyses from three untreated controls and three stressed rats are shown in the bottom panels.
Thus, these results, which are congruent with our in vitro investigation using isolated neuronal cell lines and primary culture, help to establish that the activation of the glucocorticoid-KLF11-MAO A pathway is operational in the brain cortex and thalamus and is activated under conditions that model the chronic social stress observed in humans. DISCUSSION Abnormalities of MAO A levels and activity have been associated with a number of psychiatric disorders (8-13, 42, 43). It is critical to explore the molecular basis for regulation of MAO A expression and enzymatic function. Through data from relevant cell culture systems under cellular stress, a KLF11-deficient mouse model, and a rat model for depression (generated by CSD stress), this study reports a novel pathway of stress-induced, KLF11-p300-mediated activation of MAO A expression. We first found that MAO A and KLF11 expression levels showed a positive correlation. Dexamethasone exposure further augments MAO A activity, which correlates with increased KLF11 expression. ChIP assay results suggest that this up-regulation occurs by the preferential binding of KLF11 to Sp/KLF cis-regulatory sites within the MAO A core promoter, triggering p300-mediated chromatin remodeling and promoter activation. Our data suggest that both KLF11 and GR contribute to glucocorticoid-induced activation of MAO A through Sp/KLF-binding sites and GREs, respectively. These independent but redundant MAO A transcriptional activation pathways, by KLF11 and GR, may ensure faster and/or greater activation (GR-mediated) in the presence of elevated glucocorticoid levels and may also ensure the long-lasting maintenance of MAO A expression (KLF11-mediated) once glucocorticoid levels have decreased during exposure to chronic social stress. Further investigation is needed to fully understand the dynamics of this feed-forward loop. In rats exposed to CSD stress, this study documents the significant increase in serum corticosterone (glucocorticoid) levels along with increased MAO A and KLF11 expression, supporting the results from the in vitro experiments. Considering the fact that KLF11 regulates a large number of target genes involved in different processes, such as metabolism, cell cycle, and apoptosis, in addition to MAO A as shown here, this finding implies that stress and stress hormones can alter the expression pattern of a gene network, thereby causing fundamental changes in the cells. Our model is further reinforced by the observation that dexamethasone-induced nuclear translocation of KLF11 facilitates activation of its target genes. Notably, the translocation of KLF11 into the nucleus parallels the nuclear translocation of the multifunctional protein GAPDH, which has been implicated in cellular apoptosis (44,45). KLF11 has been shown to associate intranuclearly with GAPDH and to promote neuronal cell degeneration and death via the GAPDH-KLF11-MAO B cascade (23,34). This interaction may also occur in the cytosol, similar to the complex of the ubiquitin ligase Siah1 and GAPDH (45,46). Like Siah1, KLF11 contains a putative translocation signal that may facilitate GAPDH nuclear translocation. KLF11 knockout mice (Klf11 −/−) were further used to investigate MAO A mRNA and catalytic activity, which supports the substantial role of KLF11 in the up-regulation of MAO A. These findings are congruent with the decrease in MAO A expression following KLF11 knockdown in primary cortical neurons compared with controls (Fig. 3).
Interestingly, the differences in MAO A expression were readily amplified after dexamethasone exposure. Thus, the effects of CSD stress in KLF11 knockout mice would need to be investigated in the future. It is expected that KLF11 knockout mice exposed to chronic social stress would exhibit reduced MAO A induction compared with wild-type mice. Additionally, the consistent results of increased KLF11 and elevated MAO A in the CSD chronic stress rat model and in dexamethasone-treated cells (14) validate the importance of stress-mediated KLF11 up-regulation of MAO A in chronic biological stress and depression, as they parallel other recent publications using the CSD stress paradigm in rats or mice as a model for depression (1, 29-31). In summary, our study, using relevant cellular and molecular approaches in models ranging from cells to laboratory rodents, suggests that KLF11 (TIEG2) is a novel transcriptional activator for MAO A gene expression. We have shown that stress increases KLF11 expression and induces its nuclear translocation both in vivo and in vitro, contributing to the increase in MAO A expression. Thus, future studies of the interactions of MAO A and its transcription factors could provide insight into novel psychotherapies. Finally, we have demonstrated that KLF11 couples to a HAT, p300, to regulate MAO A. This knowledge, combined with the existence of HAT inhibitors already in advanced phases of clinical trials, suggests that the activity of this transcriptional pathway could be manipulated pharmacologically to combat stress-related disorders such as depression.
7,599.2
2012-05-24T00:00:00.000
[ "Biology" ]
Differential cross section measurement of charged current $\nu_{e}$ interactions without final-state pions in MicroBooNE In this letter we present the first measurements of an exclusive electron neutrino cross section with the MicroBooNE experiment using data from the Booster Neutrino Beamline at Fermilab. These measurements are made for a selection of charged-current electron neutrinos without final-state pions. Differential cross sections are extracted in energy and angle with respect to the beam for the electron and the leading proton. The differential cross section as a function of proton energy is measured using events with protons both above and below the visibility threshold. This is done by including a separate selection of electron neutrino events without reconstructed proton candidates in addition to those with proton candidates. Results are compared to the predictions from several modern generators, and we find the data agrees well with these models. The data shows best agreement, as quantified by $p$-value, with the generators that predict a lower overall cross section, such as GENIE v3 and NuWro. Many fundamental questions in neutrino physics are still unresolved [1] and will be addressed by upcoming experiments that use liquid argon detectors [2,3]. These experiments will look for the appearance of electron neutrinos in a muon-neutrino beam to search for CP violation, measure the neutrino mass ordering, and explore longstanding anomalies. They will also address broader physics goals such as searching for dark matter particles in the beam, for which ν_e interactions are a dominant background, and characterizing supernova explosions, for which ν_e interactions are the primary signal. It is therefore vital to improve the modeling of ν_e interactions in argon to enable those searches with high sensitivity.
We present a measurement of ν e interactions in argon without final-state pions in MicroBooNE, both with and without visible protons. This analysis is the first ν e -argon cross section measurement in an exclusive final state and provides additional model discrimination relative to previous inclusive measurements. Also, as a first ν e cross section measurement on the Booster Neutrino Beamline (BNB) [4] at Fermilab, we provide a complementary result to previous measurements on argon [5][6][7] performed on ν e events from the Neutrinos at the Main Injector (NuMI) beamline [8]. This measurement also complements the differential electron neutrino cross-section measurement on a hydrocarbon target in a similar exclusive final state [9]. MicroBooNE has recently completed the first round of searches [10][11][12][13] for an excess of low-energy charged-current (CC) ν e interactions that could explain the MiniBooNE anomaly [14], and did not observe an excess. The search for ν e events without visible final-state pions [11], however, observed mild tension with the model used to predict the ν e interaction rate. Consistency was found at the 10%-20% level in terms of p-values after systematic uncertainties were constrained with a high-statistics measurement of CC ν µ interactions from the same beam. In this letter we build on this result to perform a cross section measurement under the assumption of no new physics, with the goal of providing input to neutrino interaction model development. The MicroBooNE detector [15] is a liquid argon time projection chamber (TPC). The TPC is a 2.56 m by 2.32 m by 10.36 m volume filled with 85 metric tons of liquid argon. As charged particles travel through the detector, they ionize the argon, and the ionization electrons drift in the applied electric field of 273 V/cm, to be detected by induction on two planes of wires and collected on the third plane of wires. Each plane of wires has a different orientation (vertical, +60°, −60°) so that when they are read out in time they result in three different "views" that are combined to derive 3D images of neutrino interactions. The detector also contains a light collection system, consisting of 32 photomultiplier tubes with fast timing resolution, that makes it possible to identify ionization electrons coincident with the neutrino beam arrival. The neutrinos measured in this analysis come from the BNB. They have an average energy of about 0.8 GeV and are primarily muon neutrinos, with only a 0.5% contribution from electron neutrinos [16]. This analysis measures this intrinsic electron neutrino component using data collected from 2016-2018, corresponding to 6.86 × 10^20 protons on target (POT).
The neutrino flux simulation used in this analysis was developed by the MiniBooNE collaboration [16] and is modified to use the position of MicroBooNE.Neutrino interactions in the detector argon are simulated using v3.0.6 G18 10a 02 11a of the GENIE event generator [17] with the MicroBooNE tune applied [18].There are several steps involved to simulate the detector response.Particles are propagated through the detector us-ing Geant4 [19], and then the charge and light produced by these particles is simulated with LArSoft [20].A simulation of the charge induced by drifting electrons is used for the wire and readout electronics response [21,22].Scintillation light propagation is modeled with a lookup table from a Geant4 simulation of photon propagation.Data-driven electric field maps are used to take into account distortions in the electric field from space charge [23,24].Ion recombination is simulated with a modified box model [25], and a time dependent simulation is used for the drift electron lifetime and wire response.Cosmic rays are a significant background in MicroBooNE and are incorporated in a data-driven way by overlaying a simulated neutrino interaction onto cosmic data collected during periods of time when the neutrino beam was off.This method also provides a datadriven incorporation of detector noise. Neutrino events are reconstructed in this analysis using the Pandora pattern-recognition toolkit [26].A set of algorithms first removes obvious cosmic-rays that cross the detector and then selects a neutrino candidate in time with the beam.Particles are reconstructed as showers or tracks within this neutrino candidate; typically electrons and photons are shower-like, while muons, charged pions, and protons are track-like.The Pandora event reconstruction has been used for many previously published results by the MicroBooNE collaboration [6,7,11,[27][28][29][30][31][32][33][34].Additional tools are used on top of the Pandora pattern recognition to enhance shower-track separation, perform particle identification to separate proton and muon tracks [35], and to perform electron-photon separation for showers [11].Track and shower energies are measured separately.Calorimetric energy reconstruction is performed for electromagnetic showers starting with the total energy clustered in the shower (E shr ).This is corrected to account for inefficiencies in charge collection using a simulation of electrons and with this correction the reconstructed energy is defined as E reco =E shr /0.83.For tracks, the energy is estimated based on particle range [36].Using simulation, the energy resolution is estimated to be 3% for protons if their kinetic energy (KE) is greater than 50 MeV, and 12% for electrons.The absolute resolution on cos θ is 0.01 for electrons and 0.03 for protons, where θ is the angle of the particle with respect to the beam. 
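As an illustration of the energy and angle reconstruction just described, the short sketch below applies the stated shower-energy correction (E_reco = E_shr/0.83) and the quoted resolutions. This is not MicroBooNE analysis code; the function names and the assumption of Gaussian smearing are ours.

import numpy as np

# Illustrative only: applies the calorimetric shower-energy correction and the
# quoted resolutions (3% for proton KE above 50 MeV, 12% for electrons, and
# cos(theta) resolutions of 0.01 / 0.03). Gaussian smearing is an assumption
# made for this sketch, not a statement about the detector response model.
rng = np.random.default_rng(0)

def corrected_shower_energy(e_shr):
    """Correct the clustered shower energy for charge-collection inefficiency."""
    return e_shr / 0.83

def smear_electron(e_true, cos_theta_true):
    e_reco = rng.normal(e_true, 0.12 * e_true)       # 12% energy resolution
    cth_reco = rng.normal(cos_theta_true, 0.01)      # cos(theta) resolution
    return e_reco, float(np.clip(cth_reco, -1.0, 1.0))

def smear_proton(ke_true, cos_theta_true):
    ke_reco = rng.normal(ke_true, 0.03 * ke_true)    # 3% for KE > 50 MeV
    cth_reco = rng.normal(cos_theta_true, 0.03)
    return ke_reco, float(np.clip(cth_reco, -1.0, 1.0))

print(corrected_shower_energy(0.5))   # clustered 0.5 GeV -> about 0.60 GeV
print(smear_electron(1.0, 0.9))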
We define true signal events as charged current ν e interactions that contain an outgoing electron with KE e > 30 MeV, and do not contain final-state charged pions with KE π ± > 40 MeV or any neutral pions.Signal events are further characterized in terms of the leading proton kinetic energy.Events with visible protons (KE p ≥ 50 MeV) are defined as 1eNp0π events.Events without visible protons (KE p < 50 MeV), or events for which no proton exits the nucleus, are defined as 1e0p0π events [37].These 1e0p0π events are required to pass additional phase space restrictions on the electron energy (E e > 0.5 GeV) and the angle between the neutrino beam and electron directions (cos θ e > 0.6). We perform a differential cross section measurement in four kinematic variables: the electron energy, the electron angle with respect to the beam, the leading proton energy, and the leading proton angle with respect to the beam.All of these variables except the leading proton energy are measured for only the 1eNp0π signal.The leading proton energy measurement includes both 1e0p0π and 1eNp0π events with smearing allowed between these two samples.This is possible because 1e0p0π signal events by definition have a leading proton kinetic energy below 50 MeV, and therefore these events can be included as a single bin in the proton kinetic energy measurement from 0 to 50 MeV.This is the first measurement to characterize proton production in neutrino interactions across the visibility threshold.Using the MicroBooNE tune of GENIE v3 [18], 1eNp0π events are predicted to be 60% quasi-elastic (QE) neutrino interactions, 30% meson exchange current (MEC), and with subdominant contributions from resonant (RES) (10%) and deep inelastic scattering (DIS) (1%) interactions; 1e0p0π events are mostly QE, with contributions from MEC and RES each at the 10%-15% level [37].The relative abundance of the different interaction types is not flat with respect to the measured variables which may provide some insight into the differences between models when data is compared to event generators. Events are selected with separate criteria based on the presence or absence of candidate protons.This selection strategy is the same as in Ref. [11], although a few of the requirements have been updated to optimize the selections for a cross section measurement.The main objective is to maintain sufficient ν e purity for a cross-section extraction while maximizing the ν e efficiency across the phase space of the measurement.For both the 1eNp0π and 1e0p0π selections the largest increase in efficiency comes from a relaxed cut on the boosted decision trees (BDTs) used in the analysis.These BDTs are the same, including the training, as those used in Ref. 
[11].Additionally, for the 1eNp0π selection, we relax the requirements on proton vs muon particle identification, on the shower dE/dx, and on the shower conversion distance.For the 1e0p0π selection we add requirements to increase the purity as needed for a cross-section measurement, particularly on the energy deposited per unit length (dE/dx) at the start of the electron candidate, and by restricting the phase space to the highest-purity region with cos θ reco e > 0.6 and E reco e > 0.51 GeV.We find that with these selections an appropriate visibility threshold for the leading proton kinetic energy is 50 MeV, which is approximately where the 1e0p0π selection efficiency turns off and the 1eNp0π efficiency turns on [37].Therefore, for 1eNp0π selected events we also require that the leading reconstructed proton has KE reco p > 50 MeV.With the data sample used in this analysis, a total of 145.5 events are predicted in the 1eNp0π selection, with a 1eNp0π purity of 69%.We expect to select about 100 (2) true 1eNp0π (1e0p0π) events with an efficiency of 17%.The largest backgrounds to the 1eNp0π selection are events with final state π 0 (ν e CC and ν µ CC or NC interactions, for a total 15.3 predicted events), other ν µ CC events (12.9 predicted events), and cosmic rays (6.8 predicted events).In the 1e0p0π selection about 10 (2) true 1e0p0π (1eNp0π) signal events are predicted with an efficiency of 12% and 1e0p0π purity of 65%; the total prediction is 17.6 events, and the largest background is from interactions with final state π 0 mesons (2.8 predicted events). The prediction on the total number of selected events is subject to uncertainties from several sources.Variations in the flux prediction may come from uncertainties on the hadron production cross section and on the modeling of the beamline [16,38].These are propagated to an uncertainty on the predicted event rate by reweighting the nominal simulation, and are found to be at the 6% level and mostly flat in terms of the variables used in the analysis.Uncertainties on the neutrino interaction model are included based on the nominal tuned GENIE v3 simulation using a reweighting method for most of the sources and with a limited set of specific variations [18].The impact of the interaction model uncertainties is only evaluated on the efficiency and smearing for true signal events; the number of signal events is not varied as it is the quantity of interest for the cross-section measurement.These combine to a 4% uncertainty on the total event prediction.Uncertainties on the propagation of final state particles the detector are assessed by varying re-interaction cross sections for charged pions and protons, again by reweighting [39].These uncertainties are generally at the 1% level, but grow to as high as 8% at high proton energies.Uncertainties on detector modeling are assessed using dedicated samples that are produced by varying parameters related to specific detector effects to amounts compatible with estimates from MicroBooNE data.These include space-charge effects, electron-ion recombination, light measurement, and wire response [40].Overall, these effects combine to approximately a 5% effect but can grow to 10%-20% at high electron and proton energies as well as for the 1e0p0π selection.Other subdominant uncertainties are due to the size of simulated samples, the POT measurement, and the estimate of the total number of argon nuclei in the detector. 
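The signal definition and visibility threshold described above reduce to a few kinematic requirements; the sketch below encodes them for a generator-level event record. It is only an illustration: the event-dictionary field names are assumed, and this is not the analysis selection code (which operates on reconstructed quantities and BDTs).

# Classify a generator-level nu_e CC event according to the signal definition
# quoted above. Field names (electron_ke, charged_pion_kes, n_pi0,
# leading_proton_ke, electron_e, cos_theta_e) are assumed for this sketch.
# Energies are in GeV.
def classify(ev):
    if ev["electron_ke"] <= 0.030:                        # KE_e > 30 MeV required
        return "not signal"
    if any(ke > 0.040 for ke in ev["charged_pion_kes"]):  # no pi+/- above 40 MeV
        return "not signal"
    if ev["n_pi0"] > 0:                                   # no neutral pions
        return "not signal"
    if ev["leading_proton_ke"] >= 0.050:                  # visible proton (>= 50 MeV)
        return "1eNp0pi"
    # 1e0p0pi carries the additional phase-space restrictions
    if ev["electron_e"] > 0.5 and ev["cos_theta_e"] > 0.6:
        return "1e0p0pi"
    return "not signal"

example = {"electron_ke": 0.8, "charged_pion_kes": [], "n_pi0": 0,
           "leading_proton_ke": 0.03, "electron_e": 0.81, "cos_theta_e": 0.9}
print(classify(example))   # -> "1e0p0pi"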
Covariance matrix formalism is used to include systematic uncertainties in the analysis, where the total systematic uncertainty covariance matrix C^{Syst} is defined as the sum of the covariance matrices of each uncertainty (flux, cross section, re-interaction, detector, Monte Carlo statistics, POT, and the number of nuclei), with individual entries written as

C^{Syst}_{ij} = (1/N) \sum_{k=1}^{N} (n^k_i - n^{CV}_i)(n^k_j - n^{CV}_j).   (1)

Here the covariance matrix is written in terms of bin indices i and j, and constructed as a sum over systematic variations k up to the total number of systematic variations N, with the central-value content of bin i defined as n^{CV}_i and the content of bin i in variation k defined as n^k_i. Finally, statistical uncertainties from the data measurement are included as

C = C^{Syst} + C^{DataStat},   (2)

where C^{DataStat} is diagonal with elements corresponding to the Poisson variance in each bin. Statistical uncertainties in the data are the leading source of uncertainty in this measurement. The observed distributions for the four variables considered in this analysis are shown in Fig. 1, where the data are overlaid on top of the nominal simulation based on the tuned version of GENIE v3 [18]. The data sample consists of 111 events selected with the 1eNp0π selection and an additional 14 events with the 1e0p0π selection. The simulation predicts more events than the data, especially at forward angles with respect to the beam and at intermediate energies. These are similar observations to those presented in Ref. [11]. To extract the cross section from the observed number of events we first define a response matrix, which maps the generated signal events in the true variable space to the observed signal events after selection in the reconstructed space. The off-diagonal elements of the response matrix define the amount of smearing between true and reconstructed bins. Both 1e0p0π and 1eNp0π events are included in the response matrix for the proton energy, with 1e0p0π events in a single bin and 1eNp0π events in the other bins. This means that smearing is included between these selections through the off-diagonal elements. The other variables use only 1eNp0π events. Due to the limited size of the selected data sample the bin width is typically larger than the resolution on the measured variables, so smearing is limited and most events fall into the correct bins: the diagonal fraction is >70% across all variables and >90% for the electron angle. An unfolded differential cross-section measurement in the true-space bin i for the variable x, measured in reconstructed-space bins j, is defined as

(dσ/dx)_i = \sum_j U_{ij} (n_j - b_j) / (N_{target} \, φ \, (Δx)_i),   (3)

where U is the unfolding matrix, n is the number of data events, b is the number of background events, N_{target} is the number of nucleons, φ is the integrated electron neutrino flux, and (Δx)_i is the measured bin width in the variable x. The unfolding matrix U is used in place of the inverse of the response matrix R^{-1} to avoid instabilities in the cross-section result from a direct matrix inversion. We extract the cross section using an unfolding procedure based on the D'Agostini method [41] with three iterations. This number of iterations is found to give results that are stable and with limited bin-to-bin fluctuations. In the cross-section extraction, we use a number of nucleons equal to 4.3912 × 10^31, and a POT-integrated BNB ν e flux of 2.73 × 10^9 cm^{-2}, which is taken to be the reference flux [42] of the measurement and used as a constant value. As described in a previous MicroBooNE publication [43], this method allows for a consistent treatment of flux uncertainties.
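A minimal numerical sketch of Eqs. (1)-(3) is given below: it builds a multisim-style covariance matrix and applies a given unfolding matrix to background-subtracted data. All input arrays are toy placeholders; only the normalization constants are the ones quoted in the text.

import numpy as np

# Toy illustration of Eqs. (1)-(3); the event counts and bin widths are invented.
n_cv = np.array([40., 35., 20., 10.])                 # central-value prediction per bin
universes = n_cv + np.random.default_rng(1).normal(0., 3., size=(500, 4))

# Eq. (1): systematic covariance from N variations around the central value
diff = universes - n_cv
c_syst = diff.T @ diff / len(universes)

# Eq. (2): add the diagonal Poisson (data statistical) covariance
n_data = np.array([38., 30., 22., 9.])
c_tot = c_syst + np.diag(n_data)

# Eq. (3): unfolded differential cross section
U = np.identity(4)                                    # placeholder unfolding matrix
b = np.array([5., 4., 3., 1.])                        # predicted background per bin
n_target = 4.3912e31                                  # nucleons (value from the text)
flux = 2.73e9                                         # POT-integrated nu_e flux, cm^-2
dx = np.array([0.2, 0.2, 0.3, 0.5])                   # bin widths in the variable x
dsigma_dx = (U @ (n_data - b)) / (n_target * flux * dx)
print(dsigma_dx)                                      # cm^2 per nucleon per unit of x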
FIG. 1. The observed number of events in data compared to the simulated prediction using the MicroBooNE tune of GENIE v3. The selection used is reported in each panel. The 1eNp0π selection is used for (a) the angle between the neutrino beam and electron direction, (b) the electron energy, and (c) the angle between the neutrino beam and leading proton direction. The 1eXp0π = (1e0p0π OR 1eNp0π) selection is used for (d), the leading proton kinetic energy, where events selected with the 1e0p0π selection populate the leftmost bin and events from the 1eNp0π selection populate the other bins.

The uncertainties on the total prediction (Eq. 2) are analytically propagated through the unfolding procedure to obtain a covariance matrix for the unfolded cross section [44]. The resulting cross sections are presented in Fig. 2, where they are compared to a number of modern generators: the MicroBooNE tune of GENIE v3.0.6 [18], GENIE v3.0.6 G18 10a 02 11a [17], GENIE v2.12.2 [45,46], NuWro 19.02.1 [47,48], and NEUT v5.4.0 [49,50]. These generators have different initial-state nuclear models (GENIE v2 uses a relativistic Fermi gas, while the others use a local Fermi gas), quasi-elastic models (GENIE v3 and NEUT use Valencia [51][52][53], GENIE v2 and NuWro use Llewellyn Smith [54]), and MEC models (GENIE v2 uses an empirical model, and the others the Valencia model). Details about the models used in these generators and a more complete description of their differences are found in other MicroBooNE publications [7,28,29] and a summary table presented in [55]. We assess the agreement with these generators by computing χ2 values and the p-values corresponding to the upper tail of the cumulative distribution for the χ2 per degree of freedom.

FIG. 2. Differential cross sections from unfolded data and comparisons with predictions from different generators. The signal definition is reported for each panel: 1eNp0π is used for (a) the angle between the neutrino beam and electron direction, (b) the electron energy, (c) the angle between the neutrino beam and the leading proton direction, and the right panel of (d) the leading proton kinetic energy. An additional phase space restriction is applied to the leftmost panel of (d). Compatibility is evaluated in terms of p-values, and reported in the legends.

While all generators are in reasonable agreement with the data, the level of agreement differs depending on the generator and the variable, as shown in Table I. The data indicate a preference for GENIE v3 and NuWro, both of which have a smaller overall electron neutrino prediction. Compared to the default GENIE v3, the MicroBooNE tune enhances the QE and MEC components and tends to over-predict, especially at intermediate energies. The lowest p-values are obtained for NEUT, which predicts the largest overall cross section, especially at forward proton angles, and for GENIE v2, which has the largest prediction for 1e0p0π events, partly due to its empirical MEC model [56] with no Pauli blocking. The discrepancy between data and generator models is largest in leading proton angle, with p-values that range from 1% to 7%, and is most pronounced in the forward direction. Future measurements with more statistics will be able to further explore these features. More information about these results is provided in supplementary material, including tabulated cross-section values, χ2 values, the background-subtracted observations, covariance matrices, and response matrices [37].
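The χ2 and p-value comparison described above can be reproduced schematically as follows; the covariance treatment is the standard one, and all numbers in the snippet are toy values rather than the published results.

import numpy as np
from scipy import stats

# Chi-square of data against a generator prediction with a full covariance
# matrix, and the corresponding upper-tail p-value. Toy numbers only.
data = np.array([1.2, 0.9, 0.5, 0.2])
pred = np.array([1.4, 1.0, 0.6, 0.2])
cov  = np.diag([0.04, 0.03, 0.02, 0.01])   # would be the full unfolded covariance

resid = data - pred
chi2  = resid @ np.linalg.solve(cov, resid)
ndf   = len(data)
p_val = stats.chi2.sf(chi2, ndf)
print(f"chi2/ndf = {chi2:.2f}/{ndf}, p = {p_val:.3f}")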
TABLE I. Agreement between unfolded data and generator neutrino interaction models, expressed as p-values.

In summary, this letter presents the first differential ν e -argon cross-section measurement without pions in the final state in electron angle and energy as well as leading proton angle and energy, where the proton energy is characterized both above and below the visibility threshold. The findings are typically in agreement with predictions from modern generators, except for tension in the leading proton angle.
4,818.8
2022-08-03T00:00:00.000
[ "Physics" ]
Effect of the chromo-electromagnetic field fluctuations on heavy quark propagation at the LHC energies We consider the effect of the chromo-electromagnetic field fluctuations in addition to the collisional as well as the radiative energy loss suffered by heavy quarks while propagating through the hot and densed deconfined medium of quarks and gluons created in relativistic heavy ion collisions. The chromo-electromagnetic field fluctuations play an important role as it leads to an energy gain of heavy quarks of all momentum, significantly effective at the lower momentum region. With this, we have computed, for the first time, the nuclear modification factor (R_{AA}) of heavy mesons, viz., D-mesons and B-mesons and compared with the those experimental measurements in Pb-Pb collisions at \sqrt{s_{NN}} = 2.76 TeV and \sqrt{s_{NN}} = 5.02 TeV by the CMS and ALICE experiments at the LHC. Our results are found to be in very good agreement with those available data measured by CMS and ALICE experiments. Introduction The main goal of relativistic heavy-ion collisions at Relativistic Heavy Ion Collider (RHIC) at BNL and Large Hadron Collider (LHC) at CERN is to produce a hot and dense deconfined state of QCD matter, so called quark-gluon plasma (QGP).It is believed that this new deconfined state of matter has been formed during relativistic heavy-ion collisions at RHIC[1] and LHC [2].One of the features of this deconfined plasma created in heavy ion collisions is the suppression of high energy hadrons compare to the case of p− p collisions, called jet quenching.This jet quenching is caused due to the energy loss of initial hard partons via collisional and radiative energy loss inside the deconfined medium.It was anticipated first by Bjorken [3] as a crucial probe of this deconfined medium. Heavy quarks are mostly produced in early stage of the heavy ion collisions from the initial fusion of partons They may also be produced in the QGP, if initial temperature of QGP is high enough than the mass of the heavy quarks. However, no heavy quarks are produced at the latter stage and none in the hadronic matters.Hence, the total number of heavy quarks becomes frozen at the very early stage in the history of the collisions, which makes them a good probe of the QGP.These heavy quarks immediately after their production will propagate through the dense medium and will start losing energy during their path of travel.This energy loss suffered by the heavy quarks are reflected in the transverse momentum spectra and nuclear modification factor of heavy mesons. Heavy quarks lose energy in two different fashions in the QGP: one is caused by elastic collisons with the light partons of the thermal background (QGP) and the other one is by radiating gluons, viz., bremsstrahlung process due to the deceleration of the charge particles. The energy loss in the QGP are usually obtained by treating the medium in an average manner and the fluctuations are ignored.Since QGP is a statistical ensemble of mobile coloured charge particles, which could also be characterised by omnipresent stochastic fluctuations.This microscopic fluctuations generally couple with the external perturbations and affect the response of the medium. 
The effect of electromagnetic field fluctuations during the passage of charged particles through a non-relativistic classical plasma has been calculated by several authors in the literature [29,30,31,32,33,34]. On the other hand, the effect of chromo-electromagnetic fluctuations in the QGP leads to an energy gain of heavy quarks at all momenta, most significantly at low momenta [25]. This is because the moving parton in the QGP encounters a statistical change in its energy due to the fluctuations of the chromo-electromagnetic fields as well as of the velocity of the particle under the influence of this field. The effect of such fluctuations was not considered in the earlier literature for studying hadron spectra in the context of heavy ion collisions. In this Letter, we investigate for the first time the effect of the chromo-electromagnetic field fluctuations on heavy quark propagation, and hence on the nuclear modification factor of heavy mesons, at LHC energies. The paper is organised as follows: In sec. 2 we briefly outline the basic setup, containing heavy quark production and fragmentation, the models for both collisional and radiative energy loss and for the energy gain due to field fluctuations, and the medium evolution and initial conditions. Here we consider the collisional energy loss of heavy quarks in the Peigne and Peshier (PP) formalism [18] and the radiative energy loss in the Abir, Jamil, Mustafa and Srivastava (AJMS) formalism [26], along with the energy gain due to the chromo-electric field fluctuations in the prescription of Chakraborty, Mustafa and Thoma (CMT) in Ref. [25]. In sec. 3 we discuss our results, and a conclusion is given in sec. 4. Heavy quark production and fragmentation The heavy quarks in p − p collisions are mainly produced by the fusion of gluons or light quarks [35]. Their production cross section has been obtained to next-to-leading order (NLO) accuracy with the CT10 parton distribution function [36] for p − p collisions. For heavy ion collisions, the shadowing effect is taken into account by using the NLO parameters of the EPS09 [37] nuclear parton distribution function. The same set of parameters as that of Nelson et al. [38] is used. Medium Evolution and initial condition As the heavy quarks lose energy during their passage through the QGP medium, it is important to determine the path length they traverse inside the medium. We consider a heavy quark which is produced at a point (r, φ) in a heavy ion collision and propagates at an angle φ with respect to r in the transverse plane. The path length L covered by the heavy quark inside the medium is then given by [40] L(r, φ) = sqrt(R^2 − r^2 sin^2 φ) − r cos φ, where R is the radius of the colliding nuclei. The average distance travelled by the heavy quark inside the plasma is obtained by averaging L(r, φ) over the transverse plane with the nuclear overlap function T_AA(r, b = 0) as weight. We estimate ⟨L⟩ = 6.14 fm for central P b − P b collisions. The effective path length of a heavy quark of transverse mass m_T and transverse momentum p_T in a QGP of lifetime τ_f is then obtained by limiting this geometric path length to the distance the quark can cover during the QGP lifetime. We consider the medium evolution as an isentropic cylindrical expansion as discussed in Ref. [41]. The equation of state is obtained from lattice QCD together with a hadronic resonance gas in order to calculate the temperature as a function of proper time [42]. We calculate the heavy quark energy loss over the QGP lifetime and finally average over the temperature evolution. The initial conditions used for the hydrodynamic medium evolution are similar to those of Ref. [28]. We consider the initial time τ_0 = 0.
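To make the geometric path-length estimate concrete, a small numerical sketch is given below. The nuclear radius value and the uniform-disc weighting are our own simplifications (the paper weights with the nuclear overlap function T_AA), so the resulting average is only indicative of the quoted ⟨L⟩ = 6.14 fm.

import numpy as np

R = 6.62  # fm, an assumed Pb-like radius for this sketch

def path_length(r, phi):
    """Distance from production point (r, phi) to the edge of a circle of radius R."""
    return np.sqrt(R**2 - (r * np.sin(phi))**2) - r * np.cos(phi)

# Average L over production points, here weighted by a uniform disc instead of
# the nuclear overlap T_AA used in the paper.
rng = np.random.default_rng(2)
r = R * np.sqrt(rng.uniform(size=100_000))   # uniform over the disc area
phi = rng.uniform(0.0, 2 * np.pi, size=100_000)
print(np.mean(path_length(r, phi)))          # O(R) fm; cf. the quoted <L> = 6.14 fm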
Collisional Energy Loss: Peigne and Peshier (PP) Formalism One of the important mechanism in which heavy quarks may lose energy inside the QGP is by collisions.The calculation of collisional energy loss per unit length dE/dx has been reported by in the past by several authors [4,5,43]. The most detailed calculation of dE/dx was made by Brateen and Thoma [5] which was based on their previous QED calculation of dE/dx for muon [44].This calculation of Brateen and Thoma for dE/dx is based on an assumption that the momentum exchange in elastic collisions, q ≪ E, which is not appropriate in the domain E ≫ M 2 /T , where M is the mass of the heavy quark and T is the temperature of the medium.The improved differential energy loss expression, valid for E ≫ M 2 /T , is given by Peigne and Pashier [18] as where, µ 2 g = 4πα s T 2 (1 + n f /6) is the square of Debye screening mass, n f = 3, is the number of active quark flavours and c(n f ) ≈ 0.146n f + 0.05 and α s = 0.3, is the strong constant. Radiative Energy Loss: Abir, Jamil, Mustafa and Srivastava (AJMS) Formalism The most important and dominant way of energy loss from a fast partons inside the QGP is due to gluon radiation.The first attempt to estimate the radiative energy loss was made in Ref. [6].Later many authors [8,9,17,21,45,46,47] also estimated the energy loss with various ingredients and kinematical conditions.In Refs.[8,9] the soft gluon emission was estimated which was found to suppress compared to the light quarks due to the mass effect, known as dead cone effect.The radiative energy loss induced by the medium due to the dead cone effect was limited only to the forward direction.In Ref. [12] by relaxing some of the constraints imposed in Refs.[8,9], e.g., the gluon emission angle and the scaled mass of the heavy quark with its energy, a generalised dead cone was obtained which led to a very compact expression for the gluon emission probability off a heavy quark.Based on the generalised dead cone approach and the gluon emission probability, AJMS [26] computed the heavy quark radiative energy loss1 as with where and ρ QGP is the density of the QGP medium which acts as a background containing the target partons.If ρ q and ρ g are the density of quarks and gluons respectively in the medium, then the ρ QGP is given by Energy gain by chromo-electromagnetic fields fluctuations: Chakraborty, Mustafa and Thoma (CMT) Formalism The energy loss calculations both collisional and radiative of heavy quarks in the QGP were obtained by treating the QGP medium without considering microscopic fluctuations.However, QGP being the statistical system, it is characterised by stochastic chromo-electromagnetic field fluctuations.Since the energy loss is of topical interest for the phenomenology of heavy quark jet quenching in hot and dense medium.A quantitatively estimate of the effect of the microscopic electromagnetic fluctuations on the propagation a heavy quark was done using semiclassical approximation2 by CMT in Ref. 
[25]. This was found to lead to an energy gain of the heavy quark, caused by the statistical change in the energy of the moving parton in the QGP due to the fluctuations of the chromo-electromagnetic fields as well as of the velocity of the particle under the influence of this field. The leading-log (LL) contribution of the energy gain was obtained in Ref. [25], with k_min = µ_g, the Debye mass, and k_max given by the smaller of the heavy quark energy E and a scale set by the typical momentum q ∼ T of the thermal partons. One can physically interpret this energy gain as that of a heavy quark which absorbs gluons during its propagation.

Figs. 1 and 2 show the energy loss of a charm and a bottom quark, respectively, inside the QGP medium as a function of momentum, obtained using PP [18], AJMS [26] and the fluctuation contribution [25]. Figs. 3 and 4 display the fractional energy loss from the collisional and radiative processes, and also the energy gain due to the field fluctuations, for charm and bottom quarks, respectively. It is clear that the energy gain for heavy quarks is relatively larger in the lower momentum region (4 − 40 GeV) than in the very high momentum (> 40 GeV) region. The reason is that the field fluctuations, and thus the energy gain, become substantial in the low velocity limit. Because of the field fluctuations, the total energy loss of a heavy quark is reduced up to moderately high values of momentum, beyond which their contribution gradually diminishes. Their relative importance is very relevant at LHC energies, as we will see below.

Results and Discussions In Fig. 5 and Fig. 6 we display the nuclear modification factor, R_AA, for D^0-mesons in (0 − 10)% and (0 − 100)% centrality, respectively, in P b − P b collisions, considering both collisional and radiative energy loss along with the energy gain due to the field fluctuations, and compare with ALICE [48] and CMS data [49]. We observe that neither the radiative energy loss (AJMS) nor the collisional energy loss (PP) alone can describe the measured suppression; Figs. 7 and 8 show the corresponding comparison with CMS data [51]. The radiative energy loss by itself produces a small suppression, but when the collisional one is added it generates more suppression than the measured CMS data. When the energy gain due to the field fluctuations is taken into account in addition to both the radiative and collisional losses, the suppression is found to be very close to the measured data within their uncertainties.

Figure 1: The energy loss of a charm quark inside the QGP medium as a function of its momentum, obtained using PP [18], AJMS [26] and fluctuations [25]. Figure 2: The energy loss of a bottom quark inside the QGP medium as a function of its momentum, obtained using PP [18], AJMS [26] and fluctuations [25]. Figure 3: Fractional energy loss of a charm quark inside the QGP due to fluctuations, collisions (PP) and radiation (AJMS) as a function of its momentum; the path length considered is L = 5 fm. Figure 5: Nuclear modification factor R_AA of D^0-mesons with collisional (PP) and radiative (AJMS) energy loss along with the effect of fluctuations as a function of transverse momentum p_T for (0 − 10)% centrality in P b − P b collisions at √s_NN = 2.76 TeV; the D^0-meson data are taken from the measurements of the ALICE [48] and CMS [49] experiments.

Conclusion The energy loss encountered by an energetic parton in a QGP medium reveals the dynamical properties of that medium through the jet quenching of high energy partons. This is usually reflected in the transverse momentum spectra and nuclear modification factors of mesons, which are measured in heavy ion experiments. For the phenomenology of heavy quark jet quenching, the field fluctuations in the QGP medium had not been considered in the literature before. In this article, for the first time, we have considered the propagation of high energy heavy quarks by including the energy gain due to field fluctuations along with the energy loss caused by collisions and gluon radiation inside the QGP medium. In particular, we have studied the effect of the chromo-electromagnetic field fluctuations, which lead to an energy gain of heavy quarks in addition to both the collisional and the radiative energy loss, on the nuclear modification factor for D and B mesons, and compared with the measurements of both the ALICE and CMS experiments in P b − P b collisions at √s_NN = 2.76 TeV and the CMS experiment at √s_NN = 5.02 TeV. The nuclear modification factors R_AA for D-mesons and B-mesons in P b − P b collisions at √s_NN = 2.76 TeV and √s_NN = 5.02 TeV are calculated by including both energy losses and the field fluctuation effect. We found that the chromo-electromagnetic field fluctuations play an important role in the propagation of heavy quark jets in a QGP vis-a-vis the nuclear modification factor of heavy flavoured hadrons. We note that the radiative energy loss alone can describe the D-meson suppression at higher transverse momentum; nevertheless, neither the collisional nor the radiative energy loss, nor both together, can explain the data satisfactorily. If the energy gain due to fluctuations is included along with the collisional and radiative energy loss, then the data can be explained very satisfactorily from low to moderately high values of transverse momentum, and the nuclear modification factors for both D and B mesons are found to agree quite well with the data in the entire p_T range measured by the CMS and ALICE experiments at LHC energies. The effect of field fluctuations in the hot and dense QGP medium is thus found to play an important role in the propagation of heavy quarks and in describing the experimental data for heavy quark quenching.
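For orientation, the Debye screening mass that sets the scale k_min in the fluctuation gain and also enters the collisional loss, µ_g² = 4π α_s T² (1 + n_f/6) with α_s = 0.3 and n_f = 3 as quoted above, can be evaluated in a few lines. The sketch below does only that; it is not a reimplementation of the PP, AJMS or CMT formulas.

import numpy as np

# Debye screening mass mu_g^2 = 4*pi*alpha_s*T^2*(1 + n_f/6), as quoted in the
# text, with alpha_s = 0.3 and n_f = 3. Natural units: T and mu_g in GeV.
ALPHA_S = 0.3
N_F = 3

def debye_mass(T):
    return np.sqrt(4.0 * np.pi * ALPHA_S * T**2 * (1.0 + N_F / 6.0))

for T in (0.2, 0.3, 0.4):                 # representative QGP temperatures in GeV
    print(f"T = {T:.1f} GeV -> mu_g = {debye_mass(T) * 1000:.0f} MeV")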
3,536.4
2017-11-16T00:00:00.000
[ "Physics" ]
Oscillating paramagnetic Meissner effect and Berezinskii-Kosterlitz-Thouless transition in underdoped Bi2Sr2CaCu2O8+δ ABSTRACT Superconducting phase transitions in two dimensions lie beyond the description of the Ginzburg-Landau symmetry-breaking paradigm for three-dimensional superconductors. They are Berezinskii-Kosterlitz-Thouless (BKT) transitions of paired-electron condensate driven by the unbinding of topological excitations, i.e. vortices. The recently discovered monolayers of layered high-transition-temperature ($T_{\rm C}$) cuprate superconductor Bi2Sr2CaCu2O8+δ (Bi2212) meant that this 2D superconductor promised to be ideal for the study of unconventional superconductivity. But inhomogeneity posed challenges for distinguishing BKT physics from charge correlations in this material. Here, we utilize the phase sensitivity of scanning superconducting quantum interference device microscopy susceptometry to image the local magnetic response of underdoped Bi2212 from the monolayer to the bulk throughout its phase transition. The monolayer segregates into domains with independent phases at elevated temperatures below $T_{\rm C}$. Within a single domain, we find that the susceptibility oscillates with flux between diamagnetism and paramagnetism in a Fraunhofer-like pattern up to $T_{\rm C}$. The finite modulation period, as well as the broadening of the peaks when approaching $T_{\rm C}$ from below, suggests well-defined vortices that are increasingly screened by the dissociation of vortex-antivortex plasma through a BKT transition. In the multilayers, the susceptibility oscillation differs in a small temperature regime below $T_{\rm C}$, consistent with a dimensional crossover led by interlayer coupling. Serving as strong evidence for BKT transition in the bulk, we observe a sharp jump in phase stiffness and paramagnetism at small fields just below $T_{\rm C}$.
These results unify the superconducting phase transitions from the monolayer to the bulk underdoped Bi2212, and can be collectively referred to as the BKT transition with interlayer coupling. superconducting phase transition in three dimensions (3D) where pairing and phase coherence happen at the same temperature following the Bardeen-Cooper-Schrieffer (BCS) theory 23 . This is the reason why in conventional 2D superconductors the BKT transition appeared at temperatures below bulk " . The question regarding cuprates is whether BKT physics could be universal beyond the 2D limit given their layered structure but finite interlayer coupling 24,25 . Earlier work on bulk crystals showed evidence of vortex excitation above " 14 , supporting a pre-formed pairing scenario 10,13 . Nevertheless, ubiquitous emergent electronic and spin orders are known to be on the same energy scale as Cooper pairing in underdoped cuprates [26][27][28][29] , which obscured the phase transition between the highly-debated pseudogap regime and the superconducting order 30,31 . Furthermore, the surprise finding of similar electronic structure and " in the monolayer and the bulk Bi2212 Scanning superconducting quantum interference device (sSQUID) 3-7 has high flux sensitivity and spatial resolution essential to study Pearl vortices and superfluid density of ultrathin Bi2212. This technique employs a nano-fabricated chip which integrated micron-sized pickup loops into a two-junction SQUID that converted the flux through the loop ( ) into a voltage signal 35 . The pickup loops of our nano-SQUID were in a gradiometric design so that flux due to uniform external field through both loops cancels out. Therefore, the flux signal we measured was strictly from the sample. In addition, we used mu-metal to shield the earth magnetic field and a homewound coil to compensate for any residual field so that the sample could be measured in a true zero-field environment. Flowing an alternating current ( & ) through the field coil, we obtained the real part of the AC susceptibility (c') by demodulating the in-phase component from the flux signal in the pickup loop. We thermally isolated our nano-SQUID from the sample so that the sample temperature could be independently raised up to 200 K without introducing additional noise on the nano-SQUID, which was kept at 4.6 K 7 . As can be seen below, such capability of highly sensitive susceptometry over a large temperature range in a well-controlled magnetic field was critical for the investigation of the phase-transition of cuprate high-temperature superconductors in the 2D limit. The bulk Bi2212 samples we started from throughout this study were optimally-doped single crystals. In the bulk form, the " was 88.2 K as determined by both volumetric magnetometry and sSQUID susceptometry under zero magnetic field (SOM). We mechanically exfoliated the crystals using the technique described earlier 1 to obtain thin flake samples of various thickness (Fig. 1a). The approach curves at 10 K could be well fitted with the model of a thin large diamagnetic disk (Fig. 1b) 36 , which gave Λ = 171 µm for the monolayer (SOM). Using = 1.5 nm of the monolayer, we obtained = 358 nm, comparable to that of the bulk Bi2212 with slight under-doping 37 . The deviation from optimal doping was due to loss of oxygen occurred after exfoliation and before an h-BN capping layer could completely cover it (SOM). 
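The conversion between the fitted Pearl length and the penetration depth used above follows the thin-film relation Λ = 2λ²/d; the short check below reproduces the quoted numbers. The relation is standard, but the script itself is our illustration, not part of the paper's fitting procedure.

# Thin-film (Pearl) screening length: Lambda = 2 * lambda^2 / d.
# Check against the numbers quoted above for the Bi2212 monolayer.
d_nm = 1.5            # monolayer thickness in nm
lambda_nm = 358.0     # in-plane penetration depth in nm

Lambda_nm = 2.0 * lambda_nm**2 / d_nm
print(f"Pearl length = {Lambda_nm / 1000.0:.0f} um")   # ~171 um, as quoted

# Inverse direction: infer lambda from a fitted Pearl length
def penetration_depth(Lambda_nm, d_nm):
    return (Lambda_nm * d_nm / 2.0) ** 0.5

print(f"lambda = {penetration_depth(171_000.0, 1.5):.0f} nm")   # ~358 nm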
This resulted in loss of superconductivity of the left corner of the sample and reduced " of the rest comparing with the bulk " (Fig. 1c). Since Λ was larger than the size of the monolayer, supercurrent was expected to be mostly along the edge of the sample. Reassuringly, there was no visible vortex in the interior of the monolayer under an out-of-plane field H = 0.17 G. The weak magnetic contrast at the edge, which had negative contrast relative to H just inside the boundary (Fig. 1d), suggested it was due to Meissner current. The spatial variation of both the magnetization and susceptibility became much less uniform as temperature increased. Even though c' (Fig. 1e) was qualitatively similar to its low temperature state, the Meissner current surrounding the whole sample shrank and separated into two loops at 40 K (Fig. 1f). Since the diamagnetism at 40 K was much weaker than that at 15 K (Fig. 1c), meaning a much larger Λ, the smaller and segregated flux feature could only result from formation of domains of different superconducting phases. The sporadic features in magnetometry, also present in the bulk (SOM), were due to vortex moving through the sample during scanning and were intrinsic to Bi2212 at this temperature regime likely due to its strongly layered structure 38 . The domains became more distinctive at 50 K in both c' and magnetometry images (Figs. 1g and h). The two triangular areas in the middle showed diminishing diamagnetism as they had a lower " = 52 K (defined by the temperature above which no diamagnetic feature could be observed). These two weakly superconducting domains disrupted the phase coherence between the left and right domains which had " = 64 K. The existence of such domains could prevent bulk probes from resolving phase-coherent processes within the domains. At even higher temperatures, susceptometry images changed more dramatically with the field (Fig. 2). The overall diamagnetic signal was weaker at higher fields than those of the lower ones because increased screening current costed free energy and reduced the superfluid density. Magnetometry did not show noticeable Meissner current (Fig. 2a) under a relatively small H = 0.32 G at 60 K, but the diamagnetic region shrank and a small area of paramagnetic region occurred on the top (Fig. 2b). At H = 0.53 G, the contrast enhanced a bit in magnetometry (Fig. 2c). The paramagnetic region on the top turned into diamagnetic, but another paramagnetic area occurred on the left side of the sample (Fig. 2d). As H increased further to 0.74 G, there was no qualitative change in the magnetometry image ( Fig. 2e) but the left side returned to be weakly diamagnetic again with developments of other paramagnetic areas in the middle (Fig. 2f). The magnetometry image at H = 0.95 G (Fig. 2g) was similar to the one at H = 0.74 G, but the susceptometry was different and the paramagnetic pattern seemed more random (Fig. 2h). The overall diamagnetic signal was weaker at higher fields than those of the lower ones. The weak and similar magnetometry signal ruled out that the paramagnetic signal in susceptometry was from cross-talk between the two channels. The reappearance of diamagnetism at the same location with increasing field suggested that the sample was still in a superconducting state despite the reduced superfluid density under these fields. 
Such paramagnetism in the superconducting state was reminiscent of the paramagnetic Meissner effect (PME, also called Wohlleben effect), which was observed several decades ago in granular Bi2212 bulk samples by cooling under small magnetic fields 39,40 . The original explanation, which relied on d-wave pairing forming p-junctions across grain boundaries 41 , was debated, as the PME may also occur in conventional mesoscopic superconductors 42 and surface states 43 of odd-frequency superconductors 44,45 . Since our Bi2212 sample was single-crystalline, the formation of pjunctions was unlikely regardless of the pairing symmetry. In order to investigate the origin of the PME in the monolayer sample, we measured susceptibility as a function of magnetic field and temperature (T) at a fixed location on the sample. We picked the middle point of the left region of the monolayer sample (as shown by the black dot in Figure 1g). The susceptibility obtained by sweeping H at various T (Figs. 3a) showed the most pronounced paramagnetism between 50 K and " = 64 K. Other than the spikes, the field sweep c' curves were typical of a type II superconductor: a flat diamagnetic bottom at low field which started to increase at the lower critical field ( '( ) around 0.5 G (Fig. 3b), which was expected from the penetration depth at this temperature. c' leveled off to values slightly below zero at higher fields, which were four orders of magnitude smaller than the upper critical field (> 4.6 at 61 K for " = 64 K). The PME appeared as several symmetric spikes at fields ) overlaying on the diamagnetic background in the field sweep curves (Fig. 3b), which slowly shifted with temperature (Fig. 3a). The spikes that were closest to zero field (labelled as '1') disappeared below 59 K. The higher order ones (second and third are labelled as '2' and '3', respectively) persisted to lower temperatures. When temperature was continuously varied rather than the field, the H-T diagram obtained from the 'field-cooling' cycles showed different peak amplitude but the same peak position in field ) (SOM). The ) was also independent of the modulation field frequency or amplitude if it was small, but they were strongly suppressed when the modulation amplitude was comparable or larger than the peak spacing in H (SOM). These ruled out resonance artifact in the AC susceptibility and provided further evidence that the oscillating PME was an intrinsic response of the Bi2212 monolayer under an external field. The temperature evolution of the PME peaks, in comparison with the diamagnetism at zero-field, showed its connection with BKT transition. The disappearance of the PME peaks was at the same temperature as the disappearance of diamagnetism at zero-field, i.e., " = 64 K. The ) of the lowest three peaks (Fig. 3c) did not go down to zero at " , suggesting robust phase coherence up to the transition. Converting the zero-field χ′( ) to 1/Λ( ) (Fig. 3c In order to investigate the sample size effect on the oscillating PME, we obtained χ′( ) curves at various horizontal positions x across the left domain (indicated by the arrow in Fig. 1g) of the monolayer sample at 60 K. The lowest three spikes in the H-T sweep (Fig. 3a) were still clearly distinguishable at all of the x positions except for the one at the left edge (Fig. 3d). All the spikes are shifting towards smaller H in a uniform fashion as x increased (Fig. 3e). This could be wellunderstood when we consider field screening from the nano-SQUID (Fig. 
1a): sample area on the right side of the pickup loop was completely shielded from the small external field we applied and only the area to the left and underneath the pickup loop was subject to H. Normally this would not lead to any observable effect as only the field on the sample directly underneath the pickup loop matters. Here, however, it was the total flux threading the sample rather than the field that determines the susceptibility. As the SQUID moved to larger x, the exposed area increased, and a smaller H was needed to maintain the same flux. This point could be best shown by multiplying ) and the exposed area to obtain total flux through the sample as a function of x ( Fig. 3f). (Since diamagnetism of the monolayer sample at this temperature region was quite weak, as evident from the magnetometry, Meissner screening from the sample could be ignored.) The lowest three peaks respectively corresponded to 1, 2 and 3 flux quanta through the left domain of the sample. The domain size, which was estimated from the area when the nano-SQUID pickup loop was at the right edge of the domain, was about 200 µm 2 , in agreement with the optical image. Using the ) ( ) relation of the three lowest order spikes (Fig. 3e), we can now identify the seemingly random paramagnetic features we observed in the susceptometry images at 60 K (Fig. 2). They were a result of single vortex penetration at H = 0.53 G (Fig. 2d), double at H = 0.74 G (Fig. 2f) and triple at H = 0.95 G (Fig. 2h), respectively. The oscillating PME with H was clearly not due to vortex melting 4,46,47 , in which case only a singular paramagnetic peak appeared at a large enough field to generate high vortex density for the melting transition 48 . The above observation rather showed that local magnetic susceptibility was dependent upon the global fluxoid of a domain. The scale of the domain size R ~ 20 µm over which the coherent oscillation occurred was a thousand times larger than the coherence length of Bi2212 (about 16 nm even at 63 K for " = 64 K using BCS theory). This ruled out the oscillating PME we observed was the same effect as seen in a conventional mesoscopic superconductor, which required sample size close to its coherence length. By the similar consideration of the disparity in scales, the oscillating PME in monolayer Bi2212 could also not be due to the Little-Parks effect because the change in critical temperature would be unnoticeable: zero-temperature coherence length. As a control experiment supporting the above two arguments, ultrathin NbSe2 flakes, which has very similar 1 and penetration depth as monolayer Bi2212, did not show any PME over a similar normalized temperature ( / " ) and field range (SOM). Indeed, vortex cores have diverging size through a conventional superconducting phase transition which suppresses phase coherence. The persistence of phase coherence even at " , manifested by the susceptibility oscillation, was a unique feature of a BKT transition, which required finite-sized phase singularities. Previous theoretical and numerical studies of a 2D XY model under weak frustration did find paramagnetic susceptibility 49 . But even before any modeling, the duality between phase and charge of a 2D superconductor allowed us to visualize vortices as Coulomb gas 8,9 to gain deeper intuition for the oscillation of PME. The ln( / ) repulsion between two vortices with the same vorticity distance apart is exactly the same as Coulomb repulsion in 2D electrodynamics. 
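Since the peak positions are interpreted through the total flux threading the exposed part of the domain, a one-line conversion from field and area to flux quanta is useful. The sketch below is only illustrative: the area value is the roughly 200 µm² domain size quoted above, and the exposed area in the experiment varies with the tip position.

PHI_0 = 2.067833848e-15   # flux quantum h/2e, in Wb (T*m^2)

def flux_quanta(B_gauss, area_um2):
    """Total flux B*A through an area, expressed in units of Phi_0."""
    B_tesla = B_gauss * 1e-4
    area_m2 = area_um2 * 1e-12
    return B_tesla * area_m2 / PHI_0

# Example: a ~0.1 G change of field over the ~200 um^2 domain quoted above
# corresponds to roughly one flux quantum threading the domain.
print(flux_quanta(0.1, 200.0))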
The magnetic field acts as a chemical potential for the Coulomb gas and thus the vorticity is equivalent to the vortex particle number. When this chemical potential exceeds the energy cost of a vortex, one vortex will be thermally excited. Moreover, adding an additional vortex to our sample of finite size must overcome the effective repulsion between the two vortices, which originates from redistribution of supercurrents when two vortices are present. Consequently, the second vortex will be excited only when the chemical potential reaches the amount proportional to the inter-vortex repulsion. Further increasing the magnetic field, the third vortex will be excited when the chemical potential overcomes the mutual repulsion between the three vortices. Therefore, upon increasing the magnetic field, vortices will enter one-by-one at specific fields. This is in direct analogy with the Coulomb blockade of single-electron transistors 50,51 that would exhibit spikes in tunneling conductance when gate voltage was tuned. Noting that the flux counterpart of the charge tunneling conductance is magnetic susceptibility, we can see that the observed oscillating PME spikes originated from the quantized nature of vortex. Mapping the 2D superconductor near the BKT transition to the Coulomb-gas model, we computed the paramagnetic response of the vortices in a circular disk. The high symmetry of the geometry simplified the computation although admittedly it did not match the actual shape of the sample. We found qualitative agreement on χ′( , ) between the modeling (with only two fitting parameters) and the experiment (SOM). The extracted first two PME peaks evolving with temperature also agreed with those of the experiment (Fig. 3c, blue and green). The main discrepancy occurred for the third peak where the modeling underestimated the peaks (Fig. 3c). This is likely caused by our crude approximation of the shape of the sample. Nevertheless, the modeling captured the most striking feature of the experiment: the ) 's were finite at " (Fig. 3c). Indeed, in the Coulomb-gas model, the peaks remain at finite fields because the superfluid density renormalizes both the repulsive interaction and the chemical potential of the vortices (SOM). This could only happen if there existed well-defined vortex cores and finite coherence length much smaller than sample size, which were essential for a vortex-driven phase transition. Besides the monolayer, the oscillating PME occurred generally in the ultrathin Bi2212 samples we studied. For example, a quadruple-layer sample ( " = 87 K) similarly exhibited paramagnetic peaks (Fig. 4a). The main difference from the monolayer was that the peaks at = ±0.2 G did not persist beyond 85 K, at which temperature the phase stiffness also exhibited a kink (Fig. 4b). The PME of a different domain on this sample with slightly lower " (SOM) and a quintuple-layer sample were similarly separated into two temperature regimes (Figs. 4c and d). The absence of oscillating PME in the temperature regime above the kink suggested it was not caused by disparate " 's of the surface layer and the buried ones. Instead, for a layered system with small interlayer coupling, the BKT transition should have two characteristic temperatures determined by 1/Λ( *+, 6 ) = independent. The quadruple-layer showed these two *+, 6 for = 1 and 4 below and above the kink, respectively (Fig. 4b). This strongly suggested the kink was a cross-over regime for the onset of interlayer coupling. 
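To make the "vortices enter one-by-one" picture above concrete, here is a deliberately crude toy energy balance: n vortices with a core cost, a pairwise repulsion, and a field-dependent chemical potential. The functional form and parameter values are ours, chosen only to illustrate the staircase behaviour; they are not fitted to the data and are not the Coulomb-gas model actually used in the paper.

import numpy as np

# Toy model: E(n, B) = n*eps_core + J*n*(n-1)/2 - mu*B*n.
# eps_core, J and mu are arbitrary illustrative numbers (not from the paper).
EPS_CORE = 1.0     # cost of a single vortex
J = 0.8            # effective pairwise repulsion between vortices
MU = 2.0           # coupling of the vorticity to the applied field

def optimal_n(B, n_max=10):
    energies = [n * EPS_CORE + J * n * (n - 1) / 2 - MU * B * n
                for n in range(n_max + 1)]
    return int(np.argmin(energies))

# Sweep the field and watch the equilibrium vortex number increase in steps;
# each step is where a paramagnetic susceptibility spike would sit.
for B in np.arange(0.0, 2.01, 0.1):
    print(f"B = {B:.1f}  ->  n = {optimal_n(B)}")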
The PME with much enhanced oscillation below the cross-over (Figs. 4a-d) further corroborated that the phase coherence between the layers was established in this regime. In even thicker samples, inhomogeneous domains in different layers were more prominent such that their contributions to PME were less in phase. This is clearly the case in a 20-nm sample where the PME peaks occurred at seemingly random fields (SOM). In a bulk sample, which was more than 200 nm thick, the domains were even more 'coarse-grained'. As a result, the PME peaks did not show any oscillation with field and only a small range of temperature close to " exhibited overall paramagnetic response (Fig. 4e). As a function of temperature, there was a PME peak right below " at finite fields (Fig. 4f, red), reminiscent of the original observation of PME in cuprates by bulk magnetometry 39,40 . The striking similarity suggested they shared the same origin (even though those observations were on poly-crystalline samples). The PME temperature-range grew with H because of the reduction of diamagnetic strength with field. Such PME was clearly absent from the bulk NbSe2 ( " = 6.6 K) control sample (SOM). Noise-like features in susceptibility of Bi2212 below " (Fig. 4e), which was absent above " , suggested strong phase-fluctuations accompanying vortex excitations in this temperature regime. The phase stiffness versus temperature curves of Bi2212 and NbSe2 were also markedly different (Fig. 4f). While the latter showed a smooth power-law rise below " typical of a BCS superconductor, Bi2212 showed a sharp jump which rose to 85% of its peak magnitude within 0.15% of " . Such a sharp jump in superfluid density at zero-field was exactly what a BKT transition entailed 54 . At finite fields, the PME peak slightly obscured the sharp rise in diamagnetic susceptibility. Notwithstanding, the PME per se was strong evidence that BKT physics was also responsible for the superconducting to normal transition of the bulk Bi2212. The BKT transition we observed in Bi2212 is likely applicable to cuprate superconductors in general since the family shares the layered structure with various degrees of interlayer coupling. Cooper pairs formed at a very high temperature enables vortex excitation to play a central role behind the universal scaling between the superfluid density and " 13,55 , the abnormal critical behavior of the specific heat of underdoped YBa2Cu3O6+x 30 and the Nernst effect in the pseudogap phase 14 . In conclusion, we have observed PME oscillations in monolayer and few-layer Bi2212, which evolved to be continuous with field in thick samples. Combined with the characteristic features in phase stiffness at precisely-determined zero-field, we provided strong evidence that the superconducting transitions in underdoped Bi2212 from 2D to the bulk were generalized BKT transitions with interlayer coupling. The spatial resolution of sSQUID to distinguish domains of different size and " and its sensitivity in susceptometry at very low field and high temperature were indispensable to these observations. Our technique and the observations showed that ultrathin Bi2212 was a promising material platform to understand phase fluctuations in the enigmatic pseudogap regime of the underdoped cuprates. Fig. 1g. a, χ′( , ) of the monolayer sample taken at the point shown in Fig. 1g. The data were obtained by sweeping the field at different T. Arrows indict the direction of the sweep. 
The three lowest-order paramagnetic peaks are labeled by the numbers '1', '2', and '3'. b, χ′(H) at various temperatures taken from a. c, Peak positions H_n(T) as a function of T extracted from a (left axis). The three lowest-order paramagnetic peaks are represented by blue, green and red symbols, respectively. The light open circles of similar colors were obtained from our Coulomb-gas simulation (see text and SOM). The inverse of the Pearl length, 1/Λ(T), was obtained from the diamagnetic susceptibility at zero field in a (right axis). The diamagnetic data in the range 60 K - 66 K (solid dots) were from a fine temperature sweep at zero field (SOM). The interception of the dashed straight line with 1/Λ(T) determined the BKT temperature (see text). d, χ′(H) curves obtained at 60 K at different displacements x of the nano-SQUID tip over the monolayer sample along the arrow shown in Fig. 1g. The curves with larger x were shifted downward proportionally. e, H_n(x) extracted from d. f, H_n(x) from e plotted in units of the flux quantum. This was obtained by integrating the field over the exposed area of the domain (see Fig. 1a and text). Fig. 4. a, χ′(H, T) of a quadruple-layer sample (T_c = 87 K). b, the phase stiffness 1/Λ(T) of the same sample (see text); the interception of the dashed lines with this curve determines the BKT temperature of the coupled layers (n = 1) and of the independent layers (n = 4), respectively. c, χ′(H, T) in a quintuple-layer sample. d, χ′(H) at two different temperatures obtained from c. The oscillating paramagnetic susceptibility in the multilayers separated into two temperature regimes demarcated by the horizontal dashed line, below which the layers were Josephson-coupled. e, χ′(H, T) of a bulk Bi2212 sample (see text). f, χ′(T) at zero field (orange squares) and at H = 25 G (red dots) taken from e. In comparison, a similar zero-field curve for a bulk NbSe2 sample (T_c = 6.6 K) was shown as the blue curve. Note that the 'noisy' diamagnetic signal in the Bi2212 bulk sample under finite H was absent above T_c and was therefore intrinsic to the sample. Both the paramagnetic peak at finite H and the sharp jump of the superfluid density at T_c suggested a BKT transition in the bulk Bi2212. Additional data on these samples are available in the SOM.
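The captions above describe reading off T_BKT from the crossing of the measured 1/Λ(T) curve with a straight criterion line. A minimal sketch of that extraction step is given below; the synthetic 1/Λ(T) data and the slope of the criterion line are placeholders, not values from the experiment.

```python
import numpy as np

# Sketch: locate T_BKT as the intersection of a measured 1/Lambda(T) curve with a
# straight criterion line 1/Lambda = c*T (cf. the dashed line in the figure caption).
# The data and the slope c below are synthetic placeholders, not the paper's values.

T = np.linspace(55.0, 70.0, 151)                      # temperature grid (K)
inv_Lambda = np.clip(0.8 * (66.0 - T), 0.0, None)     # fake 1/Pearl-length data (1/um)
c = 0.05                                              # hypothetical criterion-line slope (1/(um*K))

diff = inv_Lambda - c * T
idx = np.where(np.diff(np.sign(diff)) < 0)[0]         # sign change: curve crosses the line
if idx.size:
    i = idx[0]
    # linear interpolation between the two bracketing points
    t0, t1, d0, d1 = T[i], T[i + 1], diff[i], diff[i + 1]
    T_bkt = t0 - d0 * (t1 - t0) / (d1 - d0)
    print(f"estimated T_BKT ~ {T_bkt:.2f} K")
```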
5,691
2021-12-09T00:00:00.000
[ "Physics" ]
Full NLO QCD predictions for Higgs-pair production in the 2-Higgs-doublet model
After the discovery of the Higgs boson in 2012 at the CERN Large Hadron Collider (LHC), the study of its properties still leaves room for an extended Higgs sector with more than one Higgs boson. 2-Higgs doublet models (2HDMs) are well-motivated extensions of the Standard Model (SM) with five physical Higgs bosons: two CP-even states h and H, one CP-odd state A, and two charged states H±. In this letter, we present the calculation of the full next-to-leading order (NLO) QCD corrections to hH and AA production at the LHC in the 2HDM at small values of the ratio of the vacuum expectation values, tan β, including the exact top-mass dependence everywhere in the calculation. Using techniques applied in the NLO QCD SM Higgs-pair production calculation, we present results for the total cross section as well as for the Higgs-pair-mass distribution at the LHC. We also provide the top-quark scale and scheme uncertainties, which are found to be sizeable.
1 Introduction
2-Higgs doublet models (2HDMs) [1,2] are well-motivated extensions of the SM. They belong to the simplest Higgs-sector extensions of the SM that, taking into account all relevant theoretical and experimental constraints, are testable at the LHC. In their type II version they contain the Higgs sector of the Minimal Supersymmetric extension of the SM (MSSM) as a special case. Featuring five physical Higgs bosons after electroweak symmetry breaking (EWSB), they represent an ideal benchmark framework for the investigation of various possible new-physics effects to be expected at the LHC in multi-Higgs-boson sectors. The neutral Higgs-boson pairs of the 2HDM are dominantly produced via the loop-induced gluon-fusion process gg → φ1φ2, where φ1/2 denote scalar or pseudoscalar Higgs bosons of the 2HDM. Only for mixed scalar+pseudoscalar Higgs production does the Drell-Yan-type process qq̄ → Z* → A + h/H take over the dominant role in large regions of the parameter space [3]. The topic of our paper is the calculation of the full NLO QCD corrections to scalar Higgs-pair and pseudoscalar Higgs-pair production via gluon fusion within the 2HDM.
In the past the NLO QCD corrections to the gluon-fusion process gg → HH have been calculated within the SM and the MSSM in the heavy-top limit (HTL) [3].This calculation has been extended to the NNLO QCD corrections in the HTL [4][5][6].Quite recently, this level has been extended to the N 3 LO order in the HTL [7][8][9][10].On the other hand finite top mass effects beyond the HTL have turned out to be sizeable [11][12][13][14][15].The inclusion of the related uncertainties due to the scheme and scale dependence of the virtual top mass has been shown to be mandatory, since they dominate the intrinsic theoretical uncertainties [13][14][15].For BSM scenarios, the NLO QCD corrections to all production modes involving scalar and pseudoscalar Higgs bosons are known in the HTL [3], while partial results for the virtual corrections to pseudoscalar Higgs-pair production are known beyond NLO QCD within the HTL [16]. The paper is organised as follows.In Section 2 we introduce the 2HDM and the benchmark point we have selected to obtain our numerical results, then we give a short descrip-tion of the details of our calculation in Section 3. Our results for hH and AA production are presented in Section 4. The theoretical uncertainties are discussed in Section 5, in particular the top-quark scale and scheme uncertainties in Section 5.2.A short conclusion is given in Section 6. The 2-Higgs Doublet Model The 2HDM is obtained by extending the SM by a second Higgs doublet with the same hypercharge.We work within the 2HDM version with a softly broken Z 2 symmetry under which the two Higgs doublets Φ 1,2 behave as Φ 1 → −Φ 1 and Φ 2 → Φ 2 .In terms of the two SU(2) L Higgs doublets with hypercharge Y = +1 the most general scalar potential that is invariant under the SU(2) L ×U(1) Y gauge symmetry and that has a softly broken Z 2 symmetry is given by Working in the CP-conserving 2HDM, the three mass parameters, m 11 , m 22 and m 12 , and the five coupling parameters λ 1 -λ 5 are real.The discrete Z 2 symmetry (softly broken by the term proportional to m 2 12 ) has been introduced to ensure the absence of tree-level flavour-changing neutral currents (FCNC).Extending the Z 2 symmetry to the fermion sector, all families of same-charge fermions will be forced to couple to a single doublet so that tree-level FCNCs will be eliminated [2,17].This implies four different types of doublet couplings to the fermions that are listed in Table 1 together with the transformation properties of the fermions.The corresponding 2HDM types are named type I, type II, lepton-specific and flipped.The resulting couplings of the fermions normalised to the SM couplings can be found in [2].After EWSB, the Higgs doublets Φ i (i = 1, 2) can be expressed in terms of their vacuum expectation values (VEV) v i , the charged complex fields φ + i , and the real neutral CP- Table 1 Classification of the Yukawa types of the Z 2 symmetric 2HDM.2nd-4th columns: allowed coupling combinations of Higgs doublet and fermion types; last five columns: Z 2 assignments for the quark doublet Q, the up-type quark singlet u R , the down-type quark singlet d R , the lepton doublet L, and the lepton singlet l R . 
even and CP-odd fields ρ i and η i , respectively, as The mass matrices are obtained from the terms bilinear in the Higgs fields in the potential.Due to charge and CP conservation they decompose into 2 × 2 matrices M S , M P and M C for the neutral CP-even, neutral CP-odd and charged Higgs sector.They are diagonalised by the following orthogonal transformations This leads to the physical Higgs states, a neutral light CPeven, h, a neutral heavy CP-even, H, a neutral CP-odd, A, and two charged Higgs bosons, H ± .By definition, m h < m H .The massless pseudo-Nambu-Goldstone bosons G ± and G 0 are absorbed by the longitudinal components of the massive gauge bosons, the charged W ± and the Z boson, respectively.The rotation matrices are given in terms of the mixing angles ϑ = α and β , respectively, and read The mixing angle β is related to the two VEVs as The mixing angle α is given by where (M S ) i j (i, j = 1, 2) denote the matrix elements of the neutral CP-even scalar mass matrix M S .Introducing we obtain [18] in terms of the abbreviation λ 345 ≡ λ 3 + λ 4 + λ 5 (11) and using the short-hand notation s x ≡ sin x etc. In the minimum of the potential, the following conditions have to be fulfilled, where the brackets denote the vacuum expectation values.This results in the two equations Exploiting the minimum conditions of the potential, we use the following set of independent input parameters of the model, In this work we choose a benchmark point of the 2HDM type I, in which the couplings of the two Higgs doublets to the up-and down-type fermions are equal.The benchmark point of the 2HDM type I that we use in our numerical analysis is given by the following set of input parameters It fulfils all relevant theoretical and experimental constraints.For a description of the constraints, see Ref. [19]. Partonic leading order cross section As we work in the 2HDM type I, we are dominated by the top-quark loop contributions so that we neglect the bottomquark loops as well as light-quark loops.Note that while we work in the 2HDM type I, we could apply our approximation to the 2HDM (with natural flavour conservation) of any type as long as we work at low tan β values, as the top-quark Yukawa coupling is the same in all 2HDM types.In particular we could apply our approximation to the 2HDM type II and even to the MSSM as long as the squark contributions can be suppressed, which is the case for squark mass above 400 GeV [3].This is typically the case in current MSSM fits to data [20][21][22][23][24].The leading-order (LO) diagrams for hH and AA production, as depicted in Fig. 1 include triangle diagrams, involving a light and heavy CP-even Higgs h, H propagator coupled to the final-state Higgs bosons with various triple Higgs couplings, and box diagrams with two Yukawa couplings.Note, that we focus here on the production of a mixed CP-even and a pure CP-odd Higgs pair. The analytical results and the numerical method for LO and NLO QCD hh and HH production can be derived from the SM results [11-14, 25, 26] by simple adjustments of the involved Yukawa and trilinear Higgs self-couplings as well as the sum over Higgs-boson propagators.It should be noted that for larger Higgs masses, as e.g. for HH production, the top-mass effects and the associated mass and scheme uncertainties will be larger than for an SM Higgs mass of 125 GeV. We follow the conventions of Ref. 
[3] and decompose the cross section into scalar form factors after the application of two tensor projectors on the matrix elements. The partonic cross section σ̂(gg → φ1φ2), with φ1φ2 = hH or AA, can be written in terms of these form factors, where α_s(µ_R) is the strong coupling constant evaluated at the renormalisation scale µ_R, and the Mandelstam variables ŝ and t̂ are expressed through the scattering angle θ in the partonic c.m. system and the Higgs-boson masses m_φ1 and m_φ2, i.e. either m_h and m_H (for hH production) or m_A (for AA production). The variable m_φ1φ2 denotes the invariant Higgs-pair mass. The factor S is a symmetry factor, S = 1/2 for AA production and S = 1 for hH production. The Källén function is λ(x, y, z) = (x − y − z)² − 4yz, and the integration limits in t̂ follow from the phase-space boundaries cos θ = ±1. The coefficients C△^h/H contain the triple Higgs couplings λ_φ1φ2h/H and the reduced Yukawa couplings g_t^h/H, which are given by the 2HDM Yukawa-coupling modification w.r.t. the SM top-Yukawa coupling, as well as the CP-even Higgs-boson propagators. The coefficient C□ contains only the reduced Yukawa couplings to the final-state Higgs bosons for the various φ1,2. Fig. 1: Generic one-loop diagrams for LO Higgs-boson pair production via gluon fusion, gg → φ1φ2, in the 2HDM type I. The contribution from triple Higgs couplings is marked in red. Note that φ1φ2 = hH or AA. In the heavy-top-limit (HTL) approximation, the form factors reduce to simple constants, with a = −1 for hH production and a = 1 for AA production. The full m_t-dependence at LO can be found in Refs. [25,26].
Hadronic cross section
The structure of the NLO QCD corrections is very similar to the SM case presented in Refs. [13,14]. They include two-loop virtual corrections to the triangle and box diagrams, one-particle-reducible diagrams involving two triangle diagrams connected by a virtual gluon exchange, and one-loop real corrections involving an extra parton in the final state. The partonic contributions are then convolved with the parton distribution functions (PDFs) f_i evaluated at the factorisation scale µ_F in order to obtain the hadronic cross section. The parton luminosities dL_ij/dτ are defined with τ = Q²/s, s being the hadronic c.m. energy squared, so that the NLO hadronic differential cross section with respect to Q² can be written as the sum of the LO contribution and the virtual and real correction contributions for ij = gg, qg (summed over the quarks q) and qq̄ (summed over q), with z = Q²/(τ s). We include five external massless quark flavours. The coefficients C_virt of the virtual and C_ij of the real corrections in the HTL have been obtained in Ref. [3], where C△△^∞,hH/AA denotes the contribution of the one-particle-reducible diagrams in the HTL, written in terms of the transverse momentum. The functions P_gg(z) and P_gq(z) are the related Altarelli-Parisi splitting kernels [27], with N_F = 5 in our calculation. The cross section σ_LO(Q²) is calculated in the full theory, i.e. taking into account the finite top-quark mass at the integrand level. The total cross section can be obtained after a final integration over Q between the threshold m_φ1 + m_φ2 and the hadronic c.m. energy √s.
Virtual corrections
Three generic types of diagrams contribute to the virtual corrections, cf. Fig.
2: (i) two-loop triangle diagrams involving the light and heavy scalar Higgs bosons in the s-channel propagators, (ii) one-particle reducible diagrams emerging from two triangular top loops coupling to a single external Higgs boson that are connected by t-channel gluon exchange and (iii) two-loop box diagrams.The diagrams of class (i) consist of off-shell single scalar Higgs production dressed with the trilinear Higgs vertex.The relative QCD corrections coincide with the NLO QCD corrections to scalar Higgs boson production with mass Q and can thus be adopted from the single-Higgs calculation [28][29][30][31][32].The diagrams of class (ii) define the coefficients c 1 , c 2 in Eq. ( 28).The analytical expressions of the coefficients c 1 , c 2 of the one-particle reducible contributions can be obtained from the corresponding Higgs decay widths of φ → Zγ (φ = h, H, A) [33][34][35] with the corresponding adjustments of the involved couplings. The full top-mass dependence of c 1 , c 2 is given by 2 with τ φ = 4m 2 t /m 2 φ (φ = h, H, A) and λ t = 4m 2 t /t.The generic loop functions are given by In the case of different pseudoscalar Higgs bosons as in more extended Higgs sectors, the coefficient reads These expressions approach the HTL values given in Eq. ( 28). The involved part of our calculation is the two-loop box diagrams of type (iii).We have used the same method as in Refs.[13][14][15], i.e. we have performed a Feynman parametrisation, end-point subtractions and the subtraction of special infrared terms to allow for a clean separation of the ultraviolet and infrared singularities.For the stabilisation of the 6-dimensional Feynman integrals we have applied integrations by parts to reduce the powers of the singular denominators and performed the integrations with a small imaginary part of the virtual top mass.In order to arrive at the narrowwidth approximation for the virtual top mass, we have used Richardson extrapolations [36] along the lines of our SM calculation of Refs.[13,14].However, here we needed to extend the calculation for scalar Higgs-boson pairs to the case of different final-state Higgs masses.For the calculation of pseudoscalar Higgs-boson pairs, we have used a naive anti-commuting γ 5 matrix at the pseudoscalar vertices, since only even numbers of γ 5 contribute to the (C P-even) virtual corrections diagram by diagram.For this case, we have used the same projectors as in the double-scalar case, since the contributing tensor structures are the same.Since each individual two-loop box diagram is singular for the t integration, we have applied a technical cut at the integration boundaries and included a suitable substitution to stabilise this integration for each diagram.We have checked explicitly that our results do not depend on this technical cut. 
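The narrow-width limit mentioned above is reached by extrapolating results obtained at a sequence of small imaginary parts of the virtual top mass. Below is a generic Richardson-extrapolation sketch of that final step, applied to a fake observable with a known limit; it is not the authors' two-loop integrand or their actual regulator values.

```python
def richardson_limit(values):
    """values[k] = A(h0 / 2**k); assumes an error expansion in integer powers of h."""
    row = list(values)
    m = 1
    while len(row) > 1:
        # eliminate the leading h**m error term using successive (coarser, finer) pairs
        row = [(2**m * row[k] - row[k - 1]) / (2**m - 1) for k in range(1, len(row))]
        m += 1
    return row[0]

# fake observable: A(h) = 1 + 0.7*h + 0.3*h**2 has the exact h -> 0 limit 1.0
h0 = 0.4
samples = [1.0 + 0.7 * (h0 / 2**k) + 0.3 * (h0 / 2**k) ** 2 for k in range(4)]
print("value at smallest h :", samples[-1])
print("Richardson estimate :", richardson_limit(samples))
```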
The top mass has been renormalised in both the on-shell scheme and in the MS scheme.The on-shell scheme predictions are our default central predictions while the MS scheme predictions are used to calculate the top-quark scale and scheme uncertainties, see below.The strong coupling constant is renormalised in the MS scheme with 5 active flavours.We have obtained finite results for the virtual corrections by subtracting the HTL results as in the SM case so that we end up effectively calculating the NLO mass ef-fects only.To obtain the final hadronic differential cross section, we have added back the HTL results calculated with HPAIR 3 .The calculation of each two-loop box diagram has been performed independently at least twice with different Feynman parametrisations and we have obtained full agreement within the numerical precision. Real corrections The calculation of the finite mass effects in the real corrections, ∆ σ mass i j = ∆ σ i j − ∆ σ HTL i j , follows closely the method described in Refs.[13,14] for the SM case.The HTL contributions are calculated again with the program HPAIR while the partonic mass effects are obtained as where the exact four-momenta p i are mapped onto LO subspace four-momenta pi following Ref.[37]. The HTL matrix elements have been calculated analytically, while the full one-loop matrix elements have been obtained by two different methods.They have been generated using FeynArts [38] and FormCalc [39] on the one hand, and obtained analytically using FeynCalc [40] on the other hand.The scalar one-loop integrals have then been calculated numerically using the library COLLIER 1.2 [41].The phase-space has also been parameterised in two different ways.The two methods agree within the numerical precision. Numerical results We present our numerical results at a hadron pp collider for c.m. energies of √ s = 13 and 14 TeV (LHC energies), √ s = 27 TeV (high-energy variant of the LHC, the HE-LHC), and √ s = 100 TeV (FCC energy).We use m t = 172.5 GeV for the on-shell top-quark mass.We have performed the calculation using the NLO PDF set PDF4LHC15 [42] as implemented in the LHAPDF-6 library [43].Our central scale choice is µ R = µ F = µ 0 = Q/2, and α s (M 2 Z ) is set according to the chosen PDF set, with an NLO running in the fiveflavour scheme.As done also in the SM calculation [13,14], we have used the narrow-width approximation for the top quark.We use the 2HDM benchmark scenario given in Eq. (16). We have calculated a grid of Q-values from Q = 259.907(269.422)GeV, for hH production (for AA production), to Q = 1500 GeV, so that we obtain the invariant Higgs-pairmass distributions depicted in Fig. 3 for hH production (left) 3 The program can be downloaded at http://tiger.web.psi.ch/hpair/.and AA production (right), for the LHC at 13 TeV.The results at 14 TeV are shown in Fig. 4, while the results for the HE-LHC are shown in Fig. 5 and the results for the FCC in Fig. 6.The full NLO QCD results are displayed in red, including the numerical errors as well as a band indicating the renormalisation and factorisation scale uncertainties obtained with a standard seven-point variation around our central scale choice (cf.Subsec.5.1).The blue line shows the (Born-improved) HTL prediction, while the yellow line displays the HTL supplemented by the full mass effects in the real corrections only and the green line (including numerical errors) the HTL supplemented by the full mass effects in the virtual corrections only. 
The mass effects in the real corrections increase with increasing c.m. energy both for hH and AA final states.In CPeven hH production, they reach a negative peak at around Q = 400 GeV and are of the order of −10% at 13 TeV (of the order of −20% at 100 TeV) before mildly increasing up to around -6% at Q = 1500 GeV at 13 TeV (−14% at 100 TeV).In CP-odd AA production, the behaviour of the mass effects in the real corrections is slightly different.There is also a negative peak around Q = 400 GeV, of the order of −8% at 13 TeV (−14% at 100 TeV), but then it mildly increases before reaching a plateau around Q = 1000 GeV.The mass effects are then practically constant, about −6% at 13 TeV (−11% at 100 TeV).The mass effects in the virtual corrections are negative at large Q values for both hH and AA final states, as expected by the restoration of partial-wave unitarity in the high-energy limit.Combined with the mass effects in the real corrections, the full mass effects reach about −30% (−40%) at Q ≃ 1500 GeV for hH production, at lower c.m. energies (at 100 TeV), while the mass effects in the virtual corrections are smaller for AA production, reaching about −15% (−20% for Q ≃ 1500 GeV, at lower c.m. energies (at 100 TeV).This is the same behaviour that is observed in the SM case [11][12][13][14], albeit with a smaller correction for AA production.Note that the mild increase in the mass effects in the virtual corrections at large Q values for AA production can be attributed to numerical fluctuations.The most striking difference between CP-even and CP-odd pair production can be seen around the t t threshold and below.There is a distortion of the shape that is distinctly different from the SM case and also between hH and AA productions, hence discriminating between the two production channels. We have also obtained the total cross sections from the differential distributions, using a numerical integration of Q.For Q between 300 GeV and 1500 GeV we have used the trapezoidal method supplemented by a Richardson extrapolation [36] while we use a Simpson's 3/8 rule [44] for Q between 270 GeV and 300 GeV and a simple trapezoid for Q between the threshold and 270 GeV.For the FCC c.m. en- ergy of 100 TeV we have also included three new Q bins between 1500 GeV and 2500 GeV and add their contribution using a Simpson's rule.Including the numerical errors on the final decimal number, we have obtained the following results for the full NLO QCD total cross sections for hH and AA production in our 2HDM benchmark scenario, using PDF4LHC15 PDF sets, The corresponding results in the (Born-improved) HTL approximation, obtained using the same numerical integration of the Q grid, are The comparison of Eq. ( 33) with Eq. ( 34) gives a ≃ −12% top-mass effect correction at NLO on the total cross section for hH production at LHC energies (≃ −21% at the 100 TeV FCC), and a ≃ −5% correction for AA production at LHC energies (≃ −11% at the 100 TeV FCC).While the mass effects are of similar size as the SM Higgs-pair production for CP-even Higgs bosons, they are smaller for CP-odd Higgs pair production. 
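To make the piecewise Q-integration concrete, here is a minimal sketch that combines a Simpson's 3/8 segment near threshold with a trapezoidal rule over the bulk of the spectrum, in the spirit of the procedure described above (the paper additionally applies a Richardson improvement and a plain trapezoid right at threshold). The grid boundaries and the toy dσ/dQ are placeholders, not the paper's values.

```python
import numpy as np

def simpson_38(x, y):
    """Simpson's 3/8 rule on exactly four equally spaced points."""
    h = x[1] - x[0]
    return 3 * h / 8 * (y[0] + 3 * y[1] + 3 * y[2] + y[3])

def dsigma_dQ(Q):
    # toy invariant-mass spectrum [fb/GeV], stand-in for the tabulated NLO distribution
    return 1e-3 * (Q - 260.0) * np.exp(-(Q - 260.0) / 150.0)

# near-threshold piece: 270-300 GeV with Simpson's 3/8 on 4 points
Q_thr = np.linspace(270.0, 300.0, 4)
sigma_thr = simpson_38(Q_thr, dsigma_dQ(Q_thr))

# bulk piece: 300-1500 GeV with a composite trapezoidal rule
Q_bulk = np.linspace(300.0, 1500.0, 121)
y_bulk = dsigma_dQ(Q_bulk)
sigma_bulk = np.sum(0.5 * (y_bulk[1:] + y_bulk[:-1]) * np.diff(Q_bulk))

print(f"total cross section (toy) ~ {sigma_thr + sigma_bulk:.3f} fb")
```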
Factorisation and renormalisation scale uncertainties
We have estimated the factorisation and renormalisation scale uncertainties using the standard seven-point method. We have varied both the factorisation scale µ_F and the renormalisation scale µ_R around our central scale choice µ_R = µ_F = Q/2 by a factor of two up and down, while avoiding the choices leading to a ratio µ_R/µ_F either greater than two or smaller than one-half. The maximal and minimal cross sections obtained by this procedure are then compared to the nominal cross section obtained with the central scale choice. The scale uncertainties are similar to what is obtained for SM Higgs-pair production [11-14]. They are slightly larger in AA production than in hH production. We have also examined the scale dependence of the differential cross section at 13 TeV. The minimal and maximal cross sections against the central OS prediction are used to calculate the top-quark scale and scheme uncertainties. This procedure has already been used for SM predictions, and it gives rise to significant uncertainties that are comparable to or even larger than the usual factorisation and renormalisation scale uncertainties [13-15]. We compare these predictions (the OS prediction and the three MS predictions) in Fig. 7 at the 13 TeV LHC, in Fig. 8 at the 14 TeV LHC, in Fig. 9 at the 27 TeV HE-LHC, and in Fig. 10 at the 100 TeV FCC. The red lines display the OS full NLO QCD Higgs-pair invariant-mass distributions, the blue lines the MS full NLO QCD predictions with m_t(m_t), the yellow lines the MS full NLO QCD predictions with m_t(Q/4), and the green lines the MS full NLO QCD predictions with m_t(Q). For Q values above Q = 400 GeV, the MS prediction with µ_t = Q always leads to the smallest distribution, while the maximum at large Q values is given by the OS prediction. The lower panels in each figure display the ratios of the various predictions to our central OS prediction. As in the SM case, we see large deviations at large Q values, ≃ −50% at Q = 1500 GeV for all c.m. energies. We have also obtained the uncertainties on dσ(gg → hH)/dQ at 13 TeV for selected Q values using PDF4LHC15 parton densities. As already seen in the SM case, the top-quark scale and scheme uncertainties turn out to be significant, as large as or even larger than the factorisation and renormalisation scale uncertainties. For Q > 400 GeV, the maximum cross section is always the OS prediction. From the differential distributions, we can obtain the top-quark scale and scheme uncertainties on the total cross section. We adopt the envelope for each Q-bin individually to build up maximal and minimal differential distributions, which are fitted and then numerically integrated over Q. The resulting top-quark scale and scheme uncertainties for the CP-even hH total cross section are sizeable and should be included in an uncertainty analysis of the 2HDM Higgs-pair production cross sections according to the procedure of Ref. [15].
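A minimal sketch of the seven-point scale variation described above is given below; sigma_toy is a stand-in for a full NLO evaluation at given (µ_R, µ_F), and its logarithmic scale dependence is purely illustrative.

```python
import itertools
import numpy as np

def sigma_toy(muR, muF, Q=400.0):
    # hypothetical smooth scale dependence, for illustration only [fb]
    return 10.0 * (1.0 + 0.08 * np.log(2.0 * muR / Q) - 0.03 * np.log(2.0 * muF / Q))

Q = 400.0
mu0 = Q / 2.0                      # central scale muR = muF = Q/2
factors = [0.5, 1.0, 2.0]
points = [(kr, kf) for kr, kf in itertools.product(factors, factors)
          if 0.5 <= kr / kf <= 2.0]    # drops (2, 0.5) and (0.5, 2): seven points remain

values = [sigma_toy(kr * mu0, kf * mu0, Q) for kr, kf in points]
central = sigma_toy(mu0, mu0, Q)
print(f"central = {central:.3f} fb, "
      f"scale envelope = +{max(values) - central:.3f} / -{central - min(values):.3f} fb")
```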
Conclusions In this work, we have calculated the full NLO QCD corrections to mixed scalar and pure pseudoscalar Higgs-boson pair production via gluon fusion gg → hH, AA within the 2HDM type I, working in our benchmark scenario that is not excluded at the LHC.We have integrated the two-loop box diagrams numerically by performing end-point and infrared subtractions of the contributing Feynman integrals.A numerical stabilisation across the virtual thresholds has been achieved by integration by parts of the integrand to reduce the power of the problematic denominators of the Feynman integrals.The results of the triangle diagrams, involving s-channel scalar Higgs propagators and the corresponding trilinear Higgs couplings, have been adopted from the single-Higgs case.The one-particle reducible contributions emerging from either two single scalar or pseudoscalar Higgs couplings to gluons can be derived from the known results for h, H, A → Zγ with appropriate replacements of the contributing couplings and masses.After renormalising the top mass and the strong coupling, we have subtracted the (Born-improved) HTL to obtain the pure virtual NLO top-mass effects.The real corrections have been computed by generating the full one-loop matrix elements with automatic tools.These have then been connected to suitable subtraction matrix elements in the HTL for the radiation part, but keeping the full LO top-mass dependence.This could be achieved by suitably projected 4-momenta inside the LO sub-matrix elements.This yields the pure NLO top-mass effects of the real corrections.Adding both subtracted virtual and real corrections, we obtain the full NLO QCD top-mass effects that have then been added to the (Born-improved) HTL results of Ref. [3] by using the code Hpair.Very similar to the corresponding SM calculation of Refs.[11][12][13][14][15], we find NLO top-mass effects of about 15-25% (depending on the collider energy) for the total cross sections if the top mass is defined as the top pole mass.For the invariant Higgs-pair mass distribution, the NLO top-mass effects can reach a level 30-40% for large invariant mass values.The larger the hadronic collider energy, the larger NLO top-mass effects emerge.The renormalisation and factorisation scale dependence induces uncertainties at the level of 10-15% for scalar Higgs pairs and 12-17% for pseudoscalar Higgs pairs at NLO, i.e. similar to the SM case.We have studied the additional theoretical uncertainties originating from the scale and scheme choice of the virtual top mass and obtained additional uncertainties of about 5-15% for scalar and about 10% for pseudoscalar Higgs-pair production that are significant and should be included in future Higgs-pair analyses.These uncertainties are larger for distributions at large invariant Higgs-pair masses. Fig. 3 Fig. 4 Fig. 5 Fig.3Invariant Higgs-pair-mass distributions for Higgs boson pair production via gluon fusion at the 13 TeV LHC as a function of Q using the PDF4LHC15 PDF set, in the 2HDM type I. 
Left: CP-even hH production. Right: CP-odd AA production. In both panels, the Born-improved HTL results (in blue), the HTL results including the full real corrections (in yellow), the HTL results including the full virtual corrections (in green, including the numerical error), and the full NLO QCD results (in red, including the numerical error) are depicted. The inserts below display the ratio to the NLO HTL result for the different calculations. The red band indicates the renormalisation and factorisation scale uncertainties for the results including the full NLO QCD corrections. Fig. 4: Same as Fig. 3 but for √s = 14 TeV.
Fig. 7: Higgs-pair invariant-mass distribution at the 13 TeV LHC with different scale and scheme choices for the top-quark mass, in the 2HDM type I. Left: CP-even hH production. Right: CP-odd AA production. The lower panels display the ratio to the default OS prediction. (Panel legends: OS scheme with m_t = 172.5 GeV; MS scheme with m_t(m_t), m_t(Q/4) and m_t(Q). Benchmark masses shown in the plot labels: m_h = 125.09 GeV, m_H = 134.817 GeV, m_A = 134.711 GeV.) Fig. 8: Same as Fig. 7 but for √s = 14 TeV. Fig. 9: Same as Fig. 7 but for √s = 27 TeV. Fig. 10: Same as Fig. 7 but for √s = 100 TeV.
Top-quark scale and scheme uncertainties
The calculation of the NLO QCD corrections has been performed in two different schemes for the renormalisation of the top-quark mass. Our central predictions use the on-shell (OS) scheme with a mass m_t = 172.5 GeV both in the Yukawa couplings and in the loop propagators. The MS scheme can instead be used, with an appropriate choice of the top-quark mass counterterm. On top of this scheme choice, there is also a scale choice for the renormalisation of the top-quark mass, m_t(µ_t). To obtain the top-quark scale and scheme uncertainties, we have compared three MS predictions to our central OS prediction, for µ_t = Q/4, Q, and µ_t set to the MS top mass itself, m_t(m_t) = 163.02 GeV for our choice of the OS top-quark mass value, obtained with an N3LO evolution and conversion of the pole mass into the MS mass m_t(m_t).
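For orientation, the leading-order (one-loop) pole-to-MS-bar conversion already accounts for most of the shift from 172.5 GeV to the quoted m_t(m_t) = 163.02 GeV; the remaining few GeV come from the higher-order terms of the N3LO conversion used here. The α_s value in the sketch below is an assumed approximation, not taken from the paper.

```python
import math

# Rough one-loop sketch of the pole -> MS-bar top-mass conversion,
#   m_MSbar(m) ~ m_pole * (1 - 4*alpha_s/(3*pi)),
# just to illustrate the size of the shift; the full N3LO conversion gives 163.02 GeV.

m_pole = 172.5          # GeV, on-shell top mass used in the calculation
alpha_s = 0.108         # approximate alpha_s(m_t), assumed value

m_msbar_1loop = m_pole * (1.0 - 4.0 * alpha_s / (3.0 * math.pi))
print(f"one-loop estimate: m_t(m_t) ~ {m_msbar_1loop:.1f} GeV (N3LO value quoted: 163.02 GeV)")
```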
7,639.8
2023-03-09T00:00:00.000
[ "Physics" ]